
Best Practices in Machine Learning Infrastructure


Developing processes for integrating machine learning within an organization’s existing computational infrastructure remains a challenge for which robust industry standards do not yet exist. But companies are increasingly realizing that the development of an infrastructure that supports the seamless training, testing, and deployment of models at enterprise scale is as important to long-term viability as the models themselves. 

Small companies, however, struggle to compete with large organizations that can pour resources into the specialized teams and internal tool development often necessary to produce robust machine learning pipelines.

Luckily, there are some universal best practices for achieving successful machine learning model rollout for a company of any size and means. 

The Typical Software Development Workflow

Although DevOps is a relatively new subfield of software development, accepted procedures have already begun to arise. A typical software development workflow usually looks something like this:

Software Development Workflow: Develop > Build > Test > Deploy

This is relatively straightforward and works quite well as a standard benchmark for the software development process. However, the multidisciplinary nature of machine learning introduces a unique set of challenges that traditional software development procedures weren’t designed to address. 

Machine Learning Infrastructure Development

If you were to visualize the process of creating a machine learning model from conception to production, it might have multiple tracks and look something like this:

Machine Learning Development Life Cycle

Data Ingestion

It all starts with data.

Even more important to a machine learning workflow’s success than the model itself is the quality of the data it ingests. For this reason, organizations that understand the importance of high-quality data put an incredible amount of effort into architecting their data platforms. First and foremost, they invest in scalable storage solutions, whether in the cloud or in local databases. Popular options include Azure Blob Storage, Amazon S3, DynamoDB, Cassandra, and Hadoop.

Often, finding data that conforms well to a given machine learning problem is difficult. Sometimes datasets exist but are not commercially licensed; in that case, companies need to establish their own data curation pipelines, whether by soliciting data through customer outreach or through a third-party service.

Once data has been cleaned, visualized, and selected for training, it needs to be transformed into a numerical representation so that it can be used as input for a model. This process is called vectorization. The process of selecting which aspects of the dataset are important for training is called featurization. While featurization is more of an art than a science, many machine learning tasks have associated featurization methods that are commonly used in practice.
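For instance, here is a minimal sketch of text vectorization using TF-IDF features from scikit-learn (one common choice of library; the toy corpus is purely for illustration):

```python
# A minimal vectorization sketch using scikit-learn's TfidfVectorizer.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat walked down the street",
    "the dog slept on the porch",
]

# Fit a TF-IDF featurizer on the corpus and transform it into a
# numerical matrix (one row per document, one column per term).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
print(X.shape)  # (2, number_of_unique_terms)
```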

Since common featurizations exist and generating these features for a given dataset takes time, it behooves organizations to implement their own feature stores as part of their machine learning pipelines. Simply put, a feature store is just a common library of featurizations that can be applied to data of a given type. 

Having this library accessible across teams allows practitioners to set up their models in standardized ways, thus aiding reproducibility and sharing between groups.
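A full-fledged feature store is a substantial piece of infrastructure, but the core idea can be sketched in a few lines. The registry below, with its hypothetical feature names and functions, is a toy illustration rather than a production design:

```python
# A toy sketch of a shared "feature store": a registry mapping feature
# names to featurization functions that any team can reuse.
from sklearn.feature_extraction.text import TfidfVectorizer

FEATURE_STORE = {}

def register(name):
    """Decorator that adds a featurization function to the shared registry."""
    def wrapper(fn):
        FEATURE_STORE[name] = fn
        return fn
    return wrapper

@register("text/tfidf")
def tfidf_features(corpus):
    return TfidfVectorizer().fit_transform(corpus)

@register("text/lengths")
def length_features(corpus):
    return [[len(doc)] for doc in corpus]

# Any team can now look up a standard featurization by name:
X = FEATURE_STORE["text/tfidf"](["some example document"])
```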

Model Selection

Current guides to machine learning tend to focus on standard algorithms and model types and how they can best be applied to solve a given business problem. 

Selecting the type of model to use when confronted with a business problem can often be a laborious task. Practitioners tend to make a choice informed by the existing literature and their first-hand experience about which models they’d like to try first. 

There are some general rules of thumb that help guide this process. For example, Convolutional Neural Networks tend to perform quite well on image recognition and text classification, LSTMs and GRUs are among the go-to choices for sequence prediction and language modeling, and encoder-decoder architectures excel on translation tasks.

After a model has been selected, the practitioner must then decide which framework to use to implement it. The interoperability of different frameworks has improved greatly in recent years thanks to universal model file formats such as the Open Neural Network eXchange (ONNX), which allow models trained in one library to be exported for use in another.
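As a concrete illustration, here is a minimal sketch of exporting a model to ONNX, assuming PyTorch as the source framework (the toy model and file name are placeholders):

```python
# A minimal sketch of exporting a PyTorch model to ONNX so it can be
# loaded by another runtime or framework.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)  # example input that fixes the tensor shape
torch.onnx.export(model, dummy_input, "model.onnx")
```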

What’s more, the advent of machine learning compilers such as Intel’s nGraph, Facebook’s Glow, and the University of Washington’s TVM promises the holy grail of specifying a model once, in a universal language of sorts, and having it compiled to seamlessly target a vast array of platforms and hardware architectures.

Model Training

Model training constitutes one of the most time-consuming and labor-intensive stages in any machine learning workflow. What’s more, the hardware and infrastructure used to train models depend greatly on the number of parameters in the model, the size of the dataset, the optimization method used, and other considerations.

In order to automate the quest for optimal hyperparameter settings, machine learning engineers often perform what’s called a grid search or hyperparameter search. This involves a sweep across parameter space that seeks to maximize some score function, often cross-validation accuracy. 
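Here is a minimal grid-search sketch using scikit-learn’s GridSearchCV; the model, parameter grid, and dataset are illustrative choices, not recommendations:

```python
# A minimal grid-search sketch: sweep over a small hyperparameter grid,
# scoring each setting by cross-validation accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print(search.best_params_, search.best_score_)
```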

Even more advanced methods exist that use Bayesian optimization or reinforcement learning to tune hyperparameters. The field has also recently seen a surge in automated machine learning (AutoML) tools, which act as black boxes that select a semi-optimal model and hyperparameter configuration.

After a model is trained, it should be evaluated on performance metrics such as cross-validation accuracy, precision, recall, F1 score, and AUC. This information informs either further training of the same model or the next iteration of the model selection process. Like all other metrics, these should be logged in a database for future use.
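As a sketch, these metrics might be computed with scikit-learn as follows (the labels and scores below are toy values):

```python
# A minimal evaluation sketch: compute the standard metrics named above
# from a model's predictions on a held-out test set.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

y_true = [0, 1, 1, 0, 1]               # held-out labels (toy values)
y_pred = [0, 1, 0, 0, 1]               # model's hard predictions
y_scores = [0.2, 0.9, 0.4, 0.1, 0.8]   # model's predicted probabilities

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_scores))
```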

Visualization

Model visualization can be integrated at any point in the machine learning pipeline, but proves especially valuable at the training and testing stages. As discussed, appropriate metrics should be visualized after each stage in the training process to ensure that the training procedure is tending towards convergence. 

Many machine learning libraries are packaged with tools that allow users to debug and investigate each step in the training process. For example, TensorFlow comes bundled with TensorBoard, a utility that allows users to apply metrics to their model, view these quantities as a function of time as the model trains, and even view each node in a neural network’s computational graph.
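A minimal sketch of wiring TensorBoard into a Keras training run might look like this (the model architecture and log directory are arbitrary choices, and the training data is assumed):

```python
# A minimal sketch of logging a Keras training run to TensorBoard;
# metrics logged here can then be viewed with `tensorboard --logdir logs`.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
# model.fit(X_train, y_train, epochs=10, callbacks=[tb])  # X_train/y_train assumed
```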

Model Testing

Once a model has been trained, but before deployment, it should be thoroughly tested. This is often done as part of a CI/CD pipeline. Each model should be subjected to both qualitative and quantitative unit tests. Many training datasets have corresponding test sets which consist of hand-labeled examples against which the model’s performance can be measured. If a test set does not yet exist for a given dataset, it can often be beneficial for a team to curate one. 

The model should also be applied to out-of-domain examples drawn from a distribution other than the one on which the model was trained. Often, a qualitative check of the model’s performance, obtained by cross-referencing its predictions against what one would intuitively expect, can serve as a guide to whether the model is working as hoped.

For example, if you trained a model for text classification, you might give it the sentence “the cat walked jauntily down the street, flaunting its shiny coat” and ensure that it categorizes this as “animals” or “sass.”
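Such a check can be codified as a unit test and run as part of the CI/CD pipeline. The sketch below assumes a hypothetical load_model() helper that returns a trained text classifier with a predict() method:

```python
# A sketch of a qualitative unit test for a text classifier.
def test_sassy_cat_sentence_is_classified_sensibly():
    model = load_model()  # hypothetical helper returning a trained classifier
    label = model.predict(
        "the cat walked jauntily down the street, flaunting its shiny coat"
    )
    # The prediction should land in one of the intuitively expected classes.
    assert label in {"animals", "sass"}
```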

Deployment

After a model has been trained and tested, it needs to be deployed in production. Current practices often push for deploying models as microservices: compartmentalized packages of code that can be queried and interacted with via API calls.
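As an illustration, a minimal model microservice might look like the following Flask sketch, where load_model() is a hypothetical helper standing in for however your model is loaded:

```python
# A minimal sketch of a model microservice: a single HTTP endpoint that
# accepts JSON input and returns the model's prediction.
from flask import Flask, jsonify, request

app = Flask(__name__)
model = load_model()  # hypothetical: returns a trained model with .predict()

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()               # e.g. {"text": "..."}
    prediction = model.predict(payload["text"])
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```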

Successful deployment often requires building utilities and software that data scientists can use to package their code and rapidly iterate on models in an organized, robust way, so that backend and data engineers can efficiently translate the results into properly architected models deployed at scale.

For traditional businesses, without sufficient in-house technological expertise, this can prove a herculean task. Even for large organizations with resources available, creating a scalable deployment solution is a dangerous, expensive commitment. Building an in-house solution like Uber’s Michelangelo just doesn’t make sense for any but a handful of companies with unique, cutting-edge ML needs that are fundamental to their business. 

Fortunately, commercial tools exist to offload this burden, providing the benefits of an in-house platform without signing the organization up for a life sentence of proprietary software development and maintenance. 

Algorithmia’s AI Layer allows users to deploy models from any framework, language, or platform and connect to nearly any data source. We scale model inference on multi-cloud infrastructures with high efficiency and enable users to continuously manage the machine learning life cycle with tools to iterate, audit, secure, and govern.
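For example, once a model is deployed on the AI Layer, calling it from Python takes only a few lines; the API key, algorithm path, and input below are placeholders:

```python
# A minimal sketch of calling a deployed model through Algorithmia's
# Python client.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("username/AlgoName/1.0.0")  # placeholder algorithm path

result = algo.pipe({"text": "example input"}).result
print(result)
```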

No matter where you are in the machine learning life cycle, understanding each stage at the outset, and knowing which tools and practices will likely yield successful results, will set your ML program up for success. Challenges exist at each stage, and your team should be prepared to face them.


From Crawling to Sprinting: Advances in Natural Language Processing 


Natural language processing (NLP) is one of the fastest evolving branches in machine learning and among the most fundamental. It has applications in diplomacy, aviation, big data sentiment analysis, language translation, customer service, healthcare, policing and criminal justice, and countless other industries.

NLP is the reason we’ve been able to move from CTRL-F searches for single words or phrases to conversational interactions about the contents and meanings of long documents. We can now ask computers questions and have them answer. 

Algorithmia hosts more than 8,000 individual models, many of which are NLP models that complete tasks such as sentence parsing, text extraction and classification, and translation and language identification.

Allen Institute for AI NLP Models on Algorithmia 

The Allen Institute for Artificial Intelligence (Ai2) is a non-profit created by Microsoft co-founder Paul Allen. Since its founding in 2013, Ai2 has worked to advance the state of AI research, especially in natural language applications. We are pleased to announce that we have worked with the producers of AllenNLP—one of the leading NLP libraries—to make their state-of-the-art models available with a simple API call in the Algorithmia AI Layer.

Among the algorithms new to the platform are:

  • Machine Comprehension: Input a body of text and a question based on it and get back the answer (strictly a substring of the original body of text).
  • Textual Entailment: Determine whether one statement follows logically from another.
  • Semantic role labeling: Determine “who” did “what” to “whom” in a body of text.

These and other algorithms are based on a collection of pre-trained models that are published on the AllenNLP website.  

Algorithmia provides an easy-to-use interface for getting answers out of these models. The underlying AllenNLP models provide a more verbose output, which is aimed at researchers who need to understand the models and debug their performance—this additional information is returned if you simply set debug=True.

The Ins and Outs of the AllenNLP Models 

Machine Comprehension: Create natural-language interfaces to extract information from text documents. 

This algorithm provides the state-of-the-art ability to answer a question based on a piece of text. It takes in a passage of text and a question based on that passage, and returns a substring of the passage that is guessed to be the correct answer.

This model could power the backend of a chatbot or provide customer support based on a user’s manual. It could also be used to extract structured data from text documents; for example, a collection of doctors’ reports could be turned into a table listing, for each report, the patient’s concern, what the patient should do, and when they should schedule a follow-up appointment.
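A call to this model through the Algorithmia Python client might look like the following sketch; the algorithm path and input field names here are assumptions for illustration, so check the model’s page for the exact interface:

```python
# A hedged sketch of querying the Machine Comprehension model.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("allenai/machine_comprehension/x.y.z")  # placeholder path

answer = algo.pipe({
    "passage": "The patient should rest for two days and schedule "
               "a follow-up appointment next week.",   # assumed field name
    "question": "When should the patient schedule a follow-up?",
}).result
print(answer)  # expected: a substring of the passage
```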


Screenshot from Machine Comprehension model on Algorithmia.


Entailment: This algorithm provides state-of-the-art natural language reasoning. It takes in a premise, expressed in natural language, and a hypothesis that may or may not follow from it. The algorithm determines whether the hypothesis follows from the premise, contradicts the premise, or is unrelated. Its interface is as follows:

Input
The input JSON blob should have the following fields:

  • premise: a descriptive piece of text
  • hypothesis: a statement that may or may not follow from the premise of the text

Any additional fields will pass through into the AllenNLP model.

Output
The following output fields will always be present:

  • contradiction: Probability that the hypothesis contradicts the premise
  • entailment: Probability that the hypothesis follows from the premise
  • neutral: Probability that the hypothesis is independent from the premise
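
An illustrative input/output pair (with made-up probability values) might look like this:

```python
# An illustrative (not verbatim) input/output pair for the Entailment model.
input_blob = {
    "premise": "Two dogs are running through a snowy field.",
    "hypothesis": "Some animals are outdoors.",
}

# The model returns a probability for each of the three relationships:
output_blob = {
    "entailment": 0.93,     # hypothesis follows from the premise
    "contradiction": 0.01,  # hypothesis contradicts the premise
    "neutral": 0.06,        # hypothesis is independent of the premise
}
```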

Screenshot from Entailment model on Algorithmia.


Semantic role labeling: This algorithm provides state-of-the-art natural language reasoning—decomposing a sentence into a structured representation of the relationships it describes.

This algorithm treats each verb in a sentence as a predicate, with the entities involved as its arguments (as in logical predicates). The arguments describe who or what performs the action of the verb, to whom or what it is done, and so on.
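For example, for the sentence “The cat chased the mouse,” the decomposition might look something like the structure below; the model’s exact output format differs, and this is only to illustrate the idea:

```python
# An illustrative semantic-role-labeling frame: the verb acts as a
# predicate and the entities fill its argument slots.
srl_frame = {
    "verb": "chased",
    "ARG0": "The cat",    # who or what does the action
    "ARG1": "the mouse",  # to whom or what it is done
}
```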


Screenshot from Semantic role labeling model on Algorithmia.

NLP Moving Forward

NLP applications abound in everyday life, and they will only continue to expand and improve, because the possibilities of a computer understanding written and spoken human language and acting on it are endless.


Data Providers Added for Algorithmia Users


Azure Blob and Google Cloud Storage

In an effort to constantly improve products for our customers, this month we introduced two additional data providers into Algorithmia’s data abstraction service: Azure Blob Storage and Google Cloud Storage. This update allows algorithm developers to read and write data without worrying about the underlying data source. Additionally, developers who consume algorithms never need to worry about passing sensitive credentials to an algorithm since Algorithmia securely brokers the connection for them.

How Easy is it?

By creating an Algorithmia account, you automatically have access to our Hosted Data Source where you can store your data or algorithm output. If you have a Dropbox, Azure Blob Storage, Google Cloud Storage, or an Amazon S3 account, you can configure a new data source to permit Algorithmia to read and write files on your behalf. All data sources have a protocol and a label that you will use to reference your data.

We create these labels because you may want to add multiple connections to the same data provider account, and each will need a unique label for later reference in your algorithm. You might want multiple connections to the same source so that you can set different access permissions on each connection, such as reading from one location and writing to a different one.
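As a sketch, reading and writing through a labeled connection with the Algorithmia Python client might look like this; the label, bucket, and paths are placeholders, and the exact labeled-protocol syntax shown is an assumption:

```python
# A minimal sketch of reading and writing through Algorithmia's data
# abstraction. "mys3" stands in for whatever label you gave the connection.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Read a file through a labeled S3 connection (label syntax assumed):
text = client.file("s3+mys3://my-bucket/input.txt").getString()

# Write the result to Algorithmia's hosted data source:
client.file("data://.my/results/output.txt").put(text.upper())
```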

We’re Flexible

These providers are available now in addition to Amazon S3, Dropbox, and the Algorithmia Hosted Data service. These options will provide our users with even more flexibility when incorporating Algorithmia’s services into their infrastructures.

Learn more about how Algorithmia enables data connection on our site.

We’d love to know which other data providers developers are interested in, and we’ll keep shipping new providers in future releases. Get in touch if you have suggestions!

Finding an Algorithm Marketplace by the Marketplace

Pike Place Market

Sometimes the best advertising is a small, nondescript company name etched onto an equally nondescript door in a back alley, only accessible by foot traffic. Lucky for us, Paul Borza of TalentSort—a recruiting search engine that mines open-source code and ranks software engineers based on their skills—was curious about Algorithmia when he happened to walk by our office near Pike Place Market one day.

“It’s funny how I stumbled on Algorithmia. I was waiting for a friend of mine in front of The Pink Door, but my friend was late so I started walking around. Next door I noticed a cool logo and the name ‘Algorithmia.’ Working in tech, I thought it must be a startup so I looked up the name and learned that Algorithmia was building an AI marketplace. It was such a coincidence!”

Paul Needed an Algorithm Marketplace

“Two weeks before I had tried monetizing my APIs on AWS but had given up because it was too cumbersome. So rather than waste my time with bad development experiences, I was willing to wait for someone else to develop a proper AI marketplace; then I stumbled upon Algorithmia.”

Paul Found Algorithmia

“I went home that day and in a few hours I managed to publish two of my machine learning models on Algorithmia. It was such a breeze! Publishing something similar on AWS would have taken at least a week.”

We asked Paul what made his experience using Algorithmia’s marketplace so easy:

“Before I started publishing algorithms, I wanted to see if Algorithmia fit our company’s needs. The “Run an Example” feature was helpful in assessing the quality of an algorithm on the website; no code required. I loved the experience as a potential customer.”

“To create an API, I started the process on the Algorithmia website. Each API has its own git repository with some initial boilerplate code. I cloned that repository and added my code to the empty function that was part of the boilerplate code, and that was it! The algorithm was up and running on the Algorithmia platform. Then I added a description, a default JSON example, and documentation via Markdown.”

“The beauty of Algorithmia is that as a developer, you only care about the code. And that’s what I wanted to focus on: the code, not the customer sign-up or billing process. And Algorithmia allowed me to do that.”

Paul is Smart; Be like Paul

Paul’s algorithms are the building blocks of TalentSort; they enable customers to improve their recruiting efficiency. The models are trained on 1.5 million names from more than 30 countries and have an accuracy rate of more than 95 percent at determining country of origin and gender. Also, the algorithms don’t call into any other external service, so there’s no data leakage. Try them out in the Algorithmia marketplace today:

Gender By Name

Origin By Name

Paul’s relentless curiosity led him to Algorithmia’s marketplace, where his tools joined the more than 7,000 unique algorithms available for use today.

Flexibility, Scale, and Algorithmia’s Edge in Machine Learning

At Algorithmia, we’ve always been maniacally focused on the deployment of machine learning models at scale. Our research shows that deploying algorithms is the main challenge for most organizations exploring how machine learning can optimize their business.

In a survey we conducted this year, more than 500 business decision makers said that their data science and machine learning teams spent less than 25% of their time on training and iterating models. Most organizations get stuck deploying and productionizing their machine learning models at scale.

Machine Learning in Enterprise research results

The challenge of productionizing models at scale comes late in the lifecycle of enterprise machine learning but is often critical to getting a return on investment in AI. The ability to support heterogeneous hardware, version models, and run model evaluations is underappreciated until problems crop up because those steps were skipped.

Recent Trends
At the AWS re:Invent conference in Las Vegas this week, Amazon announced several updates to SageMaker, its machine learning service. Notable were mentions of forthcoming forecast models, a tool for building datasets to train models, an inference service for cost savings, and a small algorithm marketplace to—as AWS describes—“put [machine learning] in the hands of every developer.”

“What AWS just did was cement the notion that discoverability and accessibility of AI models are key to success and adoption at the industry level, and offering more marketplaces and options to customers is what will ultimately drive the advancement of AI.”

–Kenny Daniel, CTO, Algorithmia

Amazon and other cloud providers are increasing their focus on novel uses for machine learning and artificial intelligence, which is great for the industry writ large. Algorithmia will continue to provide users seamless deployment of enterprise machine learning models at scale in a flexible, multi-cloud environment.

Deploying at Scale
For machine learning to make a difference at the enterprise level, deployment at scale is critical and making post-production deployment of models easy is mandatory. Algorithmia has four years of experience putting customer needs first, and we focus our efforts on providing scalability, flexibility, standardization, and extensibility.

We are heading toward a world of standardization for machine learning and AI, and companies will pick and choose the tools that will make them the most successful. We may be biased, but we are confident that Algorithmia is the best enterprise platform for companies looking to get the most out of their machine learning models because of our dedication to post-production service.

Being Steadfastly Flexible
Users want to be able to select from the best tools for data labeling, training, deployment, and productionization. Standard, customizable frameworks like PyTorch and TensorFlow and common file formats like ONNX give users the flexibility to meet their specific needs. Algorithmia has been preaching and executing on this for years.


Algorithmia’s Commitment
For at-scale enterprise machine learning, companies need flexibility and modular applications that easily integrate with their existing infrastructure. Algorithmia hosts the largest machine learning model marketplace in the world, with more than 7,000 models, and more than 80,000 developers use our platform.

“I expect more AI marketplaces to pop up over time and each will have their strengths and weaknesses. We have been building these marketplaces inside the largest enterprises, and I see the advantages of doing this kind of build-out to accelerate widespread AI adoption.”

–Diego Oppenheimer, CEO, Algorithmia


It is Algorithmia’s goal to remain focused on our customers’ success, pushing the machine learning industry forward. We encourage you to try out our platform, or better yet, book a demo with one of our engineers to see how Algorithmia’s AI Layer is best in class.