A resilient data science platform is a necessity for every centralized data science team within a large corporation: it helps them centralize, reuse, and productionize their models at scale. We’ve built Algorithmia Enterprise for that purpose.
You’ve built that R/Python/Java model. It works well. Now what?
Sharing, reusing, and running models at peta-scale is not part of the data scientist’s workflow. This inefficiency is amplified in a corporate environment where data scientists need to coordinate every move with IT, continuous deployment is a mess (if not impossible), reusability is low, and the pain snowballs as different corners of the company start to “Google-ify their business.”
A data science and machine learning platform is meant to bridge that need. It serves as the foundation layer on top of which three internal stakeholders collaborate: product data scientists, central data scientists, and IT infrastructure.
| Product Team | Centralized Team | Infrastructure Team |
| --- | --- | --- |
| Small data science team embedded within different product teams. Few models, high domain knowledge. | Central data science team supporting internal corporate stakeholders. High number of models, highly technical. | IT infrastructure team responsible for supporting Big Data & Analytics solutions. |
| • Move models to production<br>• A/B test models in production<br>• Augment their work by building on top of state-of-the-art models | • Reduce duplication of efforts across teams<br>• Identify and surface the best internal and external models<br>• Build up capability for AI strategy | • Run high-throughput analytics in production<br>• Optimize systems to reduce cloud and on-prem costs<br>• Monitor and enforce regulation, security, and corporate compliance measures |
A data science platform serves three stakeholders: product, central, and infrastructure. It is a necessity for large corporations with a complex and growing reliance on machine learning.
In this post we’ll cover:
- Who needs a data science and machine learning (DS & ML) platform?
- What is a data science and machine learning platform?
- How to differentiate platforms?
- Examples of platforms
Do you need a Data Science Platform?
It’s not for everyone. Small teams with one or two use cases are better off improvising their own solutions for sharing and scaling (or using privately hosted solutions). If you’re a central team with many internal customers, you’re likely suffering from one or more of the following symptoms:
Symptom #1: You’re splitting code bases
Your data scientist creates a model (let’s say in R or Python) and wants to plug it into production to be used as part of a web or mobile app. Your backend engineers, who built their infrastructure with Java or .NET, end up re-writing that model from scratch in their technology stack of choice. Now you have two code bases to debug and synchronize. This inefficiency multiplies as you build more models over time.
Symptom #2: You’re re-inventing the wheel
Whether it’s as small as a pre-processing function or as large as a fully trained model, the more your team churns out, the greater the likelihood of systematic duplication of effort between current team members, past team members, and especially across projects.
Symptom #3: You’re struggling to hire the best
Every corner of your company has a data science or machine learning idea to stay ahead of the curve, but you only have a few genius experts, and they can only take on challenges one at a time. You would hire more, but data science and machine learning talent is scarce, and the rockstars are as expensive as a top NFL quarterback.
Symptom #4: Your cloud bill is blowing up (too many P2s!)
You have deployed your model behind a web server. In the world of deep learning, you will likely want a GPU-ready machine, such as P2 instances on AWS EC2 (or Azure N-Series VMs). Running those machines for each productionized deep learning model can quickly get expensive, especially for spiky workloads or hard-to-predict traffic patterns.
What is a Machine Learning Platform?
It’s about everything except the training. A DS & ML platform is about the life of a model after the training phase. This includes: keeping a registry of your models, showing the lineage of how they progressed from one version to the next, centralizing them so other users can find them, and making them available as self-contained artifacts that are ready to be plugged into any data pipeline.
Library vs. Registry
Things like Scikit-learn and Spark MLlib hold a collection of unique algorithms. That’s a library. A DS & ML platform is a registry. It contains multiple implementations of an algorithm, from different sources, with each implementation having its own versions (or lineage), all equally discoverable and accessible. A user of a registry can easily find and compare the output of different implementations of an algorithm.
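To make the library/registry distinction concrete, here is a minimal, hypothetical sketch of a registry: unlike a library, it can hold several competing implementations of the same algorithm under a common key, so users can discover and compare them. All names and the lambda "models" below are illustrative stand-ins, not any real platform API.

```python
# Minimal in-memory model registry (all names hypothetical).
class ModelRegistry:
    def __init__(self):
        self._entries = {}  # algorithm name -> {implementation name -> callable}

    def register(self, algorithm, implementation, model):
        # Multiple implementations of one algorithm can coexist.
        self._entries.setdefault(algorithm, {})[implementation] = model

    def implementations(self, algorithm):
        # A registry user can list and compare competing implementations.
        return sorted(self._entries.get(algorithm, {}))

    def get(self, algorithm, implementation):
        return self._entries[algorithm][implementation]

registry = ModelRegistry()
registry.register("sentiment", "rule_based",
                  lambda text: "pos" if "good" in text else "neg")
registry.register("sentiment", "keras_cnn",
                  lambda text: "pos")  # stand-in for a trained model

print(registry.implementations("sentiment"))  # ['keras_cnn', 'rule_based']
print(registry.get("sentiment", "rule_based")("good day"))  # pos
```

A library would export one `sentiment()` function; the registry instead lets a user enumerate `implementations("sentiment")` and benchmark them against each other.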
Training vs. Inference
Data scientists will use the right tool for the right problem. Sometimes those tools are a combination of Scikit-learn and Keras, an ensemble of Caffe and TensorFlow models, or an H2O.ai script written in R. A platform will not dictate the tool of the craft, but will be able to register and operationalize those models, independent of how they were trained or put together.
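The key enabler for tool-agnostic operationalization is treating a trained model as a frozen, self-contained artifact. As a rough sketch (using Python's standard `pickle` and a toy stand-in class, since the real serialization format depends on the framework):

```python
import pickle

# Sketch: a trained model -- however it was produced -- is frozen into a
# self-contained artifact that any downstream pipeline can load and call.
class ThresholdModel:
    """Toy stand-in for any trained model (scikit-learn, Keras, H2O, ...)."""
    def __init__(self, threshold):
        self.threshold = threshold  # a "learned" parameter

    def predict(self, x):
        return int(x >= self.threshold)

# Producer side: serialize after training, register/store the blob.
artifact = pickle.dumps(ThresholdModel(threshold=0.5))

# Consumer side: rehydrate and run, with no knowledge of how it was trained.
model = pickle.loads(artifact)
print(model.predict(0.7))  # 1
print(model.predict(0.3))  # 0
```

The consumer never needs the training code or framework choices; it only needs the artifact and its `predict` contract, which is what lets a platform operationalize models regardless of how they were trained.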
Manual vs. Automated Deployment
There are multiple ways to deploy a model into production, with the end result mostly being a RESTful API. The different approaches introduce many risks including inconsistent API interface design, inconsistent auth and logging, and draining devop resources. A platform should be able to automate this work with minimal steps, expose models through a consistent API and auth, and reduce the operational burden on developers.
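What "consistent API and auth" means in practice can be sketched with a small wrapper: every model, regardless of author, is exposed behind the same request envelope, key check, and response shape. This is an illustrative pattern only (the key, envelope fields, and `deploy` helper are all hypothetical), not any specific platform's interface.

```python
# Hypothetical sketch of what automated deployment standardizes:
# one envelope, one auth check, one response shape for every model.
API_KEYS = {"sk_demo"}  # assumption: keys are platform-managed

def deploy(model_fn):
    """Wrap any model callable behind a consistent interface."""
    def endpoint(request):
        if request.get("api_key") not in API_KEYS:
            return {"status": 401, "error": "unauthorized"}
        result = model_fn(request["input"])
        return {"status": 200, "result": result}  # uniform envelope
    return endpoint

churn_api = deploy(lambda x: x * 2)  # stand-in for a real model

print(churn_api({"api_key": "sk_demo", "input": 21}))
# {'status': 200, 'result': 42}
print(churn_api({"input": 21})["status"])  # 401
```

Because the wrapper, not each author, owns auth and response formatting, backend engineers integrate every model the same way, and the devops burden of hand-rolled endpoints disappears.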
How to differentiate Data Science & Machine Learning Platforms?
On the surface, all data science platforms sound the same, but the devil is in the details. Here are some dimensions to compare:
R/Python vs. Other Runtimes
R and Python are mandatory for most data science and machine learning projects. Java is a close second, given libraries like deeplearning4j and H2O’s POJO model extractor. C++ is especially relevant in the context of scientific computation or HPC. Other runtimes are nice-to-haves and will depend on your use case and the main technology stack used by your non-data-science colleagues (NodeJS, Ruby, .NET, etc.).
CPUs vs. GPUs (deep learning)
The prominence of deep learning in data science and machine learning will only increase as the space matures and model zoos grow. Despite its popularity, TensorFlow has not always been backward-compatible, Caffe can require special compilation flags, and cuDNN is literally another layer of complexity to manage over your GPU clusters. Fully containerizing and productionizing heterogeneous models (in terms of code, model weights, framework, and underlying drivers) and running them over GPU architecture is a strong differentiator for a platform, if not a mandatory requirement.
Single vs. Multiple Versioning
Versioning is the ability to list the lineage of a model over time and access each version independently. When models are versioned, data scientists can measure model drift over time. A single-version architecture exposes one REST API endpoint per model (the current stable version), and only the author can “switch” between versions from their control panel. A multi-version architecture exposes a REST API endpoint for the stable version in addition to each previous version, making them all simultaneously available. This eliminates backward-compatibility challenges and enables backend engineers to implement partial rollouts or real-time A/B testing.
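The multi-version idea can be sketched in a few lines: every published version remains callable, and “stable” is just an alias. (The class and version labels below are hypothetical, not a real platform API.)

```python
# Sketch of multi-version exposure: old versions stay callable,
# "stable" is an alias -- so existing clients keep working while
# new ones pin a version or split traffic for an A/B test.
class VersionedModel:
    def __init__(self):
        self.versions = {}  # "1.0" -> callable
        self.stable = None

    def publish(self, version, model, stable=False):
        self.versions[version] = model
        if stable:
            self.stable = version

    def call(self, x, version="stable"):
        v = self.stable if version == "stable" else version
        return self.versions[v](x)

m = VersionedModel()
m.publish("1.0", lambda x: x + 1, stable=True)
m.publish("1.1", lambda x: x + 2)  # candidate, not yet stable

print(m.call(1))                 # 2  (stable still routes to 1.0)
print(m.call(1, version="1.1"))  # 3  (A/B arm pins the candidate)
```

In a single-version architecture, publishing 1.1 would immediately replace 1.0 for every caller; here, rollout is a routing decision the caller (or a traffic splitter) controls.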
Vertical vs. Horizontal Scaling
Making a model available as a REST API is not enough. Vertical scaling is deploying your model on a larger machine. Horizontal scaling is deploying your model on multiple machines. Serverless scaling, as implemented by Algorithmia Enterprise, is horizontal scaling on-demand by encapsulating your model in a dedicated container, deploying that container just-in-time across your compute cluster, and destroying it right after execution to release resources. Serverless computing brings scaling and economic benefits.
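The serverless lifecycle described above — spin up per request, destroy afterward — can be illustrated with a toy context manager. Real platforms do this with containers on a compute cluster; everything here (the `model_container` name, the events list, the stand-in model) is illustrative only.

```python
from contextlib import contextmanager

# Toy sketch of the serverless pattern: a model "container" exists
# only for the duration of one request, so idle models hold no compute.
events = []

@contextmanager
def model_container(name):
    events.append(f"start:{name}")  # deploy the container just-in-time
    try:
        yield lambda x: x * 2       # stand-in for the loaded model
    finally:
        events.append(f"stop:{name}")  # destroy it to release resources

with model_container("churn-model") as predict:
    result = predict(21)

print(result)  # 42
print(events)  # ['start:churn-model', 'stop:churn-model']
```

The economic benefit follows from the lifecycle: a GPU-backed model that serves one spiky hour per day pays for minutes of compute, not for an always-on P2 instance.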
Single vs. Multi-tenant
Handling sensitive or confidential models can be a challenge when you’re sharing hardware resources. Single-tenant platforms will run all production models within the same resources (machine instance, virtual memory, etc.). Multi-tenant platforms deploy models as virtually siloed systems (via the use of containers or VMs per model) and might provide additional security measures such as firewall rules and audit trails.
Fixed vs. Interchangeable Data Sources
A data scientist might need to run offline data on a model from S3, while a backend engineer is concurrently running production data on the same model from HDFS. A fixed data-source platform will require the author of the model to implement two data connectors: one for HDFS and one for S3. An interchangeable data-source platform will instead require the author to implement against a universal data connector, which serves as an adapter for multiple data sources and future-proofs models against whatever data source comes next. In Algorithmia Enterprise this is called the Data API.
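The adapter idea can be sketched as follows. This is a hedged illustration loosely in the spirit of a universal data connector, not the actual Data API: the `DataSource` interface, the in-memory source, and the `data://` paths are all hypothetical, and the in-memory source stands in for real S3/HDFS connectors.

```python
import io

# Sketch of an interchangeable data-source layer: model code reads
# through one adapter interface, so S3 vs. HDFS becomes a configuration
# detail rather than code the model author must write twice.
class DataSource:
    def read(self, path):
        raise NotImplementedError

class InMemorySource(DataSource):
    """Stand-in for concrete S3/HDFS connector implementations."""
    def __init__(self, files):
        self.files = files

    def read(self, path):
        return io.StringIO(self.files[path])

def score(source: DataSource, path):
    """Model code is written once, against the adapter only."""
    rows = source.read(path).read().splitlines()
    return len(rows)  # toy "scoring": count input rows

offline = InMemorySource({"data://test.csv": "a\nb\nc"})
print(score(offline, "data://test.csv"))  # 3
```

Swapping the offline source for a production one changes only which `DataSource` is passed in; `score` itself never changes, which is what makes the model future-proof against new storage backends.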
Example DS & ML Platforms
Here are some examples of data science and machine learning platforms for enterprise, so you can decide which machine learning platform is best for you. This is by no means an exhaustive list. Feel free to leave us a comment or send us a note if you have a suggestion.
- Algorithmia Enterprise Algorithmia’s AI Layer allows enterprises to deploy models efficiently at scale. This is the best platform for scaling machine learning projects.
- Domino Data Lab Domino Data Lab is a platform for model management.
- Dataiku Dataiku is an enterprise data management software.
- Cloudera Workbench Cloudera offers a data science workbench for collaboration.
- Alteryx Alteryx is an enterprise platform for analytics.
- RapidMiner This is a data prep and ML platform.
- There are also open-source storage solutions such as Apache Hadoop.
- Building A Data Science Organization (Booz Allen Hamilton): “Many organizations believe in the power and potential of data science but are challenged in establishing a sustainable data science capability. How do organizations embed data science across their enterprise so that it can deliver the next level of organizational performance and return on investment?”
- “More than 40 percent of data science tasks will be automated by 2020, resulting in increased productivity and broader usage of data and analytics by citizen data scientists,” according to Gartner, Inc.