How to version control your production Machine Learning models

Source: KDnuggets

Machine Learning is about rapid experimentation and iteration, and without keeping track of your modeling history you won’t be able to learn much. Versioning lets you keep track of all of your models, how well they’ve done, and what hyperparameters you used to get there. This post will walk through why versioning is important, which tools can get it done, and how to version the models you put into production.
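As a taste of the idea, here is a minimal sketch of model versioning with nothing but the Python standard library: every saved model gets its own directory containing the serialized artifact plus a metadata record of the hyperparameters and metrics that produced it. The `registry/` layout, the `save_version` helper, and the timestamp-based version id are illustrative choices, not the approach any particular tool prescribes.

```python
import json
import pickle
import time
from pathlib import Path

# Illustrative registry layout: one directory per model version,
# holding the artifact and a metadata record side by side.
REGISTRY = Path("registry")

def save_version(model, hyperparams: dict, metrics: dict) -> str:
    """Serialize a model and record what produced it and how well it did."""
    version = time.strftime("%Y%m%d-%H%M%S")        # simple timestamp version id
    run_dir = REGISTRY / version
    run_dir.mkdir(parents=True, exist_ok=True)

    with open(run_dir / "model.pkl", "wb") as f:    # the model artifact itself
        pickle.dump(model, f)

    record = {"version": version, "hyperparams": hyperparams, "metrics": metrics}
    with open(run_dir / "metadata.json", "w") as f: # hyperparameters and metrics, for later comparison
        json.dump(record, f, indent=2)
    return version

if __name__ == "__main__":
    # Stand-in "model" object for demonstration purposes only.
    dummy_model = {"weights": [0.1, 0.2, 0.3]}
    v = save_version(dummy_model, {"lr": 0.01, "epochs": 20}, {"val_accuracy": 0.91})
    print("saved version", v)
```

Dedicated tools add a lot on top of this (artifact storage, lineage, UIs), but the core record they keep is the same: model, hyperparameters, results.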

Read More…

Machine Learning and Mobile: Deploying Models on The Edge

Source: TensorFlow

Machine Learning is emerging as a serious technology just as mobile is becoming the default method of consumption, and that’s leading to some interesting possibilities. Smartphones are packing more power every year, and some now rival desktop computers in raw performance. That means many of the Machine Learning workloads we think of as requiring specialized, high-priced hardware will soon be doable on mobile devices. This post will outline this shift and how Machine Learning can work within the new paradigm.
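One common path for getting a model onto a phone is converting a trained network to TensorFlow Lite. The sketch below shows that conversion step; the tiny Keras model is just a placeholder standing in for whatever network you have actually trained.

```python
import tensorflow as tf

# Placeholder model for demonstration; in practice you would convert
# your own trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert the Keras model to the TensorFlow Lite format used on mobile.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights to shrink the file
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:  # ship this file with the mobile app
    f.write(tflite_model)
```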

Read More…

Why a multi-cloud infrastructure is an important part of application and Machine Learning deployment

Source: Forgeahead

Multi-cloud is quickly becoming the de facto strategy for large companies looking to diversify their IT efforts. At Algorithmia, we deploy across multiple clouds and recommend it for Machine Learning pipelines and portfolios. This post will outline the pros and cons of a multi-cloud architecture, as well as its applicability to Machine Learning workloads.

The first thing to understand about this emerging strategy is that it’s very popular. Almost 80% of enterprises that use public clouds run on two or more of them, and 44% of those enterprises use three or more. Overall, 61% of all enterprises surveyed are using two or more cloud providers.

Read More…

Data Scientists and Deploying Machine Learning into Production: Not a Great Match

Source: Timo Elliott

Asking your Data Scientists to deploy their Machine Learning models at scale is like having your graphic designers decide which sorting algorithm to use: it’s not a good skill fit. The fact of the matter is that in 2018, the standard Data Science curriculum doesn’t prepare students for the low-level infrastructure work that deployment requires. This post will walk through the knowledge base most Data Scientists have and why it’s not a good fit for production models.

The most important thing to understand in the context of this topic is that Data Science is still evolving, and nothing is set in stone. Roles and their boundaries are still up in the air, most companies haven’t developed meaningful expertise in hiring for them, and the educational system is struggling to keep up with an increasingly volatile cutting edge. Expect variation and rapid change.

Read More…

Deploying Machine Learning at Scale

Source: turnoff.us

Deploying Machine Learning models at scale is one of the most pressing challenges faced by the data science community today, and as models get more complex it’s only getting harder. The sad reality: the most common way Machine Learning gets deployed today is in PowerPoint slides.

We estimate that fewer than 5% of commercial data science projects make it to production. If you want to be part of that share, you need to understand how deployment works, why Machine Learning is a unique deployment problem, and how to navigate this messy ecosystem.

Read More…