
Model Evaluations Beyond Compare

At Algorithmia, we strive first and foremost to meet customer needs, and we're releasing a new feature within the AI Layer to help you compare models. Model Evaluations is a machine learning tool that lets you create a process for running models concurrently to gauge performance. You can test similar models against one another or compare different versions of one model using criteria you define. Model Evaluations makes comparing machine learning models in a production environment easy and repeatable.

If you have ever wanted to know which risk score algorithm is the best for your dataset, Model Evaluations can help. It can test models for accuracy, quality, error rates, drift, or any other performance indicator you specify. Evaluations can be created for an individual user or be organization-owned to enable collaboration across teams. Simply load a new model into the platform and run tests against your own models or those in the marketplace. We plan on making this tool part of the standard UI experience in a future release of Algorithmia Enterprise, but it is available for early access right now.
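To get a feel for what Model Evaluations automates, here is a minimal sketch of the underlying idea: call two versions of a model on the same labeled data and compare a metric you define. It uses the Algorithmia Python client, but the algorithm paths, API key, input format, and evaluation data below are hypothetical placeholders; Model Evaluations handles this orchestration for you in the platform.

```python
# A minimal sketch of the idea behind Model Evaluations: run two versions of a
# model on the same labeled inputs and compare a metric you define.
# The API key, algorithm paths, and evaluation set are hypothetical placeholders.
import Algorithmia
from sklearn.metrics import accuracy_score

client = Algorithmia.client("YOUR_API_KEY")

# Two hypothetical versions of a risk score model hosted on the platform
candidates = {
    "v1": "your_org/risk_score/1.0.0",
    "v2": "your_org/risk_score/2.0.0",
}

# Hypothetical evaluation set: model inputs paired with known labels
eval_set = [
    ({"income": 54000, "debt": 12000}, 1),
    ({"income": 23000, "debt": 30000}, 0),
]

labels = [label for _, label in eval_set]

for name, path in candidates.items():
    algo = client.algo(path)
    # Call the hosted model on each input and collect its predictions
    predictions = [algo.pipe(features).result for features, _ in eval_set]
    print(f"{name}: accuracy = {accuracy_score(labels, predictions):.3f}")
```

Accuracy is used here only as an example criterion; the same loop could compute error rates, drift statistics, or any other indicator you care about.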


Testing and comparing models is an integral part of any development and deployment cycle. Comparing models helps you gain a competitive advantage, build your brand's credibility, and be certain that new versions outperform previous ones. Other benefits of Model Evaluations:

  • Improve model accuracy and performance
  • Test models before deploying
  • Conduct faster comparisons
  • Get results quicker

Sign up to get early access to our model comparison tool.

To learn more about Model Evaluations, you can find additional documentation, examples, and a step-by-step walkthrough in the Developer Center. But start here with this video we’ve put together demoing the Model Evaluations tool: 

Algorithmia is a leader in the machine learning space, and we care about building smarter models, so please tell us about your experience. We're eager to hear your suggestions or ideas!

Model Evaluations will help data scientists compare the quality of different models or even measure the effectiveness of new versions of the same model.