Algorithmia

Google Sheets + Algorithmia = Machine Learning Spreadsheets

Spreadsheets are an amazing tool. Whether you cut your teeth on Lotus 1-2-3, grew up with Excel, or hopped straight into the Google Docs universe, spreadsheets have been a key tool for everything from planning your personal finances to mulling over your KPIs.

But have you ever used a spreadsheet to extract sentiment from your users’ tweets? To perform advanced outlier detection after detrending your sales numbers? Probably not, because most spreadsheet tools don’t have the power of machine learning baked right in. Until now… Read More…
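To make the idea concrete, here is a toy lexicon-based scorer — a deliberately simplified, hypothetical stand-in for the kind of hosted sentiment analysis described above, not Algorithmia’s actual algorithm. The word lists and the `sentiment()` helper are made up for illustration:

```python
# Toy lexicon-based sentiment scoring: count positive vs. negative words.
# The word lists below are illustrative, not a real sentiment lexicon.
POSITIVE = {"love", "great", "awesome", "happy", "excellent"}
NEGATIVE = {"hate", "terrible", "awful", "sad", "broken"}

def sentiment(text):
    """Return a score in [-1, 1]: >0 positive, <0 negative, 0 neutral."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    hits = sum(w in POSITIVE or w in NEGATIVE for w in words)
    return score / hits if hits else 0.0

print(sentiment("I love this awesome product"))  # 1.0
print(sentiment("this is broken and awful"))     # -1.0
```

A real sentiment model weighs context, negation, and far larger vocabularies, which is exactly why delegating the call to a hosted algorithm from a spreadsheet cell is attractive.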

Integrating Algorithmia with Apache Spark


A couple of weeks ago we gave a talk at the Seattle Spark Meetup about bringing together the flexibility of Algorithmia’s deep learning algorithms and Spark’s robust data processing platform. We highlighted the strengths of both platforms and gave a basic introduction to integrating Algorithmia with Spark’s Streaming API. In this talk you’ll see how we went from use-case idea to implementation in only a few lines of code. Read More…

Reducing API Overhead by 70% with Prometheus and Grafana

Effectively monitoring any system is difficult. Ideally, an engineer should be able to quickly get an idea of how well the system is functioning, whether things look normal, and be notified when they are not. A graph can convey much of this information at a glance. However, graphs can be difficult to create.

In the early days, we leveraged parts of the billing infrastructure to aggregate API call counts, error counts, and some timing metrics. We would also record server metrics (CPU, memory, etc.) in our main application database. From there we used Google Charts plugins to visualize the data.


Things were… okay. As the system grew and we wanted to know more about system performance, we found that every additional metric took a good deal of plumbing and database setup just to add a small chart. We’d have to ensure that the data got truncated or rotated to minimize strain on the database. A chart might be created with a three-hour window, but if you wanted to zoom in or see what happened last night, our chart pages would load incredibly slowly. Additionally, sending a notification or alert was part of the application code, which meant changing a threshold or pausing alerting was time-consuming. Read More…
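The truncation/rotation problem above — keeping metric storage bounded without dedicated cleanup jobs — can be sketched with a fixed-size rolling window. This is an illustrative stand-in, not our actual billing-infrastructure code; the `RollingMetric` class is hypothetical:

```python
from collections import deque
from statistics import mean

class RollingMetric:
    """Keep only the most recent `maxlen` samples of a metric, so storage
    stays bounded without manual truncation jobs against a database."""
    def __init__(self, maxlen=1000):
        # deque with maxlen drops the oldest sample automatically on append
        self.samples = deque(maxlen=maxlen)

    def record(self, value):
        self.samples.append(value)

    def average(self):
        return mean(self.samples) if self.samples else 0.0

# Record 2000 latency samples into a window of 1000: only the last 1000 remain.
latency = RollingMetric(maxlen=1000)
for ms in range(2000):
    latency.record(ms)
print(len(latency.samples))  # 1000
print(latency.average())     # 1499.5 (mean of samples 1000..1999)
```

Time-series databases like Prometheus solve this properly with configurable retention, which is part of why moving off the hand-rolled approach paid off.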

Video Metadata Extraction: See it in Action!

Add emotion detection to your livestreaming service.  Find all the faces in your security tapes. Detect the make and model of every car that passes your shop.  Flag nudity in users’ uploaded videos.

These are just a few of the cool things you can do with our powerful Video Metadata Extraction microservice, which allows you to analyze an entire video with our many image data-extraction algorithms. But don’t simply take our word for it — if you want to see how powerful this tool is, take a peek at our live demo:

Deep Dive into Parallelized Video Processing

Where it all began

At Algorithmia, one of our driving goals is to enable all developers to stand on the shoulders of algorithmic giants. Like Lego, our users can construct amazing devices and tools from algorithmic building blocks such as FaceDetection or Smart Image Downloader.

As a platform, Algorithmia is unique in that we’re able to scale to meet any volume of concurrent algorithm requests. Even if your algorithm makes 10,000 API requests to a particular image processing algorithm, other users’ quality of service won’t suffer.

One of the earliest projects I worked on at Algorithmia was to construct a video processing pipeline which would leverage our existing image processing algorithms as building blocks. The project was designed to improve the reach of our image processing algorithms by automatically enabling them to become video processing algorithms.

After the first couple of weeks, the initial interface and process flow started to come together: using ffmpeg, we were able to easily split videos into frames and concatenate frames back into any video format we wanted. However, it quickly became apparent how fragile this initial process flow was, and how difficult it was for an end user to work with. Read More…
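The split/recombine step can be sketched as the two ffmpeg invocations it boils down to. The paths and frame rate below are illustrative, and the helper functions are hypothetical; the flags themselves (`-i` for input, `-framerate` for image-sequence input) are standard ffmpeg options:

```python
# Build the two ffmpeg command lines behind a naive split/recombine flow.
# In practice you would execute these with subprocess.run(cmd, check=True).

def split_command(video_path, frame_dir):
    """Extract every frame of `video_path` into numbered PNGs."""
    return ["ffmpeg", "-i", video_path, f"{frame_dir}/frame_%05d.png"]

def combine_command(frame_dir, fps, output_path):
    """Re-encode numbered PNGs back into a video at `fps` frames per second."""
    return ["ffmpeg", "-framerate", str(fps),
            "-i", f"{frame_dir}/frame_%05d.png", output_path]

print(" ".join(split_command("input.mp4", "frames")))
print(" ".join(combine_command("frames", 30, "output.mp4")))
```

Even this sketch hints at the fragility: frame numbering, frame rate, and audio all have to be carried through by hand, and a single malformed frame can break the recombine step — which is what pushed the design toward a more robust parallelized pipeline.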