Algorithmia

Reducing API Overhead by 70% with Prometheus and Grafana

Effectively monitoring any system is difficult. Ideally, an engineer should be able to quickly get an idea of how well the system is functioning, see whether things look normal, and be notified when they are not. A graph can convey much of this information with minimal mental effort on the engineer's part. However, good graphs can be difficult to create.

In the early days, we leveraged parts of the billing infrastructure to aggregate API call counts, error counts, and some timing metrics. We would also record server metrics (CPU, memory, etc.) in our main application database. From there, we used Google Charts plugins to visualize the data.


Things were… okay. As the system grew and we wanted more detail about system performance, we found that every additional metric took a good deal of plumbing and database setup just to add a small chart. We had to ensure the data was truncated or rotated to minimize strain on the database. A chart might be created with a 3-hour window, but if you wanted to zoom in or see what happened last night, our chart pages would load incredibly slowly. Additionally, sending a notification or alert was part of the application code, which meant changing a threshold or pausing alerting was time-consuming. Read More…
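The alerting pain in particular is what rule-based monitoring solves: with Prometheus, thresholds live in configuration rather than application code. Here is a minimal, illustrative alerting rule; the metric and alert names are assumptions for the sketch, not our actual configuration:

```yaml
# Illustrative Prometheus alerting rule; metric and alert names are assumed.
groups:
  - name: api-health
    rules:
      - alert: HighAPIErrorRate
        # Fire when more than 5% of API calls error over a 5-minute window.
        expr: rate(api_errors_total[5m]) / rate(api_calls_total[5m]) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "API error rate above 5% for 10 minutes"
```

With a rule like this, changing the threshold or silencing the alert is a config change (or a click in Alertmanager) instead of an application deploy.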

Video Metadata Extraction: See it in Action!

Add emotion detection to your livestreaming service.  Find all the faces in your security tapes. Detect the make and model of every car that passes your shop.  Flag nudity in users’ uploaded videos.

These are just a few of the cool things you can do with our powerful Video Metadata Extraction microservice, which allows you to analyze an entire video with our many image data-extraction algorithms.  But don’t simply take our word for it — if you want to see how powerful this tool is, take a peek at our live demo:

Deep Dive into Parallelized Video Processing

Where it all began

At Algorithmia, one of our driving goals is to enable all developers to stand on the shoulders of the algorithmic giants. Like Lego, our users can construct amazing devices and tools by utilizing our algorithmic building blocks like FaceDetection or Smart Image Downloader.

As a platform, Algorithmia is unique in that we’re able to scale to meet any volume of concurrent algorithm requests, meaning that even though your algorithm might be making 10,000 API requests to a particular image processing algorithm, it won’t affect other users’ quality of service.

One of the earliest projects I worked on at Algorithmia was to construct a video processing pipeline which would leverage our existing image processing algorithms as building blocks. The project was designed to improve the reach of our image processing algorithms by automatically enabling them to become video processing algorithms.

After the first couple of weeks, the initial interface and process flow started to come together: using ffmpeg, we could easily split videos into frames and concatenate the frames back into any video format we wanted. However, it quickly became apparent how fragile this initial process flow was, and how difficult it was for an end user to work with. Read More…
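For the curious, the split-and-reassemble step looks roughly like the commands below. This is a sketch: the exact ffmpeg flags and file patterns in our pipeline differ, so treat the specifics as illustrative.

```python
# Sketch of the ffmpeg invocations behind splitting a video into frames and
# stitching processed frames back together. Flags and paths are illustrative.

def split_cmd(video_path, frame_dir, fps=25):
    # Extract frames at a fixed rate into numbered PNG files.
    return ["ffmpeg", "-i", video_path,
            "-vf", f"fps={fps}", f"{frame_dir}/frame-%06d.png"]

def concat_cmd(frame_dir, fps, out_path):
    # Re-encode the numbered frames back into a playable video.
    return ["ffmpeg", "-framerate", str(fps),
            "-i", f"{frame_dir}/frame-%06d.png",
            "-pix_fmt", "yuv420p", out_path]

# subprocess.run(split_cmd("input.mp4", "frames")) would perform the split.
print(" ".join(split_cmd("input.mp4", "frames")))
```

Simple enough per video, but orchestrating thousands of frames across many concurrent requests is where the fragility showed up.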

Video Processing of Traffic Videos


Last week we wrote an algorithm spotlight about a video processing algorithm called Video Metadata Extraction. This week, we are pairing it up with a really cool microservice from an Algorithmia user called Car Make and Model Recognition that detects a car’s make and model from an image. This algorithm is currently in beta, but we’ve tested it out on a bunch of images and it works really well. We thought that it might be fun to take a traffic video and find the unique brands of cars that pass by a bus stop.

Before we released the Video Metadata Extraction algorithm, you would have had to use the Split Video By Frames algorithm and then pass each image frame’s path to the Car Make and Model algorithm.

Now, using the Video Metadata Extraction algorithm you can do the same thing in just a few lines of code. Read More…
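As a sketch of what those few lines can look like, the snippet below builds a request payload for the extractor. The field names and data paths are illustrative assumptions, not the algorithm's documented schema.

```python
import json

# Hypothetical request payload for Video Metadata Extraction; field names
# and paths here are illustrative assumptions, not the documented schema.
def build_request(video_url, extractor_algo):
    return {
        "input_file": video_url,
        "output_file": "data://.algo/temp/metadata.json",
        "algorithm": extractor_algo,
    }

payload = build_request(
    "data://demo/traffic/bus_stop.mp4",            # hypothetical video path
    "algo://username/CarMakeAndModelRecognition",  # illustrative algorithm path
)

# With the Algorithmia Python client, the call is then roughly:
#   import Algorithmia
#   client = Algorithmia.client("YOUR_API_KEY")
#   result = client.algo("media/VideoMetadataExtraction").pipe(payload).result
print(json.dumps(payload, indent=2))
```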

Introduction to Video Metadata Extraction

Last week we talked about how Video Transform was able to change the way users handled video transformation tasks. What’s even better than being able to transform videos at will? Getting actual, structured information out of them! This week we introduce you to Video Transform’s sister, Video Metadata Extraction.

What’s the difference between Metadata Extraction and Transform?


Video Metadata Extraction is a Rust algorithm which functions very similarly to Video Transform; however, instead of utilizing algorithms that transform images, it uses algorithms that classify or extract information from images, and returns the information in a structured, timestamped JSON array.

This key difference unlocks a whole universe of potential, allowing us to extract any kind of information from any video, given we have the right image processing algorithm. Read More…
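To make that concrete, here is an illustrative (not schema-exact) example of the kind of timestamped output you might get back, and one way to consume it:

```python
import json

# Illustrative output shape for a per-frame classifier; the real schema
# returned by the algorithm may differ.
sample = json.loads("""
[
  {"timestamp": 0.00, "data": [{"class": "sedan", "confidence": 0.91}]},
  {"timestamp": 0.04, "data": [{"class": "sedan", "confidence": 0.89}]},
  {"timestamp": 0.08, "data": []}
]
""")

# Collect every label detected with reasonable confidence across the video.
labels = {det["class"]
          for frame in sample
          for det in frame["data"]
          if det["confidence"] >= 0.5}
print(labels)  # → {'sedan'}
```

Because each entry carries a timestamp, you can also answer questions like "when does a sedan first appear?" with a single pass over the array.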

See our Video Transform in Action

Recently, we introduced you to Video Transform, a meta-algorithm which can take any one of our image transformations — such as Colorization, Saliency, Style Transfer, or Resolution Enhancement — and apply it to a video instead!
…but if a picture is worth a thousand words, a video is worth 24,000-30,000 words per second (sorry, bad videography humor there). So instead of just telling you about this cool feature, we’d like to show you, with a brand new Video Toolbox demonstration!

Click here to see videos transformed before your very eyes!

Introduction to Video Transform

At Algorithmia, we have strived to develop a variety of powerful and useful image transformation algorithms that utilize cutting-edge machine learning techniques. These are the building blocks which let any developer build more complex algorithms and solve harder problems, regardless of their preferred language and development platform.

Video Transform is a direct extension of this work. It allows users to transform videos on a frame-by-frame basis, using any existing or future image transformation algorithm on the Algorithmia marketplace. Read More…
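The core idea can be sketched in a few lines: any function from image to image lifts to a function from video to video by mapping it over frames. The toy "frames" below are just lists of pixel values, standing in for real images.

```python
# Minimal sketch of the frame-by-frame idea behind Video Transform: lift any
# image-to-image function into a video-to-video function by mapping it over
# the video's frames.
def lift_to_video(image_transform):
    def video_transform(frames):
        return [image_transform(frame) for frame in frames]
    return video_transform

# Toy example: "frames" are lists of 8-bit pixel values, and the
# "transform" inverts each pixel.
invert_video = lift_to_video(lambda frame: [255 - px for px in frame])
print(invert_video([[0, 128, 255], [10, 20, 30]]))
# → [[255, 127, 0], [245, 235, 225]]
```

Because the transform is a pluggable argument, any existing or future image algorithm slots in without changing the video pipeline itself.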

Data Science & Machine Learning Platforms for the Enterprise


TL;DR A resilient Data Science Platform is a necessity for every centralized data science team within a large corporation. It helps them centralize, reuse, and productionize their models at peta-scale. We’ve built Algorithmia Enterprise for that purpose.


You’ve built that R/Python/Java model. It works well. Now what?

“It started with your CEO hearing about machine learning and how data is the new oil. Someone in the data warehouse team just submitted their budget for a 1PB Teradata system, and the CIO heard that FB is using commodity storage with Hadoop, and it’s super cheap. A perfect storm is unleashed, and now you have a mandate to build a data-first innovation team. You hire a group of data scientists, everyone is excited, and they start coming to you for some of that digital magic to Googlify their business. Your data scientists don’t have any infrastructure and spend all their time building dashboards for the execs, but the return on investment is negative and everyone blames you for not pouring enough unicorn blood over their P&L.” – Vish Nandlall (source)

Sharing, reusing, and running models at peta-scale is not part of the data scientist’s workflow. This inefficiency is amplified in a corporate environment where data scientists need to coordinate every move with IT, continuous deployment is a mess (if not impossible), reusability is low, and the pain snowballs as different corners of the company start to “Googlify their business”. Read More…