All posts by Stephanie Kim

Making Algorithms Discoverable and Composable

Just like a music producer creates a beat, then combines it with instrumentals and a bassline to form something catchy that lyrics can be laid over, developers need a way to compose algorithms together in a clean and elegant way.

Whether you’re creating a sentiment analysis pipeline for your social data or doing image processing on thousands of photos, you’ll need an easy way to combine the various tools available so you aren’t writing spaghetti code.

It isn’t always easy to combine the libraries you need. Sometimes a library or machine learning model is written in a different language than the one you’re using. Other times there might simply be a performance difference between languages (which is why we chose Rust to create a Video Metadata Extraction pipeline). And even though GitHub offers thousands of libraries, frameworks, and models to choose from, it’s sometimes difficult to find the one you need to solve your problem.

To solve these problems, and to let you write elegant code while using machine learning models, Algorithmia provides an easy way to find, combine, and reuse models regardless of language. Each one gets a REST API endpoint, so you can mix and match them with each other and with external code. Read More…
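
For a rough idea of what that composition looks like in code, here’s a minimal Python sketch using the Algorithmia client. The algorithm paths and versions are placeholders for illustration, not specific recommendations:

```python
# A minimal sketch of chaining two hosted algorithms with the Algorithmia
# Python client. The algorithm paths and versions below are placeholders --
# substitute the ones you actually want to call.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

# Step 1: pull plain text out of a web page (illustrative path).
text = client.algo("util/Html2Text/0.1.4").pipe("https://example.com").result

# Step 2: pipe that text straight into a sentiment model (illustrative path).
sentiment = client.algo("nlp/SentimentAnalysis/1.0.5").pipe(text).result

print(sentiment)
```

Because every algorithm speaks the same REST interface, the output of one call can be handed directly to the next, no matter what language either algorithm was written in.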

Train a Face Recognition Model to Recognize Celebrities

Sam Trammell and Rutina Wesley from True Blood

Earlier this week we introduced Face Recognition, a trainable model hosted on Algorithmia. You train the model on images of the people you want it to recognize, then pass in unseen images to get a prediction score.

The great thing about this algorithm is that you don’t need a huge dataset to get accurate predictions on unseen images. The Face Recognition algorithm trains quickly on as few as ten images of each person you want it to recognize. Read More…
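
As a hypothetical sketch of the train-then-predict workflow via the Algorithmia client (the algorithm path and the input schema here are assumptions; check the algorithm’s page for the real payload format):

```python
# Hypothetical sketch of training and querying a hosted face recognition
# model. The algorithm path and input fields are assumptions, not the
# documented schema.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("cv/FaceRecognition")  # placeholder path

# Train on a handful of labeled images per person (around ten is enough).
train_input = {
    "action": "train",
    "images": [
        {"url": "data://.my/faces/sam_01.jpg", "label": "Sam Trammell"},
        {"url": "data://.my/faces/rutina_01.jpg", "label": "Rutina Wesley"},
        # ...more labeled images per person...
    ],
}
algo.pipe(train_input)

# Ask for a prediction score on an unseen image.
predict_input = {"action": "predict", "image": "data://.my/faces/unseen.jpg"}
print(algo.pipe(predict_input).result)
```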

How to Censor Faces with Video Processing Algorithms

Simon Pegg and Nick Frost, faces blurred (image from Wikimedia Commons)

Earlier this week we introduced Censorface, an algorithm that finds the faces in images and then either blurs them or puts a colored box over them to censor them. We thought it would be fun to pair it with some of our video processing algorithms to show how you can use different algorithms together to censor a video clip when you don’t want to process the whole video.

Maybe you have some embarrassing videos that you want to share, but don’t want anyone to know it’s you! Or maybe you have a potentially viral video that you want to post on YouTube, but you need to protect the innocent. No matter what your use case is, let’s dive into creating non-nude video clips with censored faces! Read More…
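
Here’s a rough sketch of what pairing the two looks like: a frame-by-frame video wrapper applies the face-censoring algorithm to every frame of a clip. The wrapper path and the payload fields are assumptions; the full post walks through the exact algorithms and options.

```python
# Rough sketch: run a face-censoring algorithm over every frame of a short
# clip via a frame-by-frame video wrapper. The wrapper path and payload
# fields below are assumptions for illustration.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

job = {
    "input_file": "data://.my/clips/embarrassing_clip.mp4",
    "output_file": "data://.my/clips/censored_clip.mp4",
    # Hypothetical: censor each frame by drawing a colored box over faces.
    "algorithm": "cv/CensorFace",
    "advanced_input": {"method": "box", "color": "#000000"},
}

result = client.algo("media/VideoTransform").pipe(job).result
print(result)
```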

Video Editing: Extracting Metadata from Movie Scenes

Recently, we wrote a blog post about an algorithm called Scene Detection that takes a video and returns the timestamps of each scene along with the subclips associated with those timestamps.

You can use this information to find appropriate scene lengths when creating video trailers, or you can use the scene timestamps to dictate where YouTube places advertisements so they don’t interrupt an important scene.

Sometimes, though, you want more than just the scene timestamps. With Python 3.4 and up you can use the statistics module to determine the average length of a scene, the variance of the data, and other information to easily edit your videos or garner insights from the scene lengths. Although you can perform these calculations manually or with libraries like NumPy or Pandas, in Python 3.4 and up you can get detailed information about your subclip data without importing heavy third-party libraries. Read More…
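
As a quick sketch of that idea, here are the standard-library calls on some made-up scene timestamps; in practice the timestamps would come from the Scene Detection output:

```python
# Summarize scene lengths with the standard-library statistics module
# (Python 3.4+). The timestamps below are made up for illustration.
import statistics

# Start/end timestamps (in seconds) for each detected scene.
scene_timestamps = [(0.0, 12.5), (12.5, 40.1), (40.1, 55.0), (55.0, 93.7)]
scene_lengths = [end - start for start, end in scene_timestamps]

print("mean length:", statistics.mean(scene_lengths))
print("median length:", statistics.median(scene_lengths))
print("variance:", statistics.variance(scene_lengths))
```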