All posts by Stephanie Kim

How to Censor Faces with Video Processing Algorithms

Simon Pegg and Nick Frost with faces blurred (image from Wikimedia Commons)

Earlier this week we introduced Censorface, an algorithm that finds the faces in an image and censors them by either blurring them or covering them with a colored box. We thought it would be fun to pair it with some of our video processing algorithms to show how you can combine different algorithms to censor a video clip when you don’t want to process the whole video.

Maybe you have some embarrassing videos that you want to share, but don’t want anyone to know it’s you! Or maybe you have a potentially viral video that you want to post on YouTube, but you need to protect the innocent. No matter what your use case is, let’s dive into creating non-nude video clips with censored faces! Read More…
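To give a flavor of what that pairing looks like, here is a minimal sketch that asks a video-transform style algorithm to run a face-censoring algorithm over every frame of a clip and write out a censored copy. The algorithm paths, data:// locations, and input fields below are illustrative assumptions rather than exact endpoints, so check each algorithm’s page for the real schema.

```python
# Rough sketch: censor faces in a clip by running a face-censoring algorithm
# over every frame via a video-transform algorithm. All algorithm paths,
# data:// locations, and field names below are assumptions for illustration.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

transform_input = {
    "input_file": "data://.my/clips/embarrassing_clip.mp4",  # source clip (assumed location)
    "output_file": "data://.my/clips/censored_clip.mp4",     # where the censored clip is written
    "algorithm": "algo://cv/CensorFace"                       # per-frame face censoring (assumed path)
}

# The transform algorithm applies the per-frame algorithm to each frame
# and stitches the results back into a video.
result = client.algo("media/VideoTransform").pipe(transform_input).result
print(result)
```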

Video Editing: Extracting Metadata from Movie Scenes

Film image

Recently, we wrote a blog post about an algorithm called Scene Detection that takes a video and returns the timestamps of each scene along with the subclips associated with those timestamps.

You can use this information to find appropriate scene lengths for creating video trailers, or you can use the scene timestamps to dictate where YouTube can place advertisements so they don’t land in the middle of an important scene.

Sometimes, though, you want more than just the scene timestamps. With Python 3.4 and up you can use the statistics module to determine the average length of a scene, the variance of the data, and other information that makes it easy to edit your videos or garner insights from the scene lengths. Although you can perform these calculations manually or with libraries like NumPy or Pandas, the standard-library statistics module gives you detailed information about your subclip data without importing a bunch of heavy dependencies. Read More…
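For example, once you have the scene timestamps, a few lines with the statistics module are enough to summarize them. This is a minimal sketch; the (start, end) pair format below is placeholder data standing in for whatever scene timestamps you get back, not the algorithm’s exact return format.

```python
# Minimal sketch: summarize scene lengths with the statistics module (Python 3.4+).
# The (start_seconds, end_seconds) pairs are placeholder data standing in for
# timestamps returned by scene detection.
import statistics

scene_timestamps = [(0.0, 12.5), (12.5, 47.0), (47.0, 60.25), (60.25, 98.0)]

# Scene lengths in seconds
scene_lengths = [end - start for start, end in scene_timestamps]

print("Mean scene length:   ", statistics.mean(scene_lengths))
print("Median scene length: ", statistics.median(scene_lengths))
print("Variance:            ", statistics.variance(scene_lengths))
print("Standard deviation:  ", statistics.stdev(scene_lengths))
```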

Integrating Algorithmia with Apache Spark

Intro Slide Algorithmia and Spark

A couple of weeks ago we gave a talk at the Seattle Spark Meetup about bringing together the flexibility of Algorithmia’s deep learning algorithms and Spark’s robust data processing platform. We highlighted the strengths of both platforms and gave a basic introduction to integrating Algorithmia with Spark’s Streaming API. In the talk you’ll see how we went from a use case idea to an implementation in only a few lines of code. Read More…
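As a taste of the kind of pipeline the talk walks through, here is a hypothetical sketch of calling an Algorithmia algorithm from a Spark Streaming job. The socket source, batch interval, and the sentiment-analysis algorithm path and input format are assumptions for illustration, not the exact example from the talk.

```python
# Hypothetical sketch: hand each record in a Spark Streaming micro-batch to an
# Algorithmia algorithm. The source, algorithm path, and input format are
# illustrative assumptions.
import Algorithmia
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="AlgorithmiaSparkSketch")
ssc = StreamingContext(sc, 10)  # 10-second micro-batches

lines = ssc.socketTextStream("localhost", 9999)  # stream of text records

def score(record):
    # Build the client inside the function so it is created on the workers
    # rather than serialized from the driver.
    client = Algorithmia.client("YOUR_API_KEY")
    sentiment = client.algo("nlp/SentimentAnalysis").pipe({"document": record}).result
    return record, sentiment

lines.map(score).pprint()

ssc.start()
ssc.awaitTermination()
```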

Video Processing of Traffic Videos

bus stop

Last week we wrote an Algorithm Spotlight about a video processing algorithm called Video Metadata Extraction. This week, we are pairing it with Car Make and Model Recognition, a really cool microservice from an Algorithmia user that detects a car’s make and model from an image. The algorithm is currently in beta, but we’ve tested it on a bunch of images and it works really well. We thought it might be fun to take a traffic video and find the unique brands of cars that pass by a bus stop.

Before we released the Video Metadata Extraction algorithm, you would have had to use the Split Video By Frames algorithm and then pass each frame’s image path to the Car Make and Model Recognition algorithm.

Now, using the Video Metadata Extraction algorithm you can do the same thing in just a few lines of code. Read More…
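Here is a rough sketch of what that pairing could look like in Python. The algorithm paths, input fields, and the shape of the per-frame results are assumptions based on the description above, so treat them as placeholders and check each algorithm’s documentation for the exact schema.

```python
# Rough sketch: run a car make/model recognizer over the frames of a traffic
# video via a metadata-extraction algorithm, then collect the unique makes.
# Paths, field names, and the result shape are assumptions for illustration.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

extraction_input = {
    "input_file": "data://.my/traffic/bus_stop.mp4",         # source video (assumed location)
    "algorithm": "algo://LgoBE/CarMakeandModelRecognition"   # per-frame algorithm (assumed path)
}

response = client.algo("media/VideoMetadataExtraction").pipe(extraction_input).result

# Assuming each entry in a hypothetical "frames" list carries a "make" field.
makes = {frame.get("make") for frame in response.get("frames", []) if frame.get("make")}
print("Car makes spotted at the bus stop:", sorted(makes))
```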

Create a Custom Color Scheme From Your Favorite Website

Pink hair woman

In a recent Algorithm Spotlight post we introduced the Color Scheme Extraction algorithm, which retrieves the top 15 colors that best approximate the color scheme of an image.

The Color Scheme Extraction algorithm is a great way to quickly retrieve an image’s colors with a serverless API call, in a few lines of code, in the programming language of your choice.
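For instance, a single call from Python might look like the sketch below. The algorithm path and the plain image-URL input are assumptions for illustration; the algorithm’s page has the exact input and output format.

```python
# Minimal sketch: pull a palette from one image with a single API call.
# The algorithm path and input format are assumptions for illustration.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

image_url = "https://example.com/favorite-website-screenshot.png"  # placeholder image

# Per the post, the result approximates the image's color scheme with its top colors.
palette = client.algo("colors/ColorSchemeExtraction").pipe(image_url).result
print(palette)
```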

The full recipe lets you pipe in several images, including ones that are montages of other images, to produce a personalized color scheme. Read More…