Last week we wrote an algorithm spotlight about a video processing algorithm called Video Metadata Extraction. This week, we are pairing it with a really cool microservice from an Algorithmia user called Car Make and Model Recognition, which detects a car’s make and model from an image. This algorithm is currently in beta, but we’ve tested it on a variety of images and it works well. We thought it might be fun to take a traffic video and find the unique makes of cars that pass by a bus stop.
Before we released the Video Metadata Extraction algorithm, you would have had to use the Split Video By Frames algorithm and then pass each frame’s path to the Car Make and Model Recognition algorithm yourself.
Now, with the Video Metadata Extraction algorithm, you can do the same thing in just a few lines of code. Read More…
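To make the pipeline concrete, here is a minimal sketch. The commented-out Algorithmia call, the algorithm paths, and the shape of the response are assumptions for illustration, not confirmed API details; the local part just shows how you might collect unique makes from a timestamped result.

```python
# Hypothetical call to the meta-algorithm (paths and parameter names are
# assumptions; check the marketplace listing for the real ones):
# import Algorithmia
# client = Algorithmia.client("YOUR_API_KEY")
# frames = client.algo("media/VideoMetadataExtraction").pipe({
#     "input_file": "https://example.com/traffic.mp4",
#     "algorithm": "algo://LgoBE/CarMakeandModelRecognition",
# }).result

# Illustrative response: one timestamped entry per sampled frame.
sample_frames = [
    {"timestamp": 0.0, "metadata": {"make": "Toyota", "model": "Corolla"}},
    {"timestamp": 1.5, "metadata": {"make": "Ford", "model": "Focus"}},
    {"timestamp": 3.0, "metadata": {"make": "Toyota", "model": "Corolla"}},
]

def unique_makes(frames):
    """Collect the distinct car makes seen across all sampled frames."""
    return sorted({f["metadata"]["make"] for f in frames})

print(unique_makes(sample_frames))  # ['Ford', 'Toyota']
```

The deduplication step is the same regardless of which image classifier the meta-algorithm wraps; only the per-frame `metadata` shape changes.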
You may already be familiar with Algorithmia’s nudity detector for images, but thanks to recent changes which allow us to parallel-process each frame of a video, you can now detect which segments of a video may contain nudity. We’ll use this new microservice, video nudity detection, to create a “safe-for-work” version of our video by stripping out the sections which contain nudity. Read More…
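The "strip out the flagged sections" step above boils down to inverting a list of time spans. Here is a small sketch that assumes the detector reports flagged segments as `(start, end)` timestamps; the real response format may differ.

```python
def safe_segments(flagged, duration):
    """Invert flagged (start, end) spans into the spans to keep."""
    keep, cursor = [], 0.0
    for start, end in sorted(flagged):
        if start > cursor:
            keep.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < duration:
        keep.append((cursor, duration))
    return keep

# Example: a 60-second video with two flagged spans.
print(safe_segments([(10.0, 15.0), (40.0, 45.0)], 60.0))
# [(0.0, 10.0), (15.0, 40.0), (45.0, 60.0)]
```

The resulting keep-spans could then be handed to a trimming tool (ffmpeg, for instance) to assemble the safe-for-work cut.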
Last week we talked about how Video Transform changed the way users handle video transformation tasks. What’s even better than being able to transform videos at will? Getting actual, structured information out of them! This week we introduce you to Video Transform’s sister, Video Metadata Extraction.
Video Metadata Extraction is a Rust algorithm that works much like Video Transform; however, instead of applying algorithms that transform images, it applies algorithms that classify or extract information from images, and returns the results in a structured, timestamped JSON array file.
This key difference unlocks a whole universe of potential, allowing us to extract any kind of information from any video, provided we have the right image processing algorithm. Read More…
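To show what a "structured, timestamped JSON array" buys you, here is a sketch of working with one. The entry shape (`timestamp` plus per-frame `tags`) is an assumption for illustration; whatever the wrapped image algorithm returns per frame would appear in its place.

```python
import json

# Hypothetical extraction output: one entry per sampled frame.
raw = json.dumps([
    {"timestamp": 0.0, "tags": ["car", "road"]},
    {"timestamp": 2.0, "tags": ["car", "pedestrian"]},
    {"timestamp": 4.0, "tags": ["bus"]},
])

def first_seen(entries):
    """Map each tag to the first timestamp at which it appears."""
    seen = {}
    for entry in entries:
        for tag in entry["tags"]:
            seen.setdefault(tag, entry["timestamp"])
    return seen

print(first_seen(json.loads(raw)))
# {'car': 0.0, 'road': 0.0, 'pedestrian': 2.0, 'bus': 4.0}
```

Because every result carries a timestamp, questions like "when does X first appear?" become simple dictionary passes rather than video-processing problems.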
This week we check out who Microsoft tapped to change the way humans interact with machines, learn why every data science team needs a centralized platform for their work, what we’re reading, and some things to try at home.
Recently, we introduced you to Video Transform, a meta-algorithm which can take any of our image transformations — such as Colorization, Saliency, Style Transfer, or Resolution Enhancement — and apply it to a video instead!
…but if a picture is worth a thousand words, a video is worth 24,000-30,000 words per second (sorry, bad videography humor there). So instead of just telling you about this cool feature, we’d like to show you, with a brand new Video Toolbox demonstration!