When we implemented InceptionNet, a microservice that detects and labels objects (features) in photos, we knew it would be useful. Then we built VideoMetadataExtraction, a video pipeline that lets you run feature-detection algorithms (and others) across an entire video. That unlocked some really powerful use cases, like automatically scanning home security footage for every car of a specific make and model, or stripping the nudity-containing scenes out of a movie to make a G-rated version.
Add emotion detection to your livestreaming service. Find all the faces in your security tapes. Detect the make and model of every car that passes your shop. Flag nudity in users’ uploaded videos.
These are just a few of the cool things you can do with our powerful Video Metadata Extraction microservice, which lets you analyze an entire video with any of our many image data-extraction algorithms. But don't simply take our word for it: if you want to see how powerful this tool is, take a peek at our live demo:
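To give a feel for how a call to a service like this might look from code, here is a minimal sketch in Python. The payload field names, the algorithm paths, and the data URIs below are illustrative assumptions, not the service's documented schema, so check the API docs before using them:

```python
# Sketch of assembling a request for a video metadata-extraction call.
# Field names and algorithm paths are illustrative assumptions, not the
# service's actual interface.

def build_request(video_url, extractor, fps=1):
    """Assemble a JSON-style payload for a video analysis request."""
    return {
        "input_file": video_url,               # source video location
        "output_file": "data://.my/out.json",  # where results get written
        "algorithm": extractor,                # image algorithm to run per frame
        "fps": fps,                            # frames sampled per second
    }

payload = build_request(
    "data://.my/security_footage.mp4",
    "algo://deeplearning/InceptionNet",
)

# An actual call would then hand this payload to the client, e.g.:
#   import Algorithmia
#   client = Algorithmia.client("YOUR_API_KEY")
#   client.algo("media/VideoMetadataExtraction").pipe(payload)
```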
Recently, we introduced you to Video Transform, a meta-algorithm which can take any of our image transformations — such as Colorization, Saliency, Style Transfer, or Resolution Enhancement — and apply it to a video instead!
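Conceptually, a meta-algorithm like Video Transform just maps a single-image transformation over every frame of the video. Here is a minimal pure-Python sketch of that idea; the frame representation and toy transform are stand-ins, not the service's actual implementation:

```python
def video_transform(frames, image_transform):
    """Apply a single-image transformation to every frame of a video.

    `frames` is any sequence of frame objects; `image_transform` is a
    function mapping one frame to one transformed frame.
    """
    return [image_transform(frame) for frame in frames]

# Toy example: "frames" are rows of grayscale pixels, and the
# transform inverts each pixel value.
frames = [[0, 128, 255], [255, 128, 0]]
inverted = video_transform(frames, lambda f: [255 - px for px in f])
```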
…but if a picture is worth a thousand words, a video is worth 24,000 to 30,000 words per second (sorry, bad videography humor there). So instead of just telling you about this cool feature, we'd like to show you, with a brand new Video Toolbox demonstration!
Once upon a time, site mappers were arcane scripts which could take hours or days to examine a single website. But, thanks to scalable & interoperable cloud algorithms, it now takes only minutes… and includes a multitude of handy features powered by machine learning: auto-tagging, summarization, page-ranking, and more!
- GetLinks recursively traverses a website of your choice, plotting its pages on a force-directed graph via D3
- PageRank examines the pages to create an ordered list akin to Google’s PageRank Algorithm
- Url2Text grabs the text from each page, allowing Summarizer to extract topic sentences while AutoTag generates keywords
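The pipeline above can be sketched as a chain of stages, where each stage's output feeds the next. In this minimal sketch the function names mirror the services listed, but the bodies are simplified stand-ins rather than the real algorithms:

```python
def get_links(start_url, pages):
    """Stand-in for GetLinks: return all URLs reachable from start_url.

    `pages` maps each URL to the list of URLs it links to.
    """
    seen, stack = set(), [start_url]
    while stack:
        url = stack.pop()
        if url in seen:
            continue
        seen.add(url)
        stack.extend(pages.get(url, []))
    return seen

def url2text(url):
    """Stand-in for Url2Text: pretend to fetch and strip a page's text."""
    return f"text of {url}"

# Toy site: each page maps to the pages it links to.
site = {
    "/": ["/about", "/blog"],
    "/blog": ["/blog/post1"],
}
urls = get_links("/", site)
texts = {url: url2text(url) for url in urls}
# Summarizer and AutoTag would then run over each entry of `texts`.
```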
You may already know that Algorithmia hosts scalable deep learning models. If you are a developer, you’ve seen how easy it is to run over 3,000 microservices through any of our supported languages and frameworks.
But sometimes it’s nice just to play with a simple demo.
The Deep Fashion microservice is a deep CNN performing multi-category classification, trained with humans in the loop to recognize dozens of different articles of clothing. It can be used standalone to locate specific items in an image set, or combined with a nearest-neighbors service such as KNN or Annoy to recommend similar items to online shoppers. And since the service provides bounding-box coordinates for each item in the image, it can even be used to censor or modify the images themselves.
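Because each detection comes back with bounding-box coordinates, downstream code can edit the image directly. A minimal sketch, assuming detections arrive as dicts with a label and pixel coordinates (these field names are illustrative, not the service's actual response schema):

```python
def censor(image, detections, labels_to_hide):
    """Black out every detected region whose label should be hidden.

    `image` is a 2-D list of pixel values; each detection is assumed
    to look like {"label": str, "x0": int, "y0": int, "x1": int, "y1": int}.
    """
    for det in detections:
        if det["label"] not in labels_to_hide:
            continue
        for y in range(det["y0"], det["y1"]):
            for x in range(det["x0"], det["x1"]):
                image[y][x] = 0  # zero out the offending pixels
    return image

# Toy 4x4 image and a single hypothetical detection.
img = [[9] * 4 for _ in range(4)]
dets = [{"label": "logo", "x0": 1, "y0": 1, "x1": 3, "y1": 3}]
censored = censor(img, dets, {"logo"})
```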
To see it in action, just head over to the Deep Fashion Demo, click (or upload) an image, and watch as state-of-the-art deep learning models scan the image to identify clothing and fashion items.