Algorithmia Blog

Emergent // Future: Real-Time Parking Predictions, Improving Image Search With Deep Learning, YouTube Datasets

Issue 40
This week we look at Google’s real-time parking predictions, the largest dataset of annotated YouTube videos, and how Facebook is improving image search using deep learning.

Plus! What we’re reading this week and things to try at home!

🚀 Forwarded from a friend? Sign up for Emergent // Future here.

👋 Spread the love of E//F on Twitter and Facebook


Parking Predictor 🚦

You Might Have Heard: Google launched a new feature last week for Google Maps for Android that offers predictions about parking options – like Waze for parking.

The feature uses a logistic regression machine learning model, combined with real-time crowdsourced data, to estimate parking difficulty at your destination.

The model takes into account circling (driving around the block several times), the difference between when a user should have arrived and when they actually did, the dispersion of parking locations around the destination, and the time of day and date.
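In other words, "parking difficulty" becomes a binary classification problem over features like these. Here's a minimal sketch of that idea using scikit-learn; the feature names and values are illustrative assumptions, not Google's actual pipeline.

```python
# A minimal sketch of a logistic-regression parking-difficulty model.
# Features and data are illustrative assumptions, not Google's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-trip features: blocks circled, minutes between expected
# and actual arrival, spread (meters) of observed parking spots, hour of day.
X = np.array([
    [0,  1.0,  50,  9],   # easy parking
    [3, 12.0, 400, 18],   # hard parking
    [1,  4.0, 150, 13],
    [4, 15.0, 600, 19],
])
y = np.array([0, 1, 0, 1])  # 0 = easy, 1 = difficult

model = LogisticRegression().fit(X, y)

# Predicted probability that parking is difficult for a new trip.
print(model.predict_proba([[2, 8.0, 300, 17]])[0, 1])
```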

But Did You Know… One of the most challenging research areas in machine learning is enabling computers to understand what a scene is about.

Last year Google published YouTube-8M, a dataset consisting of 8 million labelled YouTube videos.

Now, they’re releasing YouTube-BoundingBoxes, a dataset of 5 million bounding boxes that span 23 object categories from 210,000 YouTube videos.

This is the largest manually annotated video dataset with bounding boxes that track objects in temporally contiguous frames.

The dataset is designed to be big enough to train large-scale models.
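If you want a feel for what box-level video annotations look like in practice, here's a rough sketch of grouping per-frame boxes into per-object tracks. The CSV column names below are assumptions for illustration; check the official dataset page for the real schema.

```python
# Rough sketch of reading per-frame box annotations like those in
# YouTube-BoundingBoxes. Column names are assumed for illustration.
import csv
from collections import defaultdict

def load_tracks(path):
    """Group annotations by (video, object) into temporally ordered tracks."""
    tracks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["youtube_id"], row["object_id"])
            tracks[key].append({
                "timestamp_ms": int(row["timestamp_ms"]),
                "class_name": row["class_name"],
                # Coordinates assumed to be normalized to [0, 1].
                "box": (float(row["xmin"]), float(row["ymin"]),
                        float(row["xmax"]), float(row["ymax"])),
            })
    for boxes in tracks.values():
        boxes.sort(key=lambda b: b["timestamp_ms"])
    return tracks
```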


Improving Image Search 🔍

Facebook has long been able to recognize people in photos. But it hasn’t been as precise at understanding what’s actually in the photos. That’s starting to change.

Facebook built a platform for image and video understanding called Lumos, which makes it possible to search photos based on what's in them, rather than just by when they were taken, how they're tagged, or where they were shot. It's more like describing what's in a photo with keywords.

To accomplish this, Facebook trained a deep neural network with millions of parameters on tens of millions of photos.

The model matches search descriptors to features pulled from the photos themselves and ranks results using signals from both the images and the original query. The same Lumos platform that powers this keyword search has also been used to generate automatic alt text for visually impaired users.
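The general pattern, embedding the query and each photo into a shared feature space and ranking by similarity, can be sketched in a few lines. The vectors and placeholder data below are stand-ins for learned encoders, not Facebook's Lumos models.

```python
# Illustrative sketch of ranking photos against a text query by matching
# feature vectors in a shared embedding space. Embeddings here are made up.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_photos(query_vec, photo_vecs):
    """Return photo ids sorted from best to worst match for the query."""
    scores = {pid: cosine_similarity(query_vec, v)
              for pid, v in photo_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pre-computed photo embeddings and query embedding.
photos = {"beach.jpg": np.array([0.9, 0.1, 0.2]),
          "dog.jpg":   np.array([0.1, 0.8, 0.3])}
query = np.array([0.85, 0.15, 0.25])   # e.g. "people on a beach"

print(rank_photos(query, photos))      # -> ['beach.jpg', 'dog.jpg']
```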

For more, check out Facebook's post on building scalable systems to understand content.


What We’re Reading 📚


Things To Try At Home 🛠


Emergent // Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. 

🚀 Forwarded from a friend? Sign up for Emergent // Future here.

Follow @EmergentFuture for more on frontier technology

Lovingly curated for you by Algorithmia