
Emergent // Future: Real-Time Parking Predictions, Improving Image Search With Deep Learning, YouTube Datasets

Emergent // Future #40
This week we look at Google’s real-time parking predictions, the largest dataset of annotated YouTube videos, and how Facebook is improving image search using deep learning.

Plus! What we’re reading this week and things to try at home!

🚀 Forwarded from a friend? Sign up to Emergent // Future here.

👋 Spread the love of E//F on Twitter and Facebook


Parking Predictor 🚦

You Might Have Heard: Google launched a new feature last week for Google Maps for Android that offers predictions about parking options – like Waze for parking.

The feature combines a logistic regression machine learning model with real-time crowdsourced data to provide parking difficulty information at your destination.

The model takes into account circling (driving around the block several times), the difference between when a user should have arrived and when they actually did, the dispersion of parking locations at the destination, and the time of day and date.
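To make the idea concrete, here is a minimal sketch of that kind of model. This is not Google's actual pipeline; the feature names, training rows, and labels below are hypothetical, and it only shows how signals like circling, arrival delay, and parking dispersion could feed a logistic regression that outputs a parking-difficulty probability.

```python
# Hypothetical sketch of a parking-difficulty classifier in the spirit of
# Google's approach; all feature values and labels are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [times_circled, arrival_delay_min, parking_dispersion_m, hour_of_day]
X = np.array([
    [0, 1,  50,  9],   # found parking quickly
    [3, 12, 400, 18],  # circled the block, arrived late, spread-out parking
    [1, 3,  80, 14],
    [4, 15, 500, 19],
])
y = np.array([0, 1, 0, 1])  # 1 = parking was difficult

model = LogisticRegression()
model.fit(X, y)

# Predicted probability that parking will be difficult for a new trip
new_trip = np.array([[2, 8, 300, 17]])
print(model.predict_proba(new_trip)[0, 1])
```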

But Did You Know… One of the most challenging research areas in machine learning is enabling computers to understand what a scene is about.

Last year Google published YouTube-8M, a dataset consisting of 8 million labelled YouTube videos.

Now, they’re releasing YouTube-BoundingBoxes, a dataset of 5 million bounding boxes that span 23 object categories from 210,000 YouTube videos.

This is the largest manually annotated video dataset with bounding boxes that track objects in temporally contiguous frames.

The dataset is designed to be big enough to train large-scale models.
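If you want to poke at annotations like these yourself, the sketch below groups per-frame boxes into object tracks. The CSV column layout is an assumption based on the dataset's documented format (normalized coordinates, one row per object per frame), so verify it against the official YouTube-BoundingBoxes download before relying on it.

```python
# Minimal sketch of reading YouTube-BoundingBoxes-style annotations.
# The column names below are an assumption about the published CSV format.
import csv
from collections import defaultdict

COLUMNS = ["youtube_id", "timestamp_ms", "class_id", "class_name",
           "object_id", "object_presence", "xmin", "xmax", "ymin", "ymax"]

def load_tracks(path):
    """Group boxes by (video, class, object) so each track is a list of
    temporally ordered boxes in normalized [0, 1] coordinates."""
    tracks = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f, fieldnames=COLUMNS):
            if row["object_presence"] != "present":
                continue  # skip frames where the object was marked absent
            key = (row["youtube_id"], row["class_name"], row["object_id"])
            tracks[key].append({
                "t_ms": int(row["timestamp_ms"]),
                "box": tuple(float(row[k]) for k in ("xmin", "xmax", "ymin", "ymax")),
            })
    for boxes in tracks.values():
        boxes.sort(key=lambda b: b["t_ms"])
    return tracks
```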


Improving Image Search 🔍

Facebook has long been able to recognize people in photos. But it hasn’t been as precise at understanding what’s actually in the photos. That’s starting to change.

Facebook built a platform for image and video understanding called Lumos, which makes it possible to search photos based on what’s in them, rather than just by when they were taken, who is tagged in them, or where they were taken. In effect, you describe a photo’s contents with keywords.

To accomplish this, Facebook trained a deep neural network with millions of parameters on tens of millions of photos.

The model matches search descriptors to features pulled from photos and ranks its output using information from both the images and the original search. The result: users can search for photos with keywords that describe their contents, powered by the same Lumos technology Facebook has also used to generate photo descriptions for visually impaired users.
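Lumos itself isn’t public, but the core ranking idea – compare a query’s descriptor against feature vectors extracted from images, then sort by similarity – can be sketched with a simple cosine-similarity search. The embeddings and names below are placeholders, not Facebook’s implementation.

```python
# Toy sketch of "match search descriptors to image features, then rank":
# not Lumos, just the generic embedding-similarity idea it resembles.
import numpy as np

def cosine_similarity(query_vec, image_matrix):
    q = query_vec / np.linalg.norm(query_vec)
    m = image_matrix / np.linalg.norm(image_matrix, axis=1, keepdims=True)
    return m @ q

# Pretend these came from a text encoder and an image CNN (placeholders).
query_embedding = np.random.rand(128)
image_embeddings = np.random.rand(10_000, 128)  # one row per indexed photo

scores = cosine_similarity(query_embedding, image_embeddings)
top_photos = np.argsort(scores)[::-1][:5]       # indices of the best matches
print(top_photos)
```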

For more, check out FB’s post on building scalable systems to understand content.


What We’re Reading 📚

  • Computers are learning how to see in the rain. Precipitation poses a challenge for machine vision, which is why scientists are developing ways to edit it out. (The Outline)
  • Four questions for Geoff Hinton. He’s been referred to as the “godfather of neural networks,” but does he believe he’ll see true artificial intelligence in his lifetime? (Gigaom)
  • AI is about to learn more like humans—with a little uncertainty. You can think of this as the rise of the Bayesians, researchers that approach AI through the scientific method—beginning with a hypothesis and then updating this hypothesis based on the data—rather than just relying on the data to drive the conclusions, as neural networks do. (Wired)
  • Causality in machine learning. The focus of this post is on combining observational data with randomized data in model training, especially in a machine learning setting. The method we describe is applicable to prediction systems employed to make decisions when choosing between uncertain alternatives. (Google Data Science Blog)
  • How Google fought back against a crippling IoT-powered botnet and won. Behind the scenes defending KrebsOnSecurity against record-setting DDoS attacks. (Ars Technica)
  • Inside the 20-Year Quest to Build Computers That Play Poker. Recent breakthroughs in artificial intelligence research raise questions about the threat that bots pose to the online gambling industry. (Bloomberg)

Things To Try At Home 🛠


Emergent // Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. 

🚀 Forwarded from a friend? Sign up to Emergent // Future here.

Follow @EmergentFuture for more on frontier technology

Lovingly curated for you by Algorithmia
