Algorithmia Blog

Emergent Future Weekly: Self-Driving Taxis Debut, Voice Recognition Beats Humans, FB’s Computer Vision, and More

Issue 22
This week
self-driving taxis debut in Singapore, and we learn that speech-to-text algorithms are now faster and more accurate than human typing. We also take a dive inside Apple and its AI ambitions, and check out Facebook's open-sourced computer vision algorithms.

Plus, we compile our top projects to try at home, and favorite articles from the past week.

Not a subscriber? Join the Emergent // Future newsletter here.

Self-Driving Taxis Debut in Singapore 🚗

You Might Have Heard: nuTonomy began its invite-only test of six self-driving taxis in Singapore last week. A driver and researcher will be onboard during the trial phase.

The three-year-old MIT spin-out beat Uber to the punch by mere days. Uber had previously announced plans to begin trials of its own self-driving cars in Pittsburgh by the end of August.

But Did You Know? Two top autonomous car tech companies are joining forces to ship a fully autonomous car system by 2019 that could be easily integrated by automakers.

PLUS: Nvidia Unveils Powerful New Processor for Self-Driving Cars

Voice Recognition Beats Humans At Typing 🎙

A new study found voice recognition algorithms have improved to the point where they’re significantly faster and more accurate at producing text on a mobile device than we are at typing on its keyboard.

With speech input, the English input rate was 3x faster with a 20% lower error rate than typing on a smartphone keyboard. Mandarin was 2.8x faster with a 63% lower error rate!

The team used Deep Speech 2, Baidu’s deep learning system that provides end-to-end, low latency speech recognition in English and Mandarin at scale.

“Humanity was never designed to communicate by using our fingers to poke at a tiny little keyboard on a mobile phone,” Baidu chief scientist Andrew Ng says. “Speech has always been a much more natural way for humans to communicate with each other.”

Plus: How the sad state of mics is holding back Siri and Alexa, and why the billion-dollar industry hasn’t seen much improvement since the launch of the iPhone 5.

How AI Works at Apple 🤖📲

Apple opens up about its secretive approach to AI and ML in a fascinating long read by Steven Levy at The Backchannel.

Apple execs Cue, Schiller, and Federighi, as well as key Siri scientists, all weigh in on the company’s “subtle” use of ML and AI to improve its products.

The struggle comes down to how the machine learning mindset is at odds with the Apple ethos – a company that carefully controls the user experience. When engineers use machine learning, the results that emerge don’t always fit with the well-thought-out, curated experience that an Apple designer specified.

So, can Apple adjust to the modern reality that machine learning systems can themselves have a hand in product design? Read the story to draw your own conclusions.

Facebook’s Computer Vision Algorithms 🔮

Facebook is open-sourcing the code for DeepMask, SharpMask, and MultiPathNet. Together, the three algorithms allow you to detect, classify, and segment objects in an image. Check out the SharpMask demo, and read the full announcement.
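Conceptually, the three models form a staged pipeline: rough object masks are proposed first, then refined, then classified. Here's a minimal sketch of that flow — the function names, shapes, and return values below are illustrative stand-ins, not the real DeepMask/SharpMask/MultiPathNet APIs:

```python
# Hypothetical sketch of a three-stage segmentation pipeline,
# in the spirit of DeepMask -> SharpMask -> MultiPathNet.
# All functions here are stubs for illustration only.

def propose_masks(image):
    """Stage 1 (DeepMask-style): propose coarse object masks."""
    # Pretend we found two candidate object regions.
    return [{"bbox": (10, 10, 50, 50)}, {"bbox": (60, 20, 90, 80)}]

def refine_masks(image, proposals):
    """Stage 2 (SharpMask-style): sharpen each mask's boundaries."""
    return [dict(p, refined=True) for p in proposals]

def classify_masks(image, masks):
    """Stage 3 (MultiPathNet-style): attach a label to each mask."""
    labels = ["person", "dog"]  # made-up labels for the sketch
    return [dict(m, label=labels[i % len(labels)])
            for i, m in enumerate(masks)]

def segment_objects(image):
    """Run the full detect -> refine -> classify pipeline."""
    proposals = propose_masks(image)
    refined = refine_masks(image, proposals)
    return classify_masks(image, refined)

for obj in segment_objects(image="example.jpg"):
    print(obj["label"], obj["bbox"])
```

The point is the division of labor: mask proposal is cheap and recall-oriented, refinement adds boundary precision, and classification runs last on a small set of candidates.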

The code is being released by Facebook AI Research (FAIR), a team advancing the field of machine intelligence, similar to the Google Brain Team.

We’re working on adding these to Algorithmia so any developer can take advantage of these state-of-the-art algorithms without having to set up, configure, or provision servers. In the meantime, check out a few of our next-generation image classifiers.

What We’re Reading 📚

Try This At Home 🛠

Emergent Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Subscribe here.