
Emergent // Future Weekly: Tesla Updates Autopilot, Google’s Talking Computers, How Neural Nets Work

Issue 23
This week we look at how Tesla is upgrading Autopilot to use radar, check out an algorithm that mimics human speech, and learn how physics explains why neural nets work.

Plus, we compile our top projects to try at home, and favorite articles from the past week.

Not a subscriber? Join the Emergent // Future newsletter here.


Tesla Upgrades Autopilot 🚕

You Might Have Heard: Tesla is updating Autopilot, the software that powers their self-driving car option, to use radar as the primary control sensor for navigation.

Cars will no longer need a camera to confirm visual image recognition while driving. The change is geared toward preventing accidents like the fatal Model S crash.

The new software will give cars access to six times as many radar objects using the same hardware, which has shipped on all Teslas since October 2014.

Tesla is taking advantage of the fleet of cars already on the road to dynamically learn the positions of road signs, bridges, and other stationary objects, better mapping roadways and hazards and all but eliminating false positives.

This real-time learning system is always running, whether or not Autopilot is engaged, meaning the more you drive, the more Tesla learns.
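For intuition, here's a rough sketch of how fleet-learned whitelisting of stationary radar returns might work. Everything here – the names, the geohash rounding, the confirmation threshold – is a hypothetical illustration, not Tesla's implementation:

```python
from collections import defaultdict

# Hypothetical sketch: each car reports stationary radar objects it drove
# past without incident. Once a location accumulates enough confirmations,
# returns there (overhead signs, bridges) stop triggering false braking.
CONFIRMATIONS_NEEDED = 100  # illustrative threshold

whitelist_counts = defaultdict(int)  # (rounded lat, lon) -> safe pass-bys

def report_safe_passby(lat: float, lon: float) -> None:
    """A fleet car passed a stationary radar return here without braking."""
    whitelist_counts[(round(lat, 4), round(lon, 4))] += 1

def should_brake_for(lat: float, lon: float) -> bool:
    """Brake only if this return isn't a fleet-confirmed harmless object."""
    return whitelist_counts[(round(lat, 4), round(lon, 4))] < CONFIRMATIONS_NEEDED
```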

But Did You Know Tesla’s Autopilot has driven more than 47M miles in the past six months? They’re adding more than a million miles of driving data every day.

By comparison, Google’s self-driving car, which takes a very different approach to autonomous vehicles, has traveled just 1.5M miles in SIX YEARS.

There’s concern that Google’s car project is losing out to rivals, like Tesla and Uber.

Meanwhile, Apple has shuttered parts of its self-driving car project, laid off dozens of employees, and is rethinking its strategy.

PLUS: Baidu and Nvidia to Build Artificial Intelligence Platform for Self-Driving Cars


Talking Computers 🔊

Google’s DeepMind can now generate sound waves that mimic human voices – closing the gap with human performance by over 50% compared to existing text-to-speech technology.

They’re calling it WaveNet, which uses neural nets to generate convincing speech and music.

Researchers fed DeepMind basic recorded speech, and used a convolutional neural network to create a complex set of rules about how certain tones follow other tones in the context of speech.
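To make that concrete, here's a minimal sketch of the core idea behind WaveNet-style models: a stack of dilated causal 1-D convolutions that predicts a distribution over the next quantized audio sample given everything heard so far. The layer sizes, dilation schedule, and names are illustrative assumptions, not DeepMind's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution that is only allowed to look at past samples."""
    def __init__(self, channels, dilation):
        super().__init__()
        # kernel_size=2, so left-pad by (kernel_size - 1) * dilation
        self.pad = dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):
        # pad on the left only, so the output at time t never sees t+1, t+2, ...
        return self.conv(F.pad(x, (self.pad, 0)))

class TinyWaveNet(nn.Module):
    """Stacked dilated causal convs predicting the next quantized sample."""
    def __init__(self, channels=32, n_levels=256, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.embed = nn.Conv1d(n_levels, channels, kernel_size=1)
        self.stack = nn.ModuleList(CausalConv1d(channels, d) for d in dilations)
        self.to_logits = nn.Conv1d(channels, n_levels, kernel_size=1)

    def forward(self, x):
        # x: (batch, n_levels, time) one-hot 8-bit audio
        h = self.embed(x)
        for conv in self.stack:
            h = torch.relu(conv(h)) + h  # residual connection
        return self.to_logits(h)  # per-timestep logits over next sample

# Train with cross-entropy: logits at time t vs. the true sample at t + 1.
x = torch.zeros(1, 256, 1600)  # a dummy 0.1-second clip at 16 kHz
logits = TinyWaveNet()(x)
print(logits.shape)  # torch.Size([1, 256, 1600])
```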

DeepMind famously beat the world’s best Go player in March, and is also being used to more efficiently manage power usage in Google’s data centers. They’ve cut their electricity bill by 15%.

Read the full announcement, and listen to examples.


How Neural Nets Work 🌀

Researchers from Harvard and MIT say they’ve discovered the secret to neural networks buried in the laws of physics: a small subset of mathematical functions describes the way the universe operates.

This is great news, because nobody quite understands why deep neural networks are so good at solving complex problems.

In physics, structures form through a sequence of simple steps: particles form atoms, which form molecules, cells, organisms, planets, solar systems, galaxies, and so on.

Neural nets mirror that hierarchy: they’re arranged in layers, and each layer deals with a progressively higher level of abstraction and complexity.
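Here's a toy sketch of that layered composition – an assumed feed-forward architecture with made-up layer sizes, not a model from the paper – where each layer builds more abstract features from the simpler ones below it:

```python
import torch.nn as nn

# Pixels -> edges -> shapes -> class scores, analogous to the physics
# hierarchy of particles -> atoms -> molecules. Sizes are illustrative.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # raw pixels to low-level features
    nn.Linear(256, 64), nn.ReLU(),   # low-level features to parts
    nn.Linear(64, 10),               # parts to high-level class scores
)
```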

Read the research to learn more about why deep learning works so well.


What We’re Reading 📚


Try This At Home 🛠


Emergent Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Subscribe here.