Algorithmia Blog

Emergent Future Weekly: Recapping the Google Brain Team’s Reddit AMA

Issue 20
We change things up for our 20th issue with a recap of the Google Brain Team’s Reddit AMA. The team spent nearly five hours answering questions about artificial intelligence, machine learning, deep learning, and the future of the team.

Plus, we compile our top projects to try at home, and favorite articles from the past week.

Not a subscriber? Join the Emergent // Future newsletter here.


A Recap of the Google Brain Team AMA 🤖🖖📝

The Google Brain Team is a group of research scientists and engineers who work to improve people’s lives by creating intelligent machines. For the last five years, they’ve conducted research and built systems to advance this mission.

Here are the 11 things we learned from their Reddit AMA on the future of artificial intelligence, machine learning, and data science. We’ve pulled out the most insightful quotes for brevity. Click through to see the full responses in the context of the user-submitted questions and other comments.

On the Team’s Long-Term Goals
We want to do research on problems that we think will help in our mission of building intelligent machines, and to use this intelligence to improve people’s lives.

On the State of Machine Learning
Exciting: anything related to deep reinforcement learning and low sample complexity algorithms for learning policies. We want intelligent agents that can quickly and easily adapt to new tasks.

Underrated: maybe not a technique, but the general problem of intelligent automated collection of training data is IMHO under-studied right now.

On Backpropagation
Backpropagation has endured as the main algorithm for training neural nets since the late 1980s. This longevity, when presumably many people have tried to come up with alternatives that work better, is a reasonable sign that it will likely remain important.
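
For readers who want a refresher, here’s a minimal sketch of what backpropagation actually does: push errors backward through the network with the chain rule, then nudge the weights. The tiny two-layer network, squared-error loss, and variable names below are our own illustration, not anything specific to the Brain team’s work.

    import numpy as np

    # Minimal sketch of backpropagation for a tiny two-layer network.
    # Layer sizes, the squared-error loss, and the learning rate are
    # illustrative choices only.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 3))    # 4 samples, 3 input features
    y = rng.normal(size=(4, 1))    # regression targets
    W1 = rng.normal(size=(3, 5))   # input -> hidden weights
    W2 = rng.normal(size=(5, 1))   # hidden -> output weights

    for step in range(100):
        # Forward pass
        h = np.tanh(x @ W1)              # hidden activations
        y_hat = h @ W2                   # predictions
        loss = np.mean((y_hat - y) ** 2)

        # Backward pass: apply the chain rule layer by layer
        d_y_hat = 2 * (y_hat - y) / len(y)
        d_W2 = h.T @ d_y_hat
        d_h = d_y_hat @ W2.T
        d_W1 = x.T @ (d_h * (1 - h ** 2))  # tanh'(z) = 1 - tanh(z)^2

        # Gradient-descent update
        W1 -= 0.1 * d_W1
        W2 -= 0.1 * d_W2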

On Producing Consistently High-Quality Research
Learn what times of day you are at your most creative / productive, and try to protect those times to do your most important work: inventing algorithms, coding, writing papers.

On General AI
Question: On a scale of 1-10, 10 being tomorrow and 1 being 50 years, how far away would you all estimate we are from general AI?

Answer: A 6, but I refuse to be pinned down to whether the scale is linear or logarithmic.

On Using Machine Learning to Improve People’s Lives
Developing ML techniques to improve the availability and accuracy of medical care is the single greatest opportunity for applied machine learning today.

On Training Data Required for Machine Learning / Deep Learning
Figuring out how to learn from more with less is a very exciting research area, both inside Google and in the larger research community.

On Bias
The fundamental problem is that machine learning models learn from data, and they will faithfully attempt to capture correlations that they observe in this data. Most of these correlations are fine and are what give these kinds of models their power. Some, however, reflect the “world that is” rather than “the world as we wish it was.”
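
A toy illustration of that point: if a feature in the training data merely correlates with the outcome, a model will happily assign it weight. The synthetic “proxy” feature and least-squares fit below are our own example, not anything from the AMA.

    import numpy as np

    # Toy example: a "proxy" feature that merely correlates with the
    # outcome still ends up with a large weight, because the model only
    # sees the correlation, not the cause. Entirely synthetic data.
    rng = np.random.default_rng(0)
    n = 1000
    cause = rng.normal(size=n)                     # the real driver (never shown to the model)
    proxy = cause + rng.normal(scale=0.5, size=n)  # correlated stand-in
    outcome = 2.0 * cause + rng.normal(scale=0.1, size=n)

    X = proxy.reshape(-1, 1)                       # the model only sees the proxy
    weight, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print(weight)  # roughly 1.6: the correlation is faithfully captured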

On Current Machine Learning Challenges
The biggest challenge is how to build systems that can flexibly learn to accomplish many different tasks, from relatively few examples. This will require lots of work in unsupervised learning, reinforcement learning, transfer learning, and many other machine learning sub-disciplines, but is key to building the kinds of systems we want, which are not systems that are specialized to one or a handful of tasks, but rather intelligent systems or agents that can accomplish a vast array of tasks.

On Opaque Artificial Intelligence
Neural networks are tricky to understand, and developing techniques to understand them better is an incredibly important research area. There are a number of very promising directions, and we’ve seen a lot of progress (especially optimization-based feature visualization).
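
For context, optimization-based feature visualization treats the input image itself as the thing to optimize: start from noise and gradient-ascend until a chosen unit fires strongly. The sketch below uses PyTorch purely for brevity, and a tiny random-weight model as a stand-in; in practice you would point it at a trained network.

    import torch

    # Sketch of optimization-based feature visualization: gradient-ascend
    # an input image so that one chosen unit activates as strongly as
    # possible. The random-weight model is a placeholder for a trained net.
    torch.manual_seed(0)
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
    )
    model.eval()

    image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([image], lr=0.05)
    unit = 3  # the feature we want to "see"

    for _ in range(200):
        optimizer.zero_grad()
        activation = model(image)[0, unit]
        (-activation).backward()   # maximize by minimizing the negative
        optimizer.step()

    # `image` now contains a pattern that strongly excites that unit.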

On the TPU (Tensor Processing Unit)
The TPU is designed to do the kinds of computations performed in deep neural nets. It’s not so specific that it only runs one specific model, but rather is well tuned for the kinds of dense numeric operations found in neural nets, like matrix multiplies and non-linear activation functions. We agree that fabricating a chip for a particular model would probably be overly specific, but that’s not what a TPU is.
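
To make that concrete, the dense numeric work in question is essentially what a single fully connected layer does: a large matrix multiply followed by a non-linearity. The NumPy snippet below is just an illustration of that workload, not TPU-specific code.

    import numpy as np

    # The kind of dense numeric operation a TPU is tuned for boils down
    # to layers like this: a big matrix multiply plus an activation.
    # Shapes and names here are illustrative.
    rng = np.random.default_rng(0)
    batch = rng.normal(size=(128, 512))      # a batch of input activations
    weights = rng.normal(size=(512, 1024))   # one dense layer's weight matrix
    bias = rng.normal(size=(1024,))

    hidden = np.maximum(batch @ weights + bias, 0.0)  # matmul + ReLU
    print(hidden.shape)  # (128, 1024)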


What We’re Reading 📚


Try This At Home 🛠


Emergent Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Subscribe here.