Algorithmia Blog

Emergent // Future: AI Chips, GPUs, and Machine Learning in the Cloud

Issue 32
This week is all about AI: Intel’s new AI processor, a partnership between OpenAI and Microsoft, and Google bringing GPUs to the cloud and investing in a Montreal AI research group.

Plus our favorite articles from the past week about AI and some of the projects making us smarter.



There’s AI Inside 💫

Intel unveiled a new AI processor designed to help with both training and execution of deep neural nets. They’re calling it Nervana, based on technology originally built by the startup of the same name, which Intel acquired earlier this year.

Intel plans to have prototypes ready by the middle of next year, with a market-ready chip by the end of 2017.

NVIDIA is the primary supplier of the GPUs that Microsoft, Google, Facebook, and Baidu typically use to train their deep neural nets. Intel is pushing into this potentially enormous market even as the hardware used to train and execute deep neural networks keeps changing.

For instance, Google has developed its own Tensor Processing Unit (TPU). Even Microsoft is evolving, moving to FPGAs, a type of reprogrammable chip.

Plus: Intel details its vision for AI chips, but the strategy is contingent on its deal with Movidius closing.


OpenAI + Microsoft 🚨

Microsoft is partnering with Elon Musk’s OpenAI to help protect humanity’s best interests – and to make Azure the primary cloud platform for their compute needs.

OpenAI will get access to the Azure N-series virtual machines and GPUs.

Microsoft says its goal is to democratize AI and make it accessible to everyone.

Hmm, that sounds a lot like a certain company building a marketplace to democratize access to algorithmic intelligence. 🤔

OpenAI won’t use Azure exclusively, but it is moving a majority of its compute to Microsoft’s cloud, due in part to the bet Microsoft has placed on the future of AI.


AI Groups at Google 🚀

Google announced a new Cloud Machine Learning group headed by Fei-Fei Li (Stanford Artificial Intelligence Lab) and Jia Li (former head of research at Snapchat).

Google is also bringing GPUs to its cloud: it will offer the AMD FirePro S9300 x2, as well as the NVIDIA Tesla P100 and K80, for deep learning, AI, and high-performance computing applications. This is AMD’s first move into deep learning.

Google has also invested $3.4M in the Montreal Institute for Learning Algorithms, an academic fund covering three years for seven faculty members across various Montreal academic institutions.

To top that off, they’re opening a brand new deep learning and AI research group in their Montreal offices.

PLUS: Lessons learned from deploying deep learning at scale


What We’re Reading 📚


Things Making Us Smarter 🛠


Watch: Tesla Autopilot


Emergent Future is a weekly, hand-curated dispatch exploring technology through the lens of artificial intelligence, data science, and the shape of things to come. Subscribe here.

Lovingly curated for you by Algorithmia
Follow @EmergentFuture for more frontier technology posts