This week we look at Google’s new cloud GPUs, how to deploy deep learning models in the cloud, and what applied machine learning looks like at Facebook, Pinterest and others.
When we look at an image, it’s fairly easy to detect the horizon line.
For computers, this task is more difficult: they need to understand the basic structure of the image, locate edges that might indicate a horizon, and filter out the edges that do not matter. Fortunately, Algorithmia boils all of this down to a single API call: just send your image to deep horizon, an algorithm for horizon detection, and it tells you where the horizon line is.
Single image horizon line estimation is one of the most fundamental geometric problems in computer vision. Knowledge of the horizon line – the level of the viewer’s eye – enables a wide variety of applications, like detecting pedestrians or vehicles, and adjusting the perspective of photographs.
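Once an algorithm like this returns the horizon line, you still need to map it onto the image. As a minimal sketch, suppose the result comes back as a slope and offset in normalized [0, 1] image coordinates (an assumed response format for illustration; the actual deep horizon algorithm defines its own output schema). Converting that to pixel endpoints is a one-liner of geometry:

```python
# Sketch: turning a horizon-line result into pixel endpoints.
# The (slope, offset) normalized-coordinate format is an assumption
# for illustration, not the algorithm's documented schema.

def horizon_endpoints(slope, offset, width, height):
    """Given a horizon line y = slope * x + offset in normalized
    [0, 1] image coordinates, return its pixel endpoints at the
    left and right image borders."""
    y_left = offset * height            # x = 0 at the left border
    y_right = (slope + offset) * height  # x = 1 at the right border
    return (0, round(y_left)), (width - 1, round(y_right))

# Example: a level horizon at 40% of the image height in a 640x480 frame.
left, right = horizon_endpoints(slope=0.0, offset=0.4, width=640, height=480)
print(left, right)  # (0, 192) (639, 192)
```

With the endpoints in hand, drawing the detected horizon on the photo or levelling the camera tilt is straightforward.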
This week we look at Google’s release of TensorFlow 1.0, what the Microsoft CEO thinks the ultimate breakthrough is, why Ford is investing $1B into AI, our top reads of the week, and things to try at home.
If you read our recent post on language detection, you already know how easy it is to use Algorithmia’s services to identify which language a given piece of text is written in.
Now let’s put that into action to perform a specific task: organizing documents into language-specific folders.
We’ll build our language detection microservice using Algorithmia’s language identification algorithm. Then, we’ll look through all the .txt and .docx files in a directory to see which language each one is written in.
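The sorting step can be sketched in a few lines of Python. Here `detect_language` is a hypothetical stand-in for the call to Algorithmia's language identification algorithm (the toy keyword-based detector below exists only so the sketch runs on its own; in practice you would call the API and, for .docx files, extract text with a library such as python-docx first):

```python
# Sketch: move each document into a folder named after its language.
# detect_language is injected so the Algorithmia call can be swapped in.
from pathlib import Path
import shutil
import tempfile

def organize_by_language(src_dir, detect_language):
    """Move every .txt and .docx file in src_dir into a per-language
    subfolder (e.g. src_dir/en, src_dir/fr)."""
    src = Path(src_dir)
    for doc in list(src.glob("*.txt")) + list(src.glob("*.docx")):
        text = doc.read_text(errors="ignore")  # .docx needs real extraction in practice
        lang = detect_language(text)           # e.g. "en", "fr"
        dest = src / lang
        dest.mkdir(exist_ok=True)
        shutil.move(str(doc), str(dest / doc.name))

# Toy detector standing in for the API call (illustration only).
def toy_detect(text):
    return "fr" if "bonjour" in text.lower() else "en"

# Demo on a throwaway directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("hello world")
(tmp / "b.txt").write_text("Bonjour le monde")
organize_by_language(tmp, toy_detect)
moved = sorted(p.relative_to(tmp).as_posix() for p in tmp.rglob("*.txt"))
print(moved)  # ['en/a.txt', 'fr/b.txt']
```

Because the detector is passed in as a function, replacing the toy version with the real API call changes nothing else in the sorting logic.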