One of the most rewarding parts of working at Algorithmia is that we get to collaborate with amazing university researchers across the globe.
Last May, Richard Zhang, Phillip Isola, and Alexei A. Efros of the Berkeley Vision Lab at UC Berkeley published their work "Colorful Image Colorization." The paper describes a novel use of a convolutional neural network (a deep learning technique) for colorizing black-and-white pictures.
For an artist, inspiration can come from anywhere: a particular texture, a design, or even a color scheme.
Instead of spending hours painstakingly pulling hex codes from every important region of an image, what if you could extract those colors automatically?
Color scheme extraction finds the most relevant colors in an image in seconds.
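To illustrate the idea (not the Algorithmia algorithm itself, which we don't have the internals of), here is a minimal sketch that ranks the most frequent colors in a list of RGB pixels and reports them as hex codes. A production palette extractor would more likely cluster similar colors (e.g. with k-means) rather than count exact values; the pixel data below is synthetic.

```python
from collections import Counter

def dominant_colors(pixels, n=3):
    """Return the n most frequent (r, g, b) colors as hex strings."""
    counts = Counter(pixels)
    return ["#%02x%02x%02x" % rgb for rgb, _ in counts.most_common(n)]

# A tiny synthetic "image": mostly sky blue, some white, a little gray.
pixels = (
    [(135, 206, 235)] * 50   # sky blue
    + [(255, 255, 255)] * 30 # white
    + [(128, 128, 128)] * 5  # gray
)
palette = dominant_colors(pixels, n=2)  # → ["#87ceeb", "#ffffff"]
```

For a real photo you would first flatten its pixel grid into such a list (for example with Pillow's `Image.getdata()`), then cluster before counting so that near-identical shades merge into one swatch.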
Computer vision is behind some of the most interesting recent advances in technology, from algorithms that identify skin cancer as accurately as dermatologists to cars that drive themselves.
While computer vision algorithms have existed in various forms since the 1960s, only recently have they progressed to far more sophisticated levels. In particular, combining computer vision with machine learning has yielded some amazing results.
When we look at an image, it’s fairly easy to detect the horizon line.
For computers, this task is somewhat more difficult: they need to understand the basic structure of the image, locate edges that might indicate a horizon, and discard the edges that don't matter. Fortunately, Algorithmia boils this all down to a single API call: just send your image to deep horizon, an algorithm for horizon detection, and it tells you where the horizon line is.
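The deep horizon algorithm uses a trained model, but the edge-based intuition above can be sketched in a few lines: scan the image for the row where brightness changes most sharply from the row above, which in a simple sky-over-ground scene is a reasonable guess at the horizon. This toy example (synthetic grayscale data, not the actual algorithm) is only meant to make the reasoning concrete.

```python
def estimate_horizon_row(image):
    """image: list of rows, each a list of grayscale values (0-255).
    Return the index of the row with the largest average brightness
    change from the row above it — a crude edge-based horizon guess."""
    best_row, best_diff = 0, -1.0
    for y in range(1, len(image)):
        diff = sum(abs(a - b) for a, b in zip(image[y], image[y - 1]))
        diff /= len(image[y])
        if diff > best_diff:
            best_row, best_diff = y, diff
    return best_row

# Synthetic scene: bright sky (rows 0-5) over dark ground (rows 6-9).
img = [[220] * 8 for _ in range(6)] + [[60] * 8 for _ in range(4)]
horizon = estimate_horizon_row(img)  # → 6
```

Real images are far noisier than this, which is exactly why learned approaches like deep horizon outperform simple gradient heuristics.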
Single image horizon line estimation is one of the most fundamental geometric problems in computer vision. Knowledge of the horizon line – the level of the viewer’s eye – enables a wide variety of applications, like detecting pedestrians or vehicles, and adjusting the perspective of photographs.
This week we look at Google’s release of TensorFlow 1.0, what the Microsoft CEO thinks the ultimate breakthrough is, why Ford is investing $1B into AI, our top reads of the week, and things to try at home.
If you read our recent post on language detection, you already know how easy it is to use Algorithmia’s services to identify which language a given piece of text is written in.
Now let’s put that into action to perform a specific task: organizing documents into language-specific folders.
We’ll build our language detection microservice using Algorithmia’s language identification algorithm. Then, we’ll look through all the .txt and .docx files in a directory to see which language each one is written in.
This week we look at what it means to stay relevant in the AI age, how you can make your own image "enhance" button like on CSI: Miami, what we're reading this week, and things to try at home!
Quick: what language is this sentence written in?
“Hey bana bir sorununuz olur mu?”
What about this one?
“Halló ég er með vandamál getur þú hjálpað mér?”
Not easy, right?
This week we look at Google’s real-time parking predictions, the largest dataset of annotated YouTube videos, and how Facebook is improving image search using deep learning.
Plus! What we’re reading this week and things to try at home!