This week we check out Facebook’s pursuit to create computers with intelligence on par with humans, how Google’s opening access to its speech recognition software, Amazon’s Lex is now available to any developer building conversational bots, and more. Plus our favorite reads and some projects to try at home. Read More…
Algorithmia was delighted to speak at Seattle’s Building Intelligent Applications meetup last month. We provided attendees with an introductory view of machine learning, walked through a bit of sample code, touched on deep learning, and talked about various tools for training and deploying models.
For those who were able to attend, we wanted to send out a big “thank you!” for being a great audience. For those who weren’t able to make it, you can find our slides and notes below, and we hope to see you at the next meetup on Wednesday, April 26. Data Scientists Emre Ozdemir and Stephanie Peña will be presenting two Python-based recommender systems at Galvanize in Pioneer Square.
To come to Wednesday’s talk, RSVP via Eventbrite. To keep an eye out for future events, join the Building Intelligent Applications Meetup Group.
You may already know that Algorithmia hosts scalable deep learning models. If you are a developer, you’ve seen how easy it is to run over 3,000 microservices through any of our supported languages and frameworks.
But sometimes it’s nice just to play with a simple demo.
The Deep Fashion microservice is a deep convolutional neural network (CNN) for multi-category classification, trained with humans in the loop to recognize dozens of different articles of clothing. It can be used standalone to locate specific items in an image set, or combined with a nearest-neighbors service such as KNN or Annoy to recommend similar items to online shoppers. And since the service provides bounding box coordinates for each item within the image, it could even be used to censor or modify the images themselves.
To see it in action, just head over to the Deep Fashion Demo, click (or upload) an image, and watch as state-of-the-art deep learning models scan the image to identify clothing and fashion items.
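For a sense of what a programmatic call might look like, here is a minimal sketch that assembles a request against Algorithmia's REST endpoint. The algorithm version path, input field name, and response shape shown here are illustrative assumptions, not taken from this post; check the Deep Fashion algorithm page for the real ones.

```python
import json

API_BASE = "https://api.algorithmia.com/v1/algo"

def build_request(algo_path, image_url, api_key):
    """Assemble the URL, headers, and JSON body for an algorithm call.
    algo_path here is a hypothetical placeholder, not the real path."""
    url = f"{API_BASE}/{algo_path}"
    headers = {
        "Authorization": f"Simple {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"image": image_url})
    return url, headers, body

url, headers, body = build_request(
    "deeplearning/DeepFashion/1.0.0",   # hypothetical version path
    "https://example.com/outfit.jpg",
    "YOUR_API_KEY",
)
# A successful response would include, per detected item, a label and
# bounding box coordinates that downstream code can act on.
```

Sending `body` to `url` with those headers (e.g. via `requests.post`) would invoke the hosted model; the same pattern applies to any Algorithmia microservice.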
This week we check out how Google taught computers to draw, what AI innovation means to Canada, how the TPU stacks up against the GPU, and more. Plus our favorite reads and some projects to try at home. Read More…
In a recent Algorithm Spotlight post we introduced the Color Scheme Extraction algorithm, which retrieves the top 15 colors that best approximate the color scheme of an image.
The Color Scheme Extraction algorithm is a great way to retrieve the colors of an image quickly with a serverless API call in a few lines of code using the programming language of your choice.
This recipe allows you to pipe in several images, including montages of other images, in order to get a personalized color scheme. Read More…
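The underlying idea of reducing an image to a small palette can be sketched locally. This toy version simply quantizes pixels into coarse RGB buckets and counts them; the hosted algorithm is more sophisticated, so treat this only as an illustration of the concept:

```python
from collections import Counter

def top_colors(pixels, n=15, bucket=32):
    """Toy color-scheme extraction: snap each RGB pixel to the center
    of a coarse bucket, then return the n most common bucket centers."""
    quantized = [
        tuple((c // bucket) * bucket + bucket // 2 for c in px)
        for px in pixels
    ]
    return [color for color, _ in Counter(quantized).most_common(n)]

# Example: a tiny "image" dominated by red-ish pixels
pixels = [(250, 10, 10)] * 5 + [(10, 240, 10)] * 3 + [(10, 10, 250)]
print(top_colors(pixels, n=2))  # → [(240, 16, 16), (16, 240, 16)]
```

In practice you would call the hosted algorithm instead, since it handles image decoding and returns a perceptually sensible palette rather than raw bucket counts.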
This week we check out what it means to have machine learning systems with humans in the loop, how AI in the cloud is the next frontier for Amazon, Microsoft and Google, and our favorite reads and some projects to try at home.
TL;DR The most accurate machine learning systems to date are those that use a “human-in-the-loop” computing paradigm.
Though we have seen huge advances in the quality and accuracy of pure machine-driven systems, they still tend to fall short of acceptable accuracy rates. The combination of machine-driven classification enhanced by human correction, on the other hand, provides a clear path to acceptable accuracy. Below we describe a real-world use case of building and scaling these types of systems.
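The routing logic at the core of such a system can be sketched in a few lines (the threshold value and labels here are made up for illustration): predictions above a confidence threshold are accepted automatically, while the rest are queued for human review, whose corrections can later feed back into training data.

```python
def route_prediction(label, confidence, threshold=0.9):
    """Human-in-the-loop routing: accept confident machine labels,
    escalate uncertain ones to a reviewer queue."""
    if confidence >= threshold:
        return ("machine", label)
    return ("human_review", label)

def process_batch(predictions, threshold=0.9):
    """Split (label, confidence) pairs into auto-accepted results
    and a queue of items needing human correction."""
    accepted, review_queue = [], []
    for label, conf in predictions:
        route, _ = route_prediction(label, conf, threshold)
        (accepted if route == "machine" else review_queue).append((label, conf))
    return accepted, review_queue

preds = [("cat", 0.97), ("dog", 0.55), ("cat", 0.91)]
accepted, queue = process_batch(preds)
# accepted → [("cat", 0.97), ("cat", 0.91)]; queue → [("dog", 0.55)]
```

Tuning the threshold trades human workload against end-to-end accuracy: a higher threshold sends more items to reviewers but makes the accepted set more reliable.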
This week we check out the future of work, why Elon’s freaked out about AI, learn that we shouldn’t let AI cook for us, and how deep learning is learning to dance. Plus our favorite reads and some projects to try at home.
This week we check out Y Combinator’s new track for companies applying AI to factories, take a deep dive into the latest autonomous car news from Nvidia and Uber, and relay our favorite reads from the week.
One of the most rewarding parts of working at Algorithmia is that we get to collaborate with amazing university researchers across the globe.
Last May, Richard Zhang, Phillip Isola, and Alexei A. Efros of the vision lab at UC Berkeley published their work “Colorful Image Colorization.” This paper describes a novel use of a convolutional neural net (learn more about deep learning) for colorizing black and white pictures.