Video Processing of Traffic Videos
Last week we wrote an algorithm spotlight about a video processing algorithm called Video Metadata Extraction. This week, we are pairing it with a really cool microservice from an Algorithmia user, Car Make and Model Recognition, which detects a car’s make and model from an image. The algorithm is currently in beta, but we’ve tested it on a bunch of images and it works really well. We thought it would be fun to take a traffic video and find the unique makes of the cars that pass by a bus stop.
Before we released the Video Metadata Extraction algorithm, you would have had to use the Split Video By Frames algorithm and then pass each image frame’s path to the Car Make and Model algorithm.
Now, using the Video Metadata Extraction algorithm, you can do the same thing in just a few lines of code, as in the sketch below. Read More…
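For context, here is a minimal sketch of that pipeline using the Algorithmia Python client. The algorithm paths, input keys, and result shape below are assumptions for illustration only; the full post and each algorithm's documentation page have the exact schema.

```python
import Algorithmia

# Minimal sketch of the traffic-video pipeline described above.
# NOTE: the algorithm paths, input keys, and result shape are assumptions --
# check each algorithm's page on Algorithmia for the real schema.
client = Algorithmia.client('YOUR_API_KEY')

job = {
    "input_file": "data://.my/traffic/bus_stop.mp4",        # assumed: a hosted traffic video
    "algorithm": "algo://LgoBE/CarMakeandModelRecognition",  # assumed: algorithm to run on each frame
    "fps": 1                                                 # assumed: sample one frame per second
}

algo = client.algo('media/VideoMetadataExtraction')
algo.set_options(timeout=3000)  # video jobs can run long; raise the default call timeout

result = algo.pipe(job).result

# Assumed result shape: one entry per sampled frame, each carrying the
# make/model tags returned by the per-frame algorithm.
makes = {tag['make']
         for frame in result.get('frame_data', [])
         for tag in frame.get('data', [])
         if 'make' in tag}
print(sorted(makes))
```

The same pattern should work for any per-frame analysis: swap the `algorithm` URI in the input and post-process the returned frame metadata accordingly.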
Emergent // Future: Rethinking Human-Computer Interaction, Data Science Platforms for Enterprise, and more
Issue 51
This week we check out who Microsoft tapped to change the way humans interact with machines, why every data science team needs a centralized platform for its work, what we’re reading, and some things to try at home.
Emergent // Future – Amazon’s Echo Look, Machine Learning models at scale and more
Issue 50
This week we let Amazon “Look” at us using computer vision, check out why you want to deploy your machine learning model as a scalable, serverless microservice, and look at why machine intelligence is the key to building sustainable businesses. Plus what we’re reading this week and things to try at home.
Emergent // Future – Facebook’s AI Research, Google’s speech recognition and more
Issue 49
This week we check out Facebook’s pursuit to create computers with intelligence on par with humans, how Google is opening access to its speech recognition software, and how Amazon’s Lex is now available to any developer building conversational bots. Plus our favorite reads and some projects to try at home. Read More…
Emergent // Future – Google Doodles, Canada’s AI Hub, TPUs vs GPUs, and more
Issue 48
This week we check out how Google taught computers to draw, what AI innovation means to Canada, how the TPU stacks up against the GPU, and more. Plus our favorite reads and some projects to try at home. Read More…
Emergent // Future – Humans in the Loop, AI in the Cloud, and Projects at Home
Issue 47
This week we check out what it means to have machine learning systems with humans in the loop, how AI in the cloud is the next frontier for Amazon, Microsoft and Google, and our favorite reads and some projects to try at home.
Emergent // Future – Siri-ously, Neural Lace, AI #FAIL, and Deep Learning DDR
Issue 46
This week we check out the future of work, why Elon’s freaked out about AI, why we shouldn’t let AI cook for us, and how deep learning is learning to dance. Plus our favorite reads and some projects to try at home.
Emergent // Future – Building Robot Factories, Cars Driving Cars, and More!
Issue 45
This week we check out Y Combinator’s new track for companies applying AI to factories, take a deep dive into the latest autonomous car news from Nvidia and Uber, and relay our favorite reads from the week.
Emergent // Future – Google Buys Kaggle, A Chatbot for Refugees, and more!
Issue 44
This week we take a look at why Google bought Kaggle, a chatbot that gives free legal aid to refugees seeking asylum in the US and Canada, our favorite reads, and some things to try at home.
Emergent // Future – Cloud GPUs, Deploying Deep Learning Models, Applied ML, and more
Issue 43
This week we look at Google’s new cloud GPUs, how to deploy deep learning models in the cloud, and what applied machine learning looks like at Facebook, Pinterest and others.
