Source: Deep Ideas
If you remember anything from Calculus (not a trivial feat), it might have something to do with optimization. Finding the best numerical solution to a given problem is an important part of many branches of mathematics, and Machine Learning is no exception. Optimizers, combined with their cousin the Loss Function, are the key pieces that enable Machine Learning to work for your data.
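To make the optimizer/loss pairing concrete, here is a minimal sketch of plain gradient descent minimizing a toy quadratic loss. Everything here (the function names, the learning rate, the loss itself) is illustrative rather than taken from any particular library:

```python
# Minimal sketch of gradient descent: an optimizer repeatedly nudges
# a parameter in the direction that reduces the loss.

def loss(w):
    # A toy quadratic loss whose minimum sits at w = 3.
    return (w - 3.0) ** 2

def gradient(w):
    # Analytic derivative of the loss above.
    return 2.0 * (w - 3.0)

w = 0.0    # initial parameter guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    w -= lr * gradient(w)  # step downhill

print(round(w, 4))  # converges toward 3.0, the loss minimum
```

Real ML optimizers apply exactly this loop, just over millions of parameters and with gradients computed automatically rather than by hand.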
This post will walk you through the optimization process in Machine Learning, how loss functions fit into the equation (no pun intended), and some popular approaches. We’ll also include some resources for further reading and experimentation.
The loss function is the bread and butter of modern Machine Learning; it takes your algorithm from theoretical to practical and transforms neural networks from glorified matrix multiplication into Deep Learning.
This post will explain the role of loss functions and how they work, while surveying a few of the most popular of the past decade.
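As a hedged illustration of what a loss function actually computes, here are hand-rolled versions of two of the most common ones, mean squared error and binary cross-entropy, evaluated on tiny made-up data (the numbers are purely illustrative):

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: the average of the squared residuals.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred):
    # Cross-entropy punishes confident wrong predictions far more
    # harshly than MSE does, which is why it dominates classification.
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)

# Two predictions that are each slightly off their targets:
print(mse([1.0, 0.0], [0.9, 0.2]))                   # 0.025
print(binary_cross_entropy([1, 0], [0.9, 0.2]))
```

Swapping one of these functions for the other changes what "best" means to the optimizer, which is why choosing the loss is as important as choosing the model.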
Developers are the heart of Algorithmia’s marketplace: every day, you create and share amazing algorithms, build upon and remix each other’s work, and provide critical feedback that helps us improve as a service. Thanks to you, we have over 5000 algorithms and 60,000 individuals working together on the Algorithmia platform — making AI accessible to any developer, anywhere, anytime.
We owe you a huge debt, and we try to give back a little with programs such as our free-forever promise, which delivers 5k credits monthly to every user. But it’s also important to publicly recognize individuals who contribute to the ecosystem, so today we’re shining a spotlight on Daniël Heres.
Daniël’s algorithms are focused mostly on image and language processing, with a special focus on programming languages. He’s developed models for classifying which programming language any source code is written in, predicting the next line in a sequence of Python code, and printing the first 100 numbers of FizzBuzz using AI. We caught up with Daniël to learn a bit more about his background and interests.
Two of the most interesting technological shifts of our lifetime are happening at once: the rise of machine learning and the blockchain revolution.
Machine Learning (ML) systems have been able to surpass humans in many problem domains. These systems are now better at lip reading, speech recognition, location tagging, playing Go, image classification, and more.
With the invention of the blockchain and bitcoin, we’ve seen a wave of new cryptocurrencies and distributed applications built on these new blockchains.
The DanKu protocol sits at the intersection of the blockchain and Machine Learning. It facilitates the exchange of ML models on the Ethereum blockchain. We even published a whitepaper about it here. You can read more about the DanKu protocol in our previous blog post.
A microservice architecture can boost your team’s speed by changing how it designs and ships code, and developers and business leaders can get ahead by adopting it. The 1-2 punch of serverless and microservices combined is driving totally new types of applications and frameworks.
So what are microservices all about? The concept is based on a pretty simple idea: it sometimes makes sense to develop your applications as a lot of very small interlocking pieces instead of one giant whole. These components are developed and maintained separately from each other, so updates don’t require re-doing the entire codebase. Along with a few other design requirements, that’s the basic idea of Microservices.
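The "very small interlocking piece" idea can be sketched as a single-purpose HTTP service. This is a toy sketch using only Python's standard library; the service name, route, and response are all invented for illustration — the point is that each piece like this is built, deployed, and updated independently of the rest of the system:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetHandler(BaseHTTPRequestHandler):
    """A 'microservice' that does exactly one job: greeting callers."""

    def do_GET(self):
        body = json.dumps({"message": "hello from the greeter service"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Run the service in a background thread so we can call it in-process.
# In production each service would run in its own process or container.
server = HTTPServer(("localhost", 0), GreetHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://localhost:{server.server_port}/greet") as resp:
    reply = json.loads(resp.read())
print(reply["message"])  # hello from the greeter service
server.shutdown()
```

Because the service exposes only a network interface, other components can be rewritten, redeployed, or scaled without touching it — which is the core promise of the architecture.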