Anyone who is interested in deep learning has likely gotten their hands dirty at some point playing around with Tensorflow, Google’s open source deep learning framework. Tensorflow has a lot of benefits like wide-scale adoption, deployment on mobile, and support for distributed computing, but it also has a somewhat challenging learning curve, and is difficult to debug. It also doesn’t support variable input lengths and shapes due to its static graph architecture unless you use external packages. PyTorch is a new deep learning framework that solves a lot of those problems.
PyTorch is only in beta, but users are rapidly adopting this modular deep learning framework. PyTorch supports tensor computation and dynamic computation graphs that allow you to change how the network behaves on the fly, unlike the static graphs used in frameworks such as Tensorflow. PyTorch's modularity also makes it easier to debug and to inspect what is happening inside the network. For many, PyTorch is more intuitive to learn than Tensorflow.
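To make "dynamic computation graph" concrete, here is a minimal sketch: a forward pass whose structure depends on the input data itself, something a static graph cannot express without dedicated control-flow ops. The function and the data-dependent loop bound are invented for illustration, not taken from the talk.

```python
import torch

def dynamic_forward(x, w):
    # The number of multiply steps depends on the *values* in x, so the
    # graph autograd records is rebuilt fresh, with a different depth,
    # on every call -- this is what "dynamic" means in PyTorch.
    steps = int(x.sum().item()) % 3 + 1
    out = x
    for _ in range(steps):
        out = out * w
    return out.sum()

w = torch.tensor(2.0, requires_grad=True)
x = torch.tensor([1.0, 2.0])
loss = dynamic_forward(x, w)
loss.backward()  # autograd differentiates whatever graph this call built
print(w.grad)
```

Because the graph is defined by running ordinary Python, you can drop a breakpoint or a `print` anywhere in `dynamic_forward` and inspect live tensors, which is the debuggability advantage mentioned above.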
This talk will objectively look at PyTorch and why it might be the best fit for your deep learning use case. We'll also look at use cases that showcase why you might want to consider using Tensorflow instead.
User experience and customer support are integral to every company's success. But it's not easy to understand what users are thinking or how they are feeling, even when you read every single user message that comes in through feedback forms or customer support software. With Natural Language Processing and Machine Learning techniques, it becomes easier to spot trends in user sentiment, surface the main topics being discussed, and detect anomalies in user message data.
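As a hedged sketch of the kind of analysis described above, here is a toy lexicon-based sentiment scorer and a crude topic counter in pure Python. The word lists, messages, and helper names are all invented for illustration; a real pipeline would use a trained model or a library such as NLTK's VADER.

```python
from collections import Counter

# Illustrative word lists -- a real system would use a proper sentiment lexicon.
POSITIVE = {"love", "great", "easy", "fast", "helpful"}
NEGATIVE = {"hate", "slow", "bug", "broken", "confusing"}

def sentiment_score(message):
    """Return (#positive - #negative) words, normalized by message length."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def top_topics(messages, n=3):
    """Crude topic signal: most common non-trivial words across all messages."""
    stopwords = {"the", "a", "is", "i", "it", "and", "to", "this"}
    counts = Counter(
        w for m in messages for w in m.lower().split()
        if w not in stopwords and len(w) > 2
    )
    return [w for w, _ in counts.most_common(n)]

messages = [
    "I love the new dashboard",
    "Export is slow and broken",
    "Great support team, fast reply",
]
scores = [sentiment_score(m) for m in messages]
```

Even this naive approach separates the positive messages from the negative one; the techniques in the talk do the same thing with far more robustness.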
A couple of weeks ago, we gave a talk about investigating user experience with natural language analysis at Sentiment Symposium and thought we’d share the talk, along with the speaker notes for anyone who is interested.
We only understand a sliver of how the brain works, but we do know that it often learns through trial and error. We're rewarded when we do the right things and punished when we do the wrong ones; that's how we figure out how to live. Reinforcement Learning puts computational power behind that exact process and lets us model it with software.
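That trial-and-error loop can be sketched in a few lines with an epsilon-greedy agent on a two-armed bandit, one of the simplest Reinforcement Learning settings. The payout probabilities, exploration rate, and trial count below are illustrative choices, not anything prescribed by the post.

```python
import random

random.seed(0)

ARM_REWARD_PROB = [0.2, 0.8]   # arm 1 pays off far more often
estimates = [0.0, 0.0]         # the agent's running value estimate per arm
counts = [0, 0]
EPSILON = 0.1                  # fraction of trials spent exploring

for _ in range(2000):
    # Trial: explore a random arm occasionally, otherwise exploit the
    # arm that currently looks best.
    if random.random() < EPSILON:
        arm = random.randrange(2)
    else:
        arm = max(range(2), key=lambda a: estimates[a])
    # Error signal: reward or no reward from the environment.
    reward = 1.0 if random.random() < ARM_REWARD_PROB[arm] else 0.0
    counts[arm] += 1
    # Incremental average -- each reward (or punishment) nudges the estimate.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)
```

After enough trials the agent's estimates reflect the true payouts, and it spends almost all of its time on the better arm, exactly the reward-and-punishment learning described above.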
Our CEO Diego spoke last week at Collision down in New Orleans. The panel with NODE's Falon Fatemi, Portworx's Murli Thirumale and Tom Komkov covered topics ranging from the pressures inherent in growth financing to what the panelists would do differently over the past 12 months. Share and let us know what you think!
(Image source: Deep Ideas)
If you remember anything from Calculus (not a trivial feat), it might have something to do with optimization. Finding the best numerical solution to a given problem is an important part of many branches in mathematics, and Machine Learning is no exception. Optimizers, combined with their cousin the Loss Function, are the key pieces that enable Machine Learning to work for your data.
This post will walk you through the optimization process in Machine Learning, how loss functions fit into the equation (no pun intended), and some popular approaches. We’ll also include some resources for further reading and experimentation.
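Here is a minimal hand-rolled sketch of that optimization process: gradient descent minimizing a mean-squared-error loss to fit a line. The data, learning rate, and iteration count are illustrative assumptions chosen so the example converges quickly.

```python
# Noise-free data lying on y = 2x, so the "right answer" for w is 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0      # parameter to learn
LR = 0.05    # learning rate

def mse_loss(w):
    """The loss function: mean squared error of predictions w*x vs targets y."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def mse_grad(w):
    """Analytic gradient of the loss with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

# The optimizer: repeatedly step opposite the gradient of the loss.
for _ in range(200):
    w -= LR * mse_grad(w)

print(round(w, 3))  # converges to the true slope, 2.0
```

The loss function defines what "best" means, and the optimizer is simply the procedure that drives the loss downhill; every optimizer covered in the post (SGD, momentum, Adam, and friends) is a refinement of this same update loop.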