Introduction to Emotion Recognition

[Image source: Frontiers in Psychology]

You expect employees to have high levels of emotional intelligence when interacting with customers. Now, thanks to advances in Deep Learning, you’ll soon expect your software to do the same.

Research has shown that over 90% of our communication can be non-verbal, but technology has struggled to keep up, and traditional code is generally bad at understanding our intonations and intentions. Emotion recognition – also called Affective Computing – is becoming accessible to more types of developers, though. This post will walk through the ins and outs of determining emotion from data, and a few ways you can get emotion recognition up and running yourself.

Use Cases: TSA Screening, Audience Engagement, and More

[Image source: 21st Century Wire]

Understanding contextual emotion has widespread consequences for society and business. In the public sphere, governmental organizations could make good use of the ability to detect emotions like guilt, fear, and uncertainty. It’s not hard to imagine the TSA auto-scanning airline passengers for signs of terrorism, and in the process making the world a safer place.

Companies have also been taking advantage of emotion recognition to drive business outcomes. For the upcoming release of Toy Story 5, Disney plans to use facial recognition to judge the emotional responses of the audience. Apple even released a new feature on the iPhone X called Animoji, where a computer-simulated emoji mimics your facial expressions. It’s not so far off to assume they’ll use those capabilities in other applications soon.

This is all actionable information that organizations and businesses can use to understand their customers and create products that people like. But it’s not exactly a piece of cake to get a product like this working in practice. There are two major issues that have held back meaningful progress in Affective Computing: the training / labeling problem, and the feature engineering problem.

The Training and Labeling Problem

As with any Machine Learning problem, your results are only as good as your data – garbage in means garbage out. Affective computing has a data problem, but it runs deeper than just lacking labeled training data – it’s that we’re not quite sure how to label it in the first place.

Creating an algorithm means we need to understand our inputs and outputs – so what exactly are the human emotions? There are two core approaches that inform how solutions can be designed: a categorical model, which treats emotion as a small set of discrete states (like Ekman’s six basic emotions), and a dimensional model, which places emotion on continuous scales (for example, a valence dimension running from pleasant to unpleasant).

Which model of human emotions we accept and work with has important consequences for modeling them with Machine Learning. A categorical model of human emotion would likely lead to creating a classifier, where text or an image would be labeled as happy, sad, angry, or something else. But a dimensional model of emotions is slightly more complex, and our output would need to be on a sliding scale (perhaps a regression problem).
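To make the distinction concrete, here is a minimal sketch (scikit-learn, with made-up placeholder features and labels – purely illustrative, not a recommended pipeline) of how the choice of model changes the learning problem: a classifier over discrete emotion labels versus a regressor over a continuous scale.

```python
# A minimal sketch (with made-up placeholder data) of how the two models of
# emotion change the learning problem: discrete labels vs. continuous scores.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
X = rng.random((100, 10))                     # placeholder feature vectors

# Categorical model: a classifier over a fixed set of emotion labels.
labels = rng.choice(["happy", "sad", "angry"], size=100)
classifier = LogisticRegression(max_iter=1000).fit(X, labels)

# Dimensional model: a regressor over continuous scores (e.g., a
# pleasant-to-unpleasant valence scale), so the output is a sliding scale.
valence = rng.random(100)                     # placeholder scores in [0, 1]
regressor = Ridge().fit(X, valence)

print(classifier.predict(X[:1]), regressor.predict(X[:1]))
```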

[Image source: Maria K. Almoite]

But even once we pick a model to base emotions on, it’s pretty difficult to get your hands on a useful training set. Only two large-scale datasets are usable for modeling, and the labeling on both follows the categorical emotion philosophy and uses the FACS coding system.

In general, unlike many other disciplines applying Machine Learning, much of the work in Affective Computing is still focused on understanding the field itself. For example, the research project EmotiNet is a “knowledge base” for emotion recognition in text. Much of the fundamental groundwork in understanding human emotions and codifying them has yet to be done.

The Feature Engineering Problem

Even once we get over the hurdle of choosing a framework for understanding emotion and acquiring well-labeled training data, there’s still another issue before diving into algorithms: nobody is quite sure what the features should be.

In Machine Learning, we use a dataset as an input to predict and create some sort of output. The dataset has features: think of these as the columns in a spreadsheet. For a normal, simple dataset, features might be “inches of rain today” or “number of engagements for a customer.” But when we’re dealing with Affective Computing, there are only three possible inputs – text, speech, and image/video – and none of them fits that traditional tabular format.

Feature engineering, or deciding what the best possible inputs for our model are, is also a complex issue in Sentiment Analysis, the broad parent topic of emotion recognition. It might help, for example, to include the previous sentence along with the current sentence as an input. Adding that type of context to each data point is what feature engineering (or feature extraction) is all about. For more detail on feature engineering around sentiment analysis, check out our post about the topic here.
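As a toy illustration of that kind of context feature, the snippet below (the sentences are made up) simply pairs each sentence with the one that preceded it before any vectorization happens – a crude but common way to give a model a hint about sarcasm or tone.

```python
# Toy illustration: attach the previous sentence to each example so the model
# sees some context. "Great." reads very differently depending on what
# came before it.
sentences = [
    "The food took an hour to arrive.",
    "Great.",
    "At least the staff apologized.",
]

examples = [
    {"previous": sentences[i - 1] if i > 0 else "", "current": s}
    for i, s in enumerate(sentences)
]

print(examples[1])
# {'previous': 'The food took an hour to arrive.', 'current': 'Great.'}
```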

Text

For text, the typical data structure used is a Document-Term Matrix (DTM). The DTM is basically a matrix that records how many times each word appears in a “document,” which can be defined as anything we want. If we’re analyzing the emotional content of a sentence, the DTM might be some function of the occurrences of each word in the sentence.
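Here is a minimal sketch of building a DTM with scikit-learn’s CountVectorizer (the two “documents” are made-up sentences):

```python
# Build a small Document-Term Matrix: one row per document, one column per
# term, with each cell counting how often that term appears in that document.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "I absolutely loved this movie",
    "this movie made me so angry",
]

vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(documents)     # sparse docs-by-terms matrix

print(vectorizer.get_feature_names_out())     # the term columns
print(dtm.toarray())                          # raw counts per document
```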

The problem with this more traditional data structure is that it doesn’t align well with our goals – emotion isn’t carried by individual words alone. Context, tone, previous words and sentences, and punctuation all dictate how a comment is meant to be perceived. That’s why researchers have been working on new types of data structures that take these factors into account. You can find some interesting text datasets to work with here.

Speech

Speech is often just translated into text and then analyzed, but that’s a poor fit for emotion recognition. Non-verbal cues dominate how we intend our speech to be perceived, and we want those cues fed into our model as features. Researchers have been exploring the use of acoustic features instead of transcriptions for emotion recognition applications.
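For a rough sense of what acoustic features look like in practice, here is a sketch using the librosa library to pull MFCCs from an audio clip (the file path is a placeholder, and MFCCs are just one of many acoustic features used in this area):

```python
# Extract acoustic features (MFCCs) from speech instead of transcribing it,
# then average over time to get a fixed-length vector for a classifier.
# "clip.wav" is a placeholder path.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav")                        # waveform, sample rate
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # shape: (13, frames)

features = np.mean(mfccs, axis=1)                       # summarize over time
print(features.shape)                                   # (13,)
```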

Images and Video

Given that both of the major available datasets in Affective Computing are sequences of images and videos, a lot of the cutting-edge research is happening here. Some of the coolest real-time applications of this software will certainly involve the camera on your smartphone. Researchers have been working out how to featurize images and videos, and even getting creative with data from sites like Flickr and Twitter.

There are certainly interesting challenges to be solved in understanding how to properly engineer features from text, speech, and image/video – but the resurgence of Neural Networks over the past few years has relegated a lot of this conversation to the backlog.

Neural Nets for Emotion Recognition

[Image source: Semantic Scholar]

Neural Nets – the family of algorithms that powers Deep Learning – have become wildly popular over the past couple of years. Beyond their uncanny ability to beat the former state-of-the-art accuracy on many classification tasks, Neural Nets have a critical benefit that’s immensely helpful in emotion recognition: they do feature engineering automatically.

In a Neural Net, we can input the data we want to use (text, speech, etc.) and have it passed through the different “layers” of the net. Each layer transforms its inputs, trying to morph them into something useful and predictive for the model. For our purposes, that means we can feed in our data more or less as-is and tweak the model to output what we need.

Getting even more specific, there are special types of Neural Nets – called Convolutional Neural Networks (CNNs) – that are very effective when images are the input. These networks do further feature engineering on the input images and can help achieve greater accuracy in emotion recognition. One of the cutting-edge algorithms in Affective Computing was developed by two professors from The Open University of Israel and uses CNNs. For an implementation using the Algorithmia platform, check out this tutorial.
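As a rough sketch of what such a network can look like – not the architecture from the paper referenced above; the layer sizes, the 48x48 grayscale input, and the seven-class output are all illustrative assumptions – here is a small Keras CNN that classifies face crops into emotion categories:

```python
# A small, illustrative CNN for classifying 48x48 grayscale face crops into
# seven emotion categories (e.g., Ekman's six basic emotions plus neutral).
# This is a sketch, not the architecture from the paper referenced above.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```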

Unsupervised Emotion Recognition

While most of the work in Affective Computing has been done using labeled datasets and supervised learning, a few research efforts have centered around a less top-down approach – segmenting the data we have automatically and seeing what kinds of emotions result. These methods often also take context and sentence structure into account to reach tighter classifications.

Some also explicitly try to expand beyond the often confining limits of FACS, like this paper released at a conference in 2012. According to the abstract, “The proposed methodology does not depend on any existing manually crafted affect lexicons such as WordNet-Affect, thereby rendering our model flexible enough to classify sentences beyond Ekman’s model of six basic emotions.” Another approach using the dimensional model is proposed here.
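As a toy illustration of the bottom-up idea described above – grouping unlabeled text and then inspecting what the groups look like, rather than training against fixed labels – here is a tiny clustering sketch. The sentences and the choice of two clusters are made up, and real work in this area uses far richer representations and context than TF-IDF.

```python
# Cluster unlabeled sentences and inspect what groups emerge, instead of
# training against a fixed set of emotion labels. Purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sentences = [
    "I can't stop smiling today",
    "this is the best news ever",
    "I'm furious about the delay",
    "why does everything keep going wrong",
]

X = TfidfVectorizer().fit_transform(sentences)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for sentence, cluster in zip(sentences, clusters):
    print(cluster, sentence)
```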

Further Reading

Microsoft’s developer team on emotion detection and recognition using text – “Emotion Detection and Recognition from text is a recent field of research that is closely related to Sentiment Analysis. Sentiment Analysis aims to detect positive, neutral, or negative feelings from text, whereas Emotion Analysis aims to detect and recognize types of feelings through the expression of texts, such as anger, disgust, fear, happiness, sadness, and surprise.”

Sylvester Kaczmarek’s survey with a focus on Machine Learning – “In our daily life, we go through different situations and develop feeling about it. Emotion is a strong feeling about human’s situation or relation with others. These feelings and express Emotion is expressed as facial expression. The primary emotion levels are of six types namely; Love, Joy, Anger, Sadness, Fear, and Surprise. Human expresses emotion in different ways including facial expression, speech, gestures/actions and written text. This article mainly focuses on two expressions namely; written text and speech.”

The FACS and Paul Ekman – “The Facial Action Coding System (FACS) is a tool for measuring facial expressions. It is an anatomical system for describing all observable facial movement. It breaks down facial expressions into individual components of muscle movement. It was first published in 1978 by Ekman and Friesen, and has since undergone revision.”

Categorical vs. Dimensional approaches to understanding emotion – “Emotion researchers can be divided into two camps based on their answers to the following question: What is the best way to think about emotions? Some suggest emotions are best thought of as a small number of primary and distinct emotions (anger, joy, anxiety, sadness). Others suggest that emotions are best thought of as broad dimensions of experience (e.g., a dimension ranging from pleasant to unpleasant).”

Whether Categorical and Dimensional approaches can work together – “The results show that the happiness–fear continuum was divided into two clusters based on valence, even when using the dimensional strategy. Moreover, the faces were arrayed in order of the physical changes within each cluster.”

Feature Extraction and Selection for Emotion Recognition from EEG – “Advanced feature extraction techniques are found to have advantages over commonly used spectral power bands. Results also suggest preference to locations over parietal and centro-parietal lobes.”

Emotion Recognition from Text Using Semantic Labels and Separable Mixture Models – “This study presents a novel approach to automatic emotion recognition from text. According to the results of the experiments, given the domain corpus, the proposed approach is promising, and easily ported into other domains.”

Emotion Detection and Sentiment Analysis of Images – “If we search for a tag “love” on Flickr, we get a wide variety of images: roses, a mother holding her baby, images with hearts, etc. These images are very different from one another and yet depict the same emotion of “love” in them. In this project, we explore the possibility of using deep learning to predict the emotion depicted by an image. Our results look promising and indicate that neural nets are indeed capable of learning the emotion essayed by an image.”

Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns – “We present a novel method for classifying emotions from static facial images. Our approach leverages on the recent success of Convolutional Neural Networks (CNN) on face recognition problems. Our method was tested on the Emotion Recognition in the Wild Challenge (EmotiW 2015), Static Facial Expression Recognition sub-challenge (SFEW) and shown to provide a substantial, 15.36% improvement over baseline results (40% gain in performance).”