Look at My Face


Today’s lesson: Emotional Leakage and Facial Recognition Software in Market Research


There are lots of ways that facial recognition technology can be put to use, but one industry that has already demonstrated a lot of interest is the marketing business. Advertisers and market researchers have dabbled in facial recognition both for creative executions and for understanding consumer response to messaging, advertising, and other branded experiences.

The use of facial recognition technology for understanding consumer response can vary in implementation - I’ll talk about that in a minute. But the theoretical basis is the notion that humans “leak” information about their emotional states through autonomic physical responses, and that one of the ways this information leaks is through our faces.

The history of this idea goes back at least to Charles Darwin and William James, and it was popularized in recent years by the TV show Lie to Me, based on the life and work of Paul Ekman.

Tim Roth as Paul Ekman

Ekman’s work is focused on the idea that there are seven basic, universal human emotions: happiness, surprise, fear, anger, disgust, sadness, and contempt. Not only are these emotions universal, but the way they are expressed - or leaked - is also universal. Traditionally, there have been two ways of testing this hypothesis: asking people to make a face that expresses a described emotion, and asking people to identify the emotion in pictures or videos of other people making faces. The follow-on idea - that if emotions are universally expressed, they can be measured objectively - has been tested with a variety of methods: observation, trained observation (the so-called Facial Action Coding System), fast photography, video, electromyography (EMG), and most recently, facial recognition software.
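
To make the coding-system idea concrete, here’s a minimal sketch of how a FACS-style classifier might map observed facial “action units” (AUs) to Ekman’s seven categories. The AU combinations are commonly cited textbook prototypes, and the matching rule is my own invention for illustration - real coding (and real faces) are far messier:

```python
# Illustrative sketch of FACS-style coding: map observed facial "action
# units" (AUs) to Ekman's seven categories by prototype overlap.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},            # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
    "contempt":  {12, 14},           # unilateral in practice; simplified here
}

def classify(observed: set) -> str:
    """Return the emotion whose AU prototype best overlaps the observed AUs."""
    def score(emotion):
        proto = EMOTION_PROTOTYPES[emotion]
        return len(proto & observed) / len(proto)
    best = max(EMOTION_PROTOTYPES, key=score)
    # Fall back to "neutral" when nothing matches well - which, per the
    # research discussed below, is where a lot of real faces end up.
    return best if score(best) > 0.5 else "neutral"

print(classify({6, 12}))     # -> happiness
print(classify({4, 5, 7}))   # -> anger
print(classify({26}))        # -> neutral (too ambiguous to call)
```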

This is an extremely well studied area - lots of testing, lots of experiments, lots of published results, and of course, even a television show starring Tim Roth. Billions of dollars have been spent on training law enforcement to detect emotions through facial expressions, and on training management teams to use facial expressions to determine the emotional compatibility of members of a team. In one telling, Saddam Hussein’s half-brother’s belief that he could read the emotions of American negotiators is what led to the first Gulf War. He thought they wouldn’t invade; he was wrong.

But what does the science really say?

Very little, in point of fact. As with most measurements of autonomic response used as indices of emotional state, there are weak (but observable!) correlations between facial expressions and generally positive or generally negative emotions. Humans seem to almost universally agree that a smile is a smile.

But as a measure of discrete emotions, these tests tell us nothing. For one, disgust seems to have the same autonomic responses as ‘neutral.’ For another, fear and anger can be switched around depending on the culture. Surprise can be either positive or negative, so it’s likewise hard to identify this emotion based on facial expressions or other autonomic responses alone.

Even in conjunction with other autonomic responses - heart rate, skin conductivity, fingertip temperature, EEGs - facial expressions can confirm a generally positive or generally negative response, but they tell us almost nothing confirmatory about discrete emotions, and can’t distinguish at all between different kinds of pleasant emotions.
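
If only valence survives scrutiny, the honest move is to collapse any discrete-emotion output into the coarser scale the evidence actually supports. A minimal sketch of that collapse - the labels and confidence scores here are invented for illustration, not any vendor’s real output:

```python
# Collapse discrete emotion labels into the coarser valence scale the
# evidence supports. Note the ambiguities flagged above: surprise can go
# either way, and "disgust" is hard to tell apart from neutral.

VALENCE = {
    "happiness": "positive",
    "sadness":   "negative",
    "anger":     "negative",
    "fear":      "negative",
    "contempt":  "negative",
    "surprise":  "ambiguous",   # can be positive or negative in context
    "disgust":   "ambiguous",   # often indistinguishable from neutral
}

def collapse(scores: dict) -> str:
    """Reduce a discrete-emotion score vector to a defensible valence call."""
    top = max(scores, key=scores.get)
    return VALENCE.get(top, "neutral")

# Hypothetical classifier output for one frame of video:
print(collapse({"happiness": 0.61, "surprise": 0.22, "disgust": 0.17}))  # positive
print(collapse({"disgust": 0.41, "sadness": 0.30, "fear": 0.29}))        # ambiguous
```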

And yet, companies like Kairos and Affectiva will tell you that they have data sets sufficiently large to provide high-confidence diagnostics of emotional responses to stimuli based on facial recognition.

Here’s the central question: If you’ll concede that science simply does not make the strong claims that you’ve been led to believe it does, do you think this is a problem located in the size of the corpus of training data? Or do you think it’s a problem located in the (quite ancient!) model of human emotions?

Based on what services like Affectiva say in their marketing, it sounds like they believe they’ve overcome the problem of precision and predictability with larger and larger, and more diverse, data sets. They are trying to ‘control’ for cultural differences by tagging faces with country of origin, for example.
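
Mechanically, that kind of ‘control’ is just stratification. Here’s a minimal sketch of the idea - normalizing an expression score within its country-of-origin group so each face is compared to a local baseline rather than a global one. The field names and numbers are invented; this is not Affectiva’s actual pipeline:

```python
# Sketch of "controlling" for culture by stratifying on a country tag:
# score each face against its own country-group baseline.
from collections import defaultdict
from statistics import mean, stdev

samples = [
    {"country": "US", "smile": 0.82},
    {"country": "US", "smile": 0.74},
    {"country": "JP", "smile": 0.41},
    {"country": "JP", "smile": 0.37},
]

by_country = defaultdict(list)
for s in samples:
    by_country[s["country"]].append(s["smile"])

for s in samples:
    group = by_country[s["country"]]
    mu, sd = mean(group), stdev(group)
    s["z"] = round((s["smile"] - mu) / sd, 2) if sd else 0.0

print(samples)  # each smile score is now relative to its country's baseline
```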

But the problem is not the amount of data. The problem is the underlying model. People simply do not have standard physiological responses to stimuli, even if they have common cultural understandings of the meaning of emotions. We know what it is to be sad, to be angry, to be frightened. We may even understand the physical metaphors (stomach in a knot, blood boiling, chill running down your spine) for emotion without actually experiencing those physiological symptoms.

To some extent, this is because we teach children about emotions - we put up posters in classrooms with facial expressions labeled with emotions; we instruct kids to ‘use their words’ and try to teach them to name their feelings as a way of both expressing them and coping with them when they’re overwhelming. We are literally taught how to think about, and how to talk about, how we feel.

The Inside Out characters jump for joy

But not everyone cries when they are sad, or laughs when they are happy, or screams when they are afraid. Not everyone’s blood pressure goes up or down, nor do their body temperatures fluctuate. And as the science keeps showing us, not everyone leaks the same emotional data from their faces.

So why do marketers in particular want to use facial recognition software in market research? Why would this be useful?

They don’t trust you.

One company that has adapted the emotional leakage idea for its market research purposes is System1 - formerly BrainJuicer. For one thing, they rely on Ekman’s research and his Facial Action Coding System, built around the seven ‘universal’ emotions. And they rely on what is known as the affect heuristic - in which people are said to make decisions based on their emotions. So if we know what their emotions are, we should be able to predict what kind of decision they will make.

So how does System1 do this? Interestingly enough, they don’t use facial recognition software; they use emojis representing the seven emotions. They ask people to select the emotion that best fits the way they’re feeling while looking at a given stimulus, or in response to a given prompt or question. Then respondents rate the intensity of that feeling on a scale. And then they answer an open-ended question explaining their feelings. The steps are: multiple choice, then Likert scale, then open-ended (aka qualitative) data.
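
As a data structure, the whole instrument fits in a few lines. A minimal sketch of what one response record might look like - the class, field names, and scale range are my assumptions, not System1’s actual schema:

```python
# Sketch of a System1-style response record: emoji multiple choice,
# then a Likert intensity rating, then an open-ended explanation.
# All three fields are self-reported; nothing here is physiological.
from dataclasses import dataclass

EMOTIONS = ("happiness", "surprise", "fear", "anger",
            "disgust", "sadness", "contempt")

@dataclass
class EmotionResponse:
    emotion: str       # step 1: pick one of the seven emojis
    intensity: int     # step 2: Likert scale, e.g. 1 (faint) to 5 (intense)
    explanation: str   # step 3: open-ended, qualitative

    def __post_init__(self):
        if self.emotion not in EMOTIONS:
            raise ValueError(f"unknown emotion: {self.emotion!r}")
        if not 1 <= self.intensity <= 5:
            raise ValueError("intensity must be on the 1-5 scale")

r = EmotionResponse("happiness", 4, "The puppy reminded me of my dog.")
print(r)
```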

I don’t want to tell other people how to do their jobs, but they probably could have skipped the first two steps and just asked people to explain how the question or prompt or stimulus made them feel. All of the responses are self-reported. None of them are “measurable” in a physiological sense - and System1 isn’t really trying to measure them that way. They are simply using the conceptual framework of emotional leakage as a way to get people to express their own emotional experiences.

For some reason, marketers believe that people are not able to talk about their feelings accurately. But this makes little sense when you think about all the work Sesame Street, your kindergarten teacher, and your mom did to teach you how to identify, name, and talk about your feelings - and even, to some degree, what the appropriate affect for those feelings should be.

The lie detector muppet says that was a lie.

I’ll continue to write about this (because I’m absolutely obsessed with it), but there is a very strange and pernicious belief among marketers and product designers that people reflexively lie, can’t articulate their emotions, and don’t know why they do the things they do. So they look for ways to detect what people think and feel without having to actually talk to them. They’ll do anything to ‘quantify’ and ‘measure’ human emotions as a driver of behavior and choice - anything except simply ask some humans.

Next time I’ll tell you all about how rarely people actually lie, and we’ll also eventually get to why people are sometimes great at explaining their choices and behaviors and other times terrible at it. But for now, know this: if you were hoping that facial recognition software, thermometers, or EMGs were finally going to tell you what people really feel when they see your latest ad for toilet paper, I’m sorry to disappoint you, but the best you’ll get is “had pleasant feelings” or “didn’t have pleasant feelings” or “neutral” (which could also be disgust).

Save your money.

Here are a few links we think you might like:

  • Your Face Is Not Your Own by Kashmir Hill in the New York Times, about Clearview AI and how it skirted the law - and definitely, you know, ethics - to scrape the internet for people’s faces.
  • The Shoddy Science Behind Emotional Recognition Tech by Dave Gershgorn on Medium.
  • And in case you sometimes forget how a lot of ‘data-driven’ study of human behavior seems almost inexorably to tilt toward eugenics, this is a helluva piece by Frank Pasquale about the history of, well, exactly that.

As always, we love to hear your feedback, thoughts, and questions, so hit us up.