
Image by Comuzi / © BBC / Better Images of AI / Mirror D / CC-BY 4.0
Companies think there’s a market for getting machines to work out your emotions.
They’re betting that recognising your facial expressions will give insights into your health and happiness, your likes and dislikes, your thoughts and intentions.
The promise of personalisation is their way in – and what could be more personal than your smile?
We’ve seen glasses that look inwards at you, using sensors to monitor your facial movements, with claims that this can discern your emotional state. There are voice interfaces that try to read your emotions as you speak to them and simulate empathy in their replies.
But this is an area rife with controversy, even claims of pseudoscience, not to mention fundamental flaws in current approaches which mean human emotions cannot be read accurately.
Take facial expression recognition – ‘FER’ for short. It can detect a furrowed brow or a scowl, but that’s not the same as detecting anger.
Sure, I might smile to express happiness, but also when I’m nervous. Someone from another culture might express joy differently to me. We express emotion in a variety of ways, as ample research has shown.
So who gets to decide what’s classified as happy or sad? And crucially, what harm might be caused if the system gets it wrong?
These questions are becoming increasingly urgent as FER makes inroads across key industries, from education to recruitment, health to media, and advertising to border security.
For instance, automated screening by police that classifies a person as angry or deceptive could encourage use of force or further detention. Screening that reads a job candidate as uninterested or inattentive could shut down employment chances. Classroom monitoring that identifies a child as lacking focus could stigmatise them for years to come.
And it’s not hard to see how the repurposing of such systems could have serious implications if placed in the hands of discriminatory regimes or turned against political adversaries.
Imagine being pulled from a security queue as you’re about to go on holiday because airport security’s FER tool decides you’re looking unnaturally ‘anxious’ and ‘deceitful’. This is not some distant future; the EU trialled a so-called “smart lie detection system” back in 2019, which its developers claimed could identify “biomarkers of deceit”.
There’s evidence FER has been tested on Uyghur people in Xinjiang, China, where police have sought to detect minute changes in facial expressions and skin pores as part of a growing surveillance infrastructure.
All emotion recognition brings serious risks of bias. Cultural differences affect how people display emotion, and so does neurodivergence, but existing models struggle with any deviation from a narrow set of norms, which ends up disproportionately impacting marginalised communities.
There’s also the privacy question.

Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI 2.0 / CC-BY 4.0
These systems need data in the form of images or videos, and that can come from surveillance cameras, cameras placed near advertising screens in stores, social media, and often our own personal devices.
They harvest data in ways that are often opaque to us, the people providing it.
This gets even more complicated as companies move to combine different types of data: text and emojis from social media posts, linked with eye movements and bodily gestures, the tone, pitch and pace of recorded voices, and even biometric data like heart rate.
The risk of such rich and personally identifiable data making its way into the wrong hands raises all sorts of safety questions.
Safeguards for society
So, what can be done to safeguard society if companies are pressing ahead despite these issues?
BRAID fellow Dr Benedetta Catanzariti says we need to make sure regulation addresses not just emotion recognition, but specific uses of facial expression recognition.
She studied these technologies in healthcare, where machine learning tools have been designed to predict, diagnose, and manage a range of mental health and neurodevelopmental conditions, prevent suicide, and assess pain – all based on changes in facial expression.
Dr Catanzariti concluded that both patients and clinicians must be able to interrogate the underlying assumptions, methods, and explanations of system aims and outputs.
She argues in her report that it is “essential that meaningful information about the decision-making process be available for both clinician and patient scrutiny, irrespective of data literacy levels and technical skills”.
How else can they question or contest automated decision-making in their lives?
We’ve seen how facial recognition technology has slipped into the mainstream in recent years, despite pushback from activists and civil liberties organisations.
Tactical countermeasures were devised to empower citizens, like wearing makeup to obscure certain features of your face or specialised glasses with infrared lights to confuse the cameras. But while this might work for protesters or shoppers in open public spaces, it doesn’t help a patient being evaluated in a clinic or a candidate interviewing for a job.
Ultimately, the power and responsibility to address the many pitfalls, ensure meaningful human agency, and thwart bad actors lies with the companies developing and deploying facial recognition and FER, the private and public services using them, and the institutions regulating them.
They must ask: is using FER necessary and proportionate? Is it reliable, accurate and fair?
It seems these systems currently raise more questions than they answer.
Want to know more? Read Dr Benedetta Catanzariti’s report investigating the Impact of Facial Expression Recognition in Healthcare.
Dr Bronwyn Jones
Translational Fellow, BRAID (Bridging Responsible AI Divides)
Research Lead, Responsible Innovation Centre for Public Media Futures, BBC R&D