Why AI companies want to read your emotions

A humanoid robot and robot dogs are displayed at a China Mobile booth during the World Artificial Intelligence Conference in Shanghai, China July 26, 2025. REUTERS/Go Nakamura

You might have heard about “emotion-recognition AI” or “affective computing”, the idea that a system could analyse your face, voice, or body language and infer what you’re feeling.

It sounds futuristic, but in fact, major tech firms and startups are already investing heavily in this space. So the question is: Why? And should we be worried?

On the surface, emotion-AI promises to make machines seem more human, or at least respond in a more human-friendly way. Imagine a call centre where the system detects you're frustrated and routes you to a human rep faster, or an online ad that senses you're bored and serves something more engaging. That's the appeal.

A report described how rail operators in the UK piloted AI cameras that attempted to read passengers’ emotions to improve satisfaction and retail revenue. From a commercial perspective, if a system can tell how you feel, then in theory it can tailor a response, product, or ad more effectively, which means more engagement, more time spent, and more money.

Emotion-AI isn’t just for advertising. Firms are exploring uses in healthcare (detecting pain or stress), education (measuring student engagement), automotive (monitoring driver alertness), and human resources (trying to screen for emotional “fit”). A recent review lays out this broad horizon of possibilities.

For example, in customer-service settings, emotion-AI promises to reduce wait times and tailor responses. But the same technology raises major concerns: biased readings, privacy threats, and opaque decision-making.

Some emotion-AI systems have been found to read black faces as angrier than white faces, even when both are smiling. Accuracy also varies widely with lighting, camera angle, cultural expression, and individual differences, so the risk of error is high. That matters especially when the technology is used in hiring, law enforcement, or other critical decisions.

Then there's privacy. Collecting data on your emotional state, what you're feeling and how you react, is deeply personal. In many cases, people don't know it's being collected or how it will be used.

The promise of emotionally aware machines is exciting, from better customer service to smarter health tools. But the risks are real: misinterpretation, bias, surveillance and manipulation.

The implications for autonomy, consent and human dignity are huge. For now, what matters most isn't just whether a machine can read your emotions; it's who controls those readings, how they're used, and whether you have a say in it. As emotion-AI moves from sci-fi to reality, the need for open discussion, strong regulation and informed consent has never been more urgent.

This story was written and edited by the Global South World team; you can contact us here.
