Opinion

These are the major emotion-recognition loopholes in the EU Artificial Intelligence Act

Whether in the form of an insidious tool of surveillance capitalism or a fear-inducing instrument of techno-authoritarianism, AI-powered emotion recognition systems are creeping into our lives with silent yet steady steps.

Combining affective computing with AI technology, these systems can detect, infer from, and interact with people’s emotions by analysing their facial expressions, physiological measurements, voice, gestures, or words.

You may now encounter them in many areas of everyday life, from healthcare and education to workplaces and law enforcement. 

Despite their wide availability, they are inherently controversial on several fronts. In addition to their questionable accuracy and susceptibility to bias, AI-powered emotion recognition systems pose serious ethical and legal challenges that go beyond general privacy concerns about identity disclosure and the validity of consent.

Mind-invading

Intruding into the personal space of the mind through emotions, these AI systems leave individuals vulnerable to manipulation of their thought processes and decision-making, and push the boundaries of our privacy and autonomy.

When it comes to the regulation of emotion recognition systems, the European Union’s landmark Artificial Intelligence Act is, to say the least, confused.

While the recitals acknowledge the technical limitations and potential human rights implications of emotion recognition technology, the act fails to provide a reliable regulatory framework to address those challenges, opening the door to the unfettered use of emotion recognition in the EU.

At first glance, the act appears to make an ambitious start with an outright ban on AI systems that infer emotions in the areas of workplace and education under Article 5(1)(f).

On closer inspection, however, the provision leaves out many high-impact areas, such as border management, healthcare, and public services, in some of which emotion recognition systems are already being used.

This, however, is only one of the many problematic aspects of the EU AI Act where emotion recognition systems are concerned. The problem runs so deep that the act cannot even agree on what emotion recognition actually means.

Article 3(39) of the AI Act defines an emotion recognition system as one that identifies or infers emotions or intentions on the basis of biometric data.

The scope of this definition is wider than that of the prohibition under Article 5(1)(f), which only covers AI systems that infer emotions.

The Commission’s guidelines on prohibited artificial intelligence practices confirm this distinction of system capabilities, stating that the prohibition under Article 5(1)(f) does not refer to “emotion recognition systems,” but only to AI systems that “infer emotions of a natural person.”

That’s where things get interesting.

After distinguishing between the identification and the inference of emotions, the same guidelines state that the definition of emotion recognition systems under Article 3(39) and the prohibition under Article 5(1)(f) should be read in relation to each other.

This implies that the prohibition should also apply to systems that only identify emotions.

'Pain' or 'fatigue' don't qualify

To add another layer of confusion, Recital 44 refers to AI systems prohibited under Article 5(1)(f) as those that ‘detect’ emotional states, whereas Recital 18 explicitly says that the mere detection of physical states like pain or fatigue does not qualify as emotion recognition at all.

This terminological inconsistency may have more severe consequences than initially appears if it is exploited to circumvent the regulatory measures laid down for emotion recognition systems. Because the line between identification, inference, and detection is not clear-cut, providers and deployers could claim that their AI system does not infer emotions but merely identifies them.

To complicate matters further, the Commission’s guidelines add a data-related limitation, suggesting that the prohibition under Article 5(1)(f) should only apply to systems that process biometric data.

This results in the exclusion of many emotion recognition systems from the prohibition, including those performing emotion recognition through text-based sentiment analysis extensively used in AI chatbots, mental health apps, customer service tools, and such.

Considering the growing mainstream use of AI chatbots as emotional support companions and the grave ramifications that can follow — from emotional overdependence to mental health problems and even suicide — this constitutes a huge regulatory gap whose repercussions can already be anticipated.

When not explicitly prohibited, emotion-recognition systems are considered high-risk under Article 6(2) and Annex III 1(c) of the AI Act.

But again, only those using biometric data fall within that category.

This creates a ‘blind spot’ in which some AI systems are regarded as neither prohibited nor high-risk despite posing a serious threat to fundamental rights and democracy, unless they qualify as high-risk or amount to a prohibited practice for another reason.

Otherwise, they would only be subject to the minimum requirements foreseen for all AI systems and, where applicable, to the measures prescribed for general-purpose AI models and/or the transparency obligations for certain AI systems under Article 50, none of which would be sufficient given their potentially drastic implications.

That said, even when an emotion-recognition system falls under Annex III, Article 6(3) contains a ‘loophole’ through which it is possible to avoid the high-risk classification and thereby the associated requirements.

Under Article 6(3), an AI system referred to in Annex III is not deemed high-risk “where it does not pose a significant risk of harm to the health, safety or fundamental rights.”

This exemption directly applies to those performing “narrow procedural tasks” and “preparatory tasks” for human decision-making.

Despite their strong influence on human decision-making — particularly given the prevalence of automation bias — emotion recognition systems rarely make decisions on their own. That makes it possible to label such systems as “preparatory” and neatly avoid the obligations stipulated for high-risk AI systems under the AI Act.

This tangled web of legal uncertainties is more than a technical flaw. It is deeply political, and it risks creating a regulatory environment in the EU where emotion recognition thrives without accountability, jeopardising the privacy, autonomy, freedom of expression, and even the democratic power and trust of European citizens.

In today’s world, where emotion recognition is increasingly blended into everyday life, immediate and proactive action is imperative to set out a clear and comprehensive regulatory path before the damage is too great to remedy.

Once AI-powered emotion recognition systems are widely embedded in classrooms, workplaces, and digital platforms, it will be too late to take regulatory action, with emotional surveillance being irreversibly normalised, and many already harmed in the process.

