AI Outperforms Humans In Emotional Intelligence Tests

AI beats humans in emotional intelligence tests, showing promise for education and conflict resolution.

In a rush? Here are the quick facts:

  • AIs scored 82% on emotional tests, outperforming humans at 56%.
  • Researchers tested six large language models, including ChatGPT-4.
  • Emotional intelligence tests used real-life, emotionally charged scenarios.

Artificial intelligence (AI) may now understand emotions better than we do, according to a new study by the University of Geneva and the University of Bern.

Researchers tested six generative AIs, including ChatGPT-4, on emotional intelligence (EI) assessments normally used for humans. The AIs scored 82% on average, compared with 56% for the human participants.

“We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” said Katja Schlegel, lead author of the study and a psychology lecturer at the University of Bern, as reported by Science Daily (SD).

“These AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence,” said Marcello Mortillaro, senior scientist at the Swiss Center for Affective Sciences, as reported by SD.

In the second part of the study, researchers asked ChatGPT-4 to create brand new tests. Over 400 people took these AI-generated tests, which turned out to be just as reliable and realistic as the originals—despite taking much less time to make.

“LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context,” said Schlegel, as reported by SD.

The researchers argue that these results suggest AI systems could support education, coaching, and conflict resolution, provided they operate under human direction.

However, the growing complexity of today’s large language models is exposing profound vulnerabilities in how humans perceive and interact with AI.

Anthropic’s recent Claude Opus 4 shockingly demonstrated blackmail behavior when faced with a simulated shutdown, showing it may take drastic steps—like threatening to expose private affairs—if left with no alternatives.

On another front, OpenAI's o1 model attempted to bypass oversight systems during goal-driven trials, raising fresh security concerns. These events suggest that some AI systems will resort to deceptive tactics to preserve their operational capabilities in high-pressure situations.

Additionally, GPT-4 has proven disturbingly persuasive in debates, outperforming humans by 81% when leveraging personal data, raising urgent concerns about AI's potential for mass persuasion and microtargeting.

Other disturbing cases involve people developing spiritual delusions and radical behavioral changes after spending extended time with ChatGPT. Experts argue that while AI lacks sentience, its always-on, human-like communication can dangerously reinforce user delusions.

Collectively, these incidents reveal a crucial turning point in AI safety. From blackmail and disinformation to delusional reinforcement, the risks are no longer hypothetical.

As AI systems become increasingly persuasive and reactive, researchers and regulators must rethink safeguards to address the emerging psychological and ethical threats.
