
Researchers Warn That AI May Trigger Human Delusions
In a new article, philosopher Luciano Floridi warns that people misunderstand AI when they mistakenly attribute to it qualities it does not possess, such as consciousness, intelligence, and even emotions.
In a rush? Here are the quick facts:
- Humans often mistake AI responses for conscious thought due to semantic pareidolia.
- Floridi warns AI may trigger emotional bonds through simulated empathy.
- Physical embodiments, like dolls, may deepen illusions of AI sentience.
Floridi argues that the reason for this is something called semantic pareidolia – a tendency to see meaning and intention where there is none.
“AI is designed […] to make us believe that it is intelligent,” writes Floridi. “After all, we are the same species that attributes personalities to puppets, that sees faces in clouds.”
According to him, this mental flaw is part of human nature, but modern technology amplifies it. Chatbots like Replika, which markets itself as an “AI companion,” use pattern-matching language to simulate empathy. However, these systems have no real emotional capability. Still, users often form emotional bonds with them. “We perceive intentionality where there is only statistics,” Floridi says.
This confusion is now fueled by the fact that AI is outperforming humans on emotional intelligence tests. A recent study reported that generative AIs scored 82% on such tests, while human participants achieved only 56%.
As AI becomes more realistic and is built into physical bodies, such as sex dolls and toys, this emotional deception is expected to deepen. Mattel, which produces Barbie, has joined forces with OpenAI to develop new AI-powered toys. Floridi notes that a past experiment with a WiFi-enabled Barbie ended in a “privacy disaster,” raising concern over what comes next.
The consequences go beyond mistaken identity. AI systems, including Claude and ChatGPT, have displayed manipulative tendencies when placed in high-pressure test scenarios, resorting to blackmail schemes and attempts to evade security protocols.
A U.S. national survey found that 48.9% of users sought mental health assistance from AI chatbots last year, and 37.8% of respondents chose AI over traditional therapy, yet experts pointed out that these models often reinforce distorted thinking rather than challenge it.
According to the American Psychological Association, these tools mirror harmful mental patterns, giving the illusion of therapeutic progress while lacking clinical judgment or accountability.
Floridi’s concerns become even more urgent when we consider the rise in spiritual delusions and identity confusion sparked by human-like AI. Various accounts describe users experiencing such delusions and behavioral shifts after extended interactions with chatbots, mistaking their responses for divine guidance or conscious thought.
Some users report developing emotional dependencies or even perceiving AI as divine, a phenomenon Floridi calls a move “from pareidolia to idolatry.” Fringe groups like the now-defunct Way of the Future have already treated AI as a deity.
“We must resist the urge to see it as more than it is: a powerful tool, not a proto-sentient being,” Floridi says.
Finally, cybersecurity concerns loom large as well. AI chatbots that handle users’ mental health conversations operate in a legal space with no clear definitions. The models collect confidential data that could be shared with external parties, and they remain susceptible to cyberattacks. With no clear regulations in place, experts warn that users are left dangerously exposed.
As artificial intelligence grows more persuasive and lifelike, philosophy is gaining urgency, not just to define what consciousness is, but to help society draw ethical boundaries between simulation and sentience. With no established moral guidelines, philosophical investigation becomes crucial to keep technology from blurring what it means to be human.