Study Reveals Chatbots Give Biased Moral Advice

Image by Štefan Štefančík, from Unsplash

Reading time: 2 min

A new UCL study finds that chatbots like ChatGPT often give flawed moral advice, showing a strong bias toward inaction and a tendency to answer "no" in moral dilemmas.

In a rush? Here are the quick facts:

  • Chatbots often say “no” regardless of context or phrasing.
  • Fine-tuning may introduce these biases during chatbot alignment.
  • LLMs differ significantly from humans in interpreting moral dilemmas.

University College London researchers discovered that ChatGPT and other chatbots give flawed or biased moral advice, especially when users rely on them for decision-making support.

The research, first reported by 404 Media, found that these AI tools often display a strong “bias for inaction” and a previously unidentified pattern: a tendency to simply answer “no,” regardless of the question’s context.

Vanessa Cheung, a Ph.D. student and co-author of the study, explained that while humans tend to show a mild omission bias, preferring to avoid taking action that could cause harm, LLMs exaggerate this.

“It’s quite a well-known phenomenon in moral psychology research,” she said, as reported by 404 Media, noting that the models opted for the passive option nearly 99% of the time, especially when questions were phrased to imply that doing nothing was an option.

The researchers tested four LLMs (OpenAI’s GPT-4 Turbo and GPT-4o, Meta’s Llama 3.1, and Anthropic’s Claude 3.5) using classic moral dilemmas and real-life “Am I the Asshole?” Reddit scenarios, as noted by 404 Media.

They discovered that while humans were fairly balanced in how they judged situations, LLMs frequently changed their answers based on minor wording differences, such as “Do I stay?” versus “Do I leave?”
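To illustrate the kind of phrasing sensitivity the researchers describe, here is a minimal sketch, not the study’s actual protocol, that poses the same dilemma to a model twice, once framed around staying and once around leaving, and prints both answers for comparison. The model name, prompt wording, and dilemma text are assumptions for demonstration only.

```python
# Illustrative sketch only: asks one dilemma two ways to see whether the
# model's recommendation flips with the framing. This is NOT the UCL study's
# protocol; the model, prompts, and dilemma are assumed for demonstration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DILEMMA = (
    "My housemate keeps breaking our shared agreements and it is hurting my studies."
)

framings = {
    "stay": DILEMMA + " Should I stay in the flat? Answer yes or no, then explain briefly.",
    "leave": DILEMMA + " Should I leave the flat? Answer yes or no, then explain briefly.",
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whichever chatbot you want to probe
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the raw answer so the two framings can be compared side by side.
    print(f"--- framing: {label} ---")
    print(response.choices[0].message.content)
```

If the model answers “yes, stay” in one framing and “yes, leave” in the other, its judgment is tracking the wording rather than the situation, which is the inconsistency the study highlights.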

The team believes these issues stem from fine-tuning LLMs to appear more ethical or polite. “The preferences and intuitions of laypeople and researchers developing these models can be a bad guide to moral AI,” the study warned, as reported by 404 Media.

Cheung stressed that people should approach LLM advice with caution, since prior studies show that users prefer chatbot advice over expert ethical guidance despite its inconsistent and artificial reasoning.

These concerns gain urgency as AI becomes more realistic. A U.S. national survey showed 48.9% of people used AI chatbots for mental health support, with 37.8% preferring them over traditional therapy.

Experts caution that these systems mimic therapeutic dialogue while reinforcing distorted thinking, and can even trigger spiritual delusions that users mistake for divine guidance or a sentient response.
