Scientists Train AI To Think Like A Human Using Psychology Studies


Reading time: 2 min

The new AI system, Centaur, demonstrates human-like behavior across multiple experiments, producing new findings while sparking debate over what true understanding really means.

In a rush? Here are the quick facts:

  • Centaur learned from 160 studies and more than 10 million responses.
  • Centaur generalizes strategies like humans in new situations.
  • Some experts say it outperforms classical cognitive models.

An international team of scientists has developed a new AI system called Centaur, which performs like a human being in psychological tests.

In their study, the development team used Meta’s open-source LLaMA model to create Centaur, which processed results from 160 studies involving more than 60,000 volunteers. The goal? The researchers wanted to determine if AI systems could duplicate various types of thinking processes.

“Ultimately, we want to understand the human mind as a whole and see how these things are all connected,” said Marcel Binz, lead author of the study, in an interview with The New York Times.

Modern AI, like ChatGPT, can produce responses that seem human, but the system still makes basic mistakes. A chess bot can’t drive a car, and a chatbot might let pawns move sideways. General intelligence, which functions like human mental processes, remains out of reach. Centaur’s research approach brings scientists a step closer to that goal.

The AI was trained to copy human choices in tasks like steering a spaceship toward treasure or learning patterns in games. “We essentially taught it to mimic the choices that were made by the human participants,” Binz explained to The Times.

Centaur not only learned like a human, it generalized like one, too. When the spaceship task was swapped for a flying carpet version, Centaur reused the same successful strategy, just like people did.

Experts were impressed. “This is really the first model that can do all these types of tasks in a way that’s just like a human subject,” said Stanford’s Russ Poldrack.

Still, some critics say mimicking behavior isn’t the same as understanding the mind. “The goal is not prediction. The goal is understanding,” said Indiana University’s Gary Lupyan in an interview with The Times.

Even Binz agrees. “Centaur doesn’t really do that yet,” he said. But with five times more data coming, the team hopes Centaur will grow into something even more powerful, and possibly even help unlock the mysteries of the human mind.
