FDA’s Elsa Tool Faces Criticism For Hallucinating Scientific Data


The FDA’s new AI tool Elsa promises faster drug approvals, but medical experts warn that it fabricates research, creating new safety risks.

In a rush? Here are the quick facts:

  • The FDA launched an AI tool named Elsa to aid drug approvals.
  • Elsa sometimes invents studies or misstates existing research.
  • Staff say Elsa wastes time due to fact-checking and hallucinations.

In June, the FDA launched Elsa, its new artificial intelligence tool, to accelerate drug approval procedures. FDA Commissioner Dr. Marty Makary said the system was completed ahead of schedule and under budget.

However, FDA staff members recently told CNN that Elsa requires further development before it can be used in practical applications.

Elsa is supposed to help FDA scientists by summarizing data and streamlining the review process. However, CNN notes that current and former FDA employees report that Elsa hallucinates and generates false information. The tool reportedly fabricates new studies or distorts existing ones, which makes it risky to use in serious scientific work.

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one FDA employee to CNN. Another added, “AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have.”

CNN notes that currently, Elsa isn’t used for drug or device reviews because it can’t access important documents like company submissions. The FDA’s head of AI, Jeremy Walsh, acknowledged the issue: “Elsa is no different from lots of [large language models] and generative AI […] They could potentially hallucinate,” as reported by CNN.

FDA officials say Elsa is mostly being used for organizing tasks, like summarizing meeting notes. It has a simple interface that invites users to “Ask Elsa anything.”

Staff are not required to use the tool. “They don’t have to use Elsa if they don’t find it to have value,” said Makary to CNN.

Still, with no federal regulations in place for AI in medicine, experts warn it’s a risky path. “It’s really kind of the Wild West right now,” said Dr. Jonathan Chen of Stanford University to CNN.

Adoption of AI in science is growing rapidly, with over half of researchers saying AI already outperforms humans in tasks like summarizing and plagiarism checks.

However, significant challenges remain. A survey of 5,000 researchers found 81% worry about AI’s accuracy, bias, and privacy risks. Many see the lack of guidance and training as a major barrier to safe AI use.

Experts emphasize the urgent need for clearer AI ethics and education to avoid misuse. While AI shows promise, researchers agree that human oversight is still crucial to maintain scientific integrity.
