
Image by Denise Chan, from Unsplash
Man Poisoned Himself After Following ChatGPT’s Advice
A 60-year-old man gave himself a rare 19th-century psychiatric illness after following advice from ChatGPT
In a rush? Here are the quick facts:
- He replaced table salt with toxic sodium bromide for three months.
- Hospitalized with hallucinations, paranoia, and electrolyte imbalances due to poisoning.
- ChatGPT suggested bromide as a chloride substitute without health warnings.
A case study published in the Annals of Internal Medicine describes a man suffering from bromism, a condition caused by sodium bromide poisoning.
The condition apparently resulted from his attempt to replace table salt with a dangerous chemical that ChatGPT had suggested he use. The man reportedly arrived at the emergency room experiencing paranoia and auditory and visual hallucinations, and accused his neighbor of poisoning him.
Subsequent medical tests revealed abnormal chloride levels, along with other indicators confirming bromide poisoning. The man revealed that he had followed a restrictive diet and replaced all table salt with sodium bromide. He did so after asking ChatGPT how to eliminate chloride from his diet.
“For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT,” the study reads. The researchers explain that sodium bromide is typically used as an anticonvulsant for dogs or as a pool cleaner, and is toxic to humans in large amounts.
The man spent three weeks in the hospital, where his symptoms gradually improved with treatment.
The study highlights how AI tools may provide incomplete and dangerous guidance to users. When the researchers ran their own test and asked ChatGPT to suggest chloride alternatives, they also received sodium bromide as a response. The response carried no warning about its toxicity and no request for context about why the question was being asked.
The research warns that AI can also spread misinformation and lacks the critical judgment of a healthcare professional.
404Media notes that OpenAI recently announced improvements in ChatGPT 5 aimed at providing safer, more accurate health information. This case shows the importance of using AI cautiously and consulting qualified medical experts for health decisions.