
Image by Dr. Frank Gaeth, from Wikimedia Commons
Swedish PM Criticized For Using ChatGPT In Government Decisions
Swedish Prime Minister Ulf Kristersson faced criticism after he admitted using ChatGPT to generate ideas for government decisions.
In a rush? Here are the quick facts:
- Swedish PM admits using ChatGPT for political decision-making.
- His spokesperson claims no sensitive data is shared with AI tools.
- Critics say AI use in government is dangerous and undemocratic.
Swedish Prime Minister Ulf Kristersson is facing growing public backlash after revealing that he uses ChatGPT and LeChat to assist with his official decision-making.
“I use it myself quite often. If for nothing else than for a second opinion,” Kristersson said, as reported by The Guardian. “What have others done? And should we think the complete opposite? Those types of questions.”
The admission drew sharp criticism, with Aftonbladet accusing him of falling for “the oligarchs’ AI psychosis,” as reported by The Guardian. Critics argue that relying on AI for political judgment is both reckless and undemocratic.
“We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT,” said Virginia Dignum, professor of responsible AI at Umeå University.
Kristersson’s spokesperson, Tom Samuelsson, downplayed the controversy, saying: “Naturally it is not security sensitive information that ends up there. It is used more as a ballpark,” as reported by The Guardian.
But tech experts say the risks go beyond data sensitivity. Karlstad University professor Simone Fischer-Hübner advises against using ChatGPT and similar tools for official work tasks, as noted by The Guardian.
AI researcher David Bau has warned that AI models can be manipulated, noting that researchers “showed a way for people to sneak their own hidden agendas into training data that would be very hard to detect.” Research shows a 95% success rate in misleading AI systems using memory injection or “Rules File Backdoor” attacks, raising fears about invisible interference in political decision-making.
Further risks come from AI’s potential to erode democracy. A recent study warns that AI systems in law enforcement concentrate power, reduce oversight, and may promote authoritarianism.