
Image by Sue Winston, from Unsplash
UK Court Warns Lawyers: Fake AI Citations May Lead To Criminal Charges
The High Court of England and Wales has issued a stern warning to lawyers: those who submit false legal material generated by artificial intelligence tools such as ChatGPT could face criminal penalties.
In a rush? Here are the quick facts:
- UK court warns lawyers of criminal charges over fake AI-generated citations.
- AI-created case references used in £89 million lawsuit lacked factual basis.
- Lawyer self-reported after submitting 18 false cases mixed with genuine citations.
The warning follows two recent legal proceedings. In the first, a man pursuing an £89 million lawsuit against two banks submitted legal citations generated by artificial intelligence, as reported by The New York Times.
Of the 45 case references presented, 18 had no factual basis at all. The Guardian reported that while some of the citations were genuine, neither the real nor the fabricated references actually supported the case.
The attorney accepted responsibility for failing to check the citations and reported himself to the regulatory body.
In the second case, Haringey Law Centre sued the London borough council over a housing dispute. Its lawyer presented five entirely fabricated prior cases to the court, which held her responsible for incurring unnecessary legal costs, according to The Guardian.
Though she denied using AI directly, she admitted she may have “carried out searches on Google or Safari” and unknowingly relied on AI-generated summaries, as reported by The Times.
Judge Victoria Sharp, president of the King’s Bench Division, said, “There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” as reported by The Times. She warned that lawyers risk criminal charges or being barred from practicing law.
The judgment added that AI tools “can produce apparently coherent and plausible responses… [but] may make confident assertions that are simply untrue,” as reported by The Times. Ian Jeffery from the Law Society backed the ruling, saying it “lays bare the dangers of using AI in legal work,” as reported by The Guardian.
The court called on legal leaders to urgently train staff in the responsible use of AI, which is known to “hallucinate” information.
Unchecked AI use has already caused similar problems in the United States, Australia, and Europe, generating widespread concern among legal professionals.
In the U.S., a judge fined two law firms a combined $31,000 for submitting court briefs that contained AI-fabricated legal citations. The filing, part of a case involving State Farm, included false judicial references that attorneys at both firms had failed to verify.
Judge Michael Wilner criticized the lawyers for failing to verify the AI-generated content, noting that the fabricated material nearly found its way into a court order.