
Judge Fines Lawyers For Using Fake AI-Generated Legal Research
A U.S. judge has sharply criticized two law firms for including fake legal information generated by AI in a court filing, calling it a major lapse in legal responsibility.
In a rush? Here are the quick facts:
- Judge fined two law firms $31,000 for fake AI-generated legal citations.
- False information was found in a court brief filed in a State Farm case.
- At least two cited legal cases were completely fabricated by AI.
Judge Michael Wilner, based in California, fined the firms $31,000 after discovering the brief was filled with “false, inaccurate, and misleading legal citations and quotations,” as first reported by WIRED.
“No reasonably competent attorney should out-source research and writing” to AI, Wilner wrote in his ruling, warning that he was close to including the fake cases in a judicial order.
“I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them – only to find that they didn’t exist,” he said, as reported by WIRED. “That’s scary,” he added.
The situation arose during a civil lawsuit against State Farm. One lawyer used AI tools to draft a legal outline. That document, containing fake research, was handed to the larger law firm K&L Gates, which added it to an official filing.
“No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief,” Wilner noted, as reported by WIRED.
After discovering that at least two of the cited cases were completely made up, Judge Wilner asked K&L Gates for clarification. When the firm submitted a revised version, it turned out to contain even more fake citations. The judge then demanded an explanation, and the firms responded with sworn statements admitting they had used AI tools, as reported by WIRED.
Wilner concluded: “The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong […] And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way,” as reported by WIRED.
This is not the first time AI has caused trouble in courtrooms. Two Wyoming lawyers recently admitted to citing fake AI-generated cases in a court filing for a lawsuit against Walmart, and a federal judge threatened to sanction them as well.
Cases like these show that AI “hallucinations,” or made-up information generated by AI tools, are becoming a growing concern in the legal system.