Three Years Ago, This Was a Punchline. Now It's a Suspension.
If Sullivan & Cromwell can't catch AI hallucinations in a Chapter 15 filing, your firm needs a verification system that assumes you can't either.
The Verification Gap
TL;DR: A Georgia prosecutor just lost the right to practice before her state’s Supreme Court because she didn’t verify the AI-generated authorities in her brief. About 30 of them were either invented, attributed to the wrong cases, or quoted text that wasn’t there. The day before, the California State Bar wrapped public comment on a rule package that would make “independently review and verify” any AI output an enforceable duty under the Rules of Professional Conduct. Verification is moving from best practice to baseline competence, and most attorneys still don’t have a system for it.
A six-month suspension for not checking the cites
On May 5, 2026, the Supreme Court of Georgia suspended Clayton County Assistant District Attorney Deborah Leslie from practicing before the court for six months. To get reinstated, she has to complete 12 hours of continuing legal education focused on AI in legal practice.
The story is uncomfortable to read. Leslie was handling the appeal of Hannah Payne, who is serving a life sentence for the 2019 fatal shooting of Kenneth Herring. During oral argument, the justices flagged nine citations in the state’s brief that either didn’t exist or didn’t say what Leslie claimed they said. The court asked her to explain.
She did. And the explanation made it worse. She acknowledged that she had used AI software to draft her reply briefs and the trial court’s order denying Payne’s motion for a new trial. After further review, the count rose. Twelve more cases she had cited at the trial level were also AI-generated and unverified. Then she withdrew nine more from her appellate brief that misstated the holdings or didn’t correspond to real Georgia or federal precedent.
That’s roughly 30 problematic authorities across multiple filings on a murder appeal. And here’s the part worth slowing down for: the AI didn’t just make things up. It also cited real cases for propositions they didn’t support, and quoted language that wasn’t in the actual opinions. So “verification” isn’t just “does this case exist?” It’s also “does it say what I’m claiming it says?” and “is this quote actually in the opinion?” The Georgia opinion lays all three failure modes side by side, and the court treats all three as the same problem.
Justice Benjamin Land’s eight-page opinion contains the kind of language that ends careers: “These filings, as well as the trial court’s order, contain multiple case citations which either do not exist, or which exist but do not support the propositions of law for which they are cited. While we have no rule against the responsible use of artificial intelligence software by attorneys, citing cases that do not exist or do not support the proposition for which they are cited is a violation of this Court’s rules and falls far beneath the conduct we expect from Georgia lawyers.”
This is the first time I’m aware of that an AI-citation problem has cost an attorney the right to practice in a particular court. Not a fine. Not a public scolding. A suspension.
This didn’t come out of nowhere
Three years ago, Mata v. Avianca was the punchline. Two New York lawyers and their firm filed a brief with ChatGPT-generated cases, got hit with a $5,000 joint sanction, and the legal world spent six months making jokes about the lawyer who didn’t know ChatGPT could lie.
The arc since then has not been funny.
I’ve been keeping a running list of these cases since Mata happened. The list is never the same length the next time I look at it. The pattern in 2023 was “lawyer didn’t realize AI could fabricate.” The pattern now is “lawyer knew AI could fabricate and didn’t check anyway.” The second version is much harder to defend in front of a judge.
In the first quarter of 2026 alone, U.S. courts imposed at least $145,000 in sanctions for AI-fabricated citations, according to research compiled by ComplexDiscovery. Most of that came from two clusters: $109,700 in Oregon, where the state appellate court built a per-citation fee schedule (think $500 per fake case, $1,000 per fake quote), and a $30,000 fine from the Sixth Circuit on two attorneys in a consolidated appeal.
The Sixth Circuit case matters because federal appellate courts had mostly been quieter on this. Not anymore. The court chose the elevated penalty because, in its words, smaller fines hadn’t been working.
Then in April, Sullivan & Cromwell, OpenAI’s outside counsel and self-styled expert on the safe and ethical deployment of AI, sent an emergency letter to a federal bankruptcy judge in Manhattan asking him not to sanction them. Their motion in a Chapter 15 case had roughly 40 errors, including citations to cases that don’t exist. The firm has policies. Required training. Office Manual language saying “trust nothing and verify everything.” The reviewer just didn’t catch it.
If S&C can’t catch this, your firm needs to assume you can’t either.
Ten days later, a federal magistrate judge in North Carolina publicly reprimanded Rudy Renfer, a former Assistant U.S. Attorney with a 30-year career and 17 years at the Eastern District of North Carolina USAO, for filing a brief built on AI-generated fabrications. The judge wrote that Renfer “intentionally submitted a brief containing false materials to the court” and that “in this court, his name will be synonymous with a failure to uphold the basic duties of competence and candor expected of every attorney.” Renfer is now under investigation by the Department of Justice’s Office of Professional Responsibility. He had already left the U.S. Attorney’s office in March.
California’s about to make it enforceable
While courts have been escalating penalties case by case, California is moving the question into the rules themselves.
The State Bar’s Standing Committee on Professional Responsibility and Conduct (COPRAC) approved proposed amendments to six Rules of Professional Conduct on March 13, opened a 45-day public comment period, and closed comments on May 4. The rulemaking was triggered by an August 2025 letter from the California Supreme Court’s clerk to the state bar asking for AI-specific rules.
The Rule 1.1 amendment is the one that should get every managing partner’s attention. The new comment language says that when using technology, including AI, a lawyer “must independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.”
Read that one again. There’s no exception for routine work, no carve-out for low-stakes matters, no safe harbor for “I trusted the vendor.” I’ve sat with managing partners who assumed the rule would leave wiggle room for triage on what to verify and what to skim. There isn’t any.
Rule 1.6 is the other quiet bombshell. It expands the meaning of “reveal” to include “exposing confidential information to technological systems, including artificial intelligence tools” where there’s material risk the information could be accessed, retained, or used in ways inconsistent with confidentiality duties. Lawyers who paste client data into consumer AI tools with unfavorable retention terms now have a confidentiality problem on top of a competence problem.
Rule 3.3 codifies the verification duty for citations submitted to a tribunal. Rules 5.1 and 5.3 push AI governance up to managing partners and out to paralegals and other staff. None of this is novel ethics. It’s just that California is willing to write the obvious into rules that carry disciplinary force.
And courts aren’t waiting for California to formalize this. In Lacey v. State Farm last year, a federal Special Master in the Central District of California hit two AmLaw firms with about $31,000 in combined sanctions over AI-fabricated citations in their filings. The senior lawyer on the team formally accepted responsibility in writing for failing to supervise the AI-assisted research. The supervisory duty is already being enforced. California’s proposed Rule 5.1 would just write it into the disciplinary rules.
The proposals still need COPRAC review of public comments, then State Bar Board of Trustees approval, then sign-off from the California Supreme Court. They’re not yet in effect. But California has a habit of leading on professional responsibility, and if these go through, every other state bar gets a template.
What this changes for your firm
The math has changed. AI cuts research and drafting time by real numbers, and that isn’t going away. But the cost of skipping verification is no longer just an embarrassing post on legal Twitter. It’s a $5,000 fine. It’s six months when you can’t appear in your state’s highest court. It’s a judge’s written reprimand that surfaces every time your name is Googled for the rest of your career.
Treat AI output the way you’d treat a brief from a smart but unsupervised first-year associate. Useful. Faster than starting from scratch. Absolutely not ready to file.
What to do Monday morning
Three things that move the needle without overhauling your practice.
Write the verification step into the workflow. Not a memo, not a poster in the conference room. A required checkbox in your matter management or document review process that says “every cited authority in this filing has been pulled, read, and verified by a human.” If you can’t track it, you can’t enforce it.
Update your AI policy to address consumer versus commercial terms. Most lawyers can’t tell you whether the AI tool they used last week trains on their inputs. I ask managing partners this all the time and the answer is usually a long pause. Your policy should name which tools are approved for which kinds of data, with the contract terms written down somewhere a partner can actually find them. The best contract beats the best model.
Train the supervisors, not just the associates. Lacey already proves the point. Partners need to know what AI tools their teams are using, what verification looks like, and how to catch a hallucination in a cite check before it ships.
The line is moving
Three years ago, citing a fake AI case got you a fine and a bad week. This month, it cost a Georgia prosecutor her standing before her state’s highest court. If California’s rule package goes through, doing the same thing in California becomes a Rules of Professional Conduct violation, with discipline that can run all the way to suspension.
Verification used to be a virtue. It’s about to be a rule. The lawyers who’ve built a system for it will be fine. The ones who haven’t are running on borrowed time, and the clock just sped up.
The firms that build a verification system this quarter will spend the next three years compounding the time savings AI gives them. The ones that don't will spend that time explaining themselves to judges, clients, and disciplinary boards. If you want to talk through what a workable verification process looks like for your practice before the rules force the conversation, reach out at steve@intelligencebyintent.com.


