There's a moment in the February 2026 Nebraska Supreme Court hearing that's worth sitting with.
An attorney named Greg Lake was mid-sentence, explaining why his appellate brief contained dozens of citations that pointed to nonexistent cases, attributed fabricated quotes, or bore no relation to the points they were supposed to support. He blamed a broken laptop. He blamed a hurried anniversary trip. He blamed uploading the wrong version of the document.
Then a justice leaned forward and asked the question nobody in that room could dodge any longer.
"The elephant in the room is whether or not you used artificial intelligence. Did you?"
Greg Lake said no.
That answer — not the hallucinations, not the 57 defective citations out of 63, not the three cases that existed in no jurisdiction on earth — is what made this story something different from the dozens of AI-in-the-courtroom stories that came before it.
What Actually Happened
The case itself was unremarkable. A divorce proceeding that had been grinding through the Nebraska courts since 2013, eventually going to trial in 2025 over two contested issues: when exactly marital assets should be divided, and custody of a minor child.
Lake filed the appeal on behalf of the husband. What the opposing counsel found when they read the brief was not a badly argued case. It was something stranger — a brief that looked professionally constructed at a glance but dissolved on contact with any serious legal research tool.
Of the 63 references Lake had made, 57 contained some form of defect. Twenty were what courts have taken to calling hallucinations — the AI term for when a language model generates confident-sounding output that has no basis in reality. These weren't typos or wrong page numbers. They were citations to cases that had never been decided, with quotes attributed to judges who had never written those words, supporting propositions that the cited case law had never established.
In one example the justices specifically flagged, Lake cited the 2019 case Kennedy v. Kennedy to make a point about parental custody. There was no such case. The quotes he attributed to it were fabricated. The court noted, with what reads as barely contained disbelief, that the mistakes "could have been easily discovered using traditional legal research services."
That observation matters. This wasn't a situation where a lawyer pushed a frontier tool to its limits and got burned. This was elementary verification that never happened.
The Lie Made It Worse
When the Nebraska Supreme Court referred the matter to the state's Council for Discipline in March, Lake had already told the justices he hadn't used AI. He repeated this position multiple times. His story — the anniversary, the broken computer, the wrong file — held for weeks.
Then, two days before his suspension was ordered in April, Lake sent an affidavit to the court. For the first time, he admitted to using a generative AI tool to write the brief. He called it, in a phrase that will probably follow him for the rest of his career, "a grave error of judgment for failing to be forthright with the court."
Nebraska Chief Justice Michael Heavican issued a one-page suspension order shortly afterward.
What's significant here is the sequencing. The hallucinations cost Lake's client, Jason Regan, $52,000 in opposing counsel fees. But it was the initial lie, the denial of AI use when the court confronted him directly, that drove the severity of what came next.
Researcher Damien Charlotin at HEC Paris, who maintains the most comprehensive public database of AI hallucination cases in legal proceedings, now tracks 1,353 such cases globally and has described the current pace as "ten cases from ten different courts on a single day." What distinguishes the Nebraska case in his database isn't the hallucinations. It's that the cover-up was treated as a separate and more serious violation.
This Isn't Just a Legal Story
If you work in tech and your instinct reading this is quiet relief that courtrooms are someone else's problem, it's worth examining that instinct.
The legal profession was the first high-stakes regulated environment where AI tools got widely deployed, where the outputs were put directly in front of decision-makers who had both the expertise to identify errors and the authority to impose consequences. What's happening in courts right now is the earliest and clearest signal of where every other regulated professional context is heading.
Over 35 state bar associations have now issued guidance requiring attorneys to verify AI-generated content before filing. Multiple federal courts mandate disclosure of AI use in court documents. The logic being established in these rulings — that a professional cannot outsource verification to the tool that generated the content — will not stay inside courtrooms.
Consider what this pattern looks like in adjacent professions. A financial analyst who submits a report citing non-existent earnings calls. A medical professional who documents a patient encounter using AI-generated notes that cite studies that don't exist. A software engineer who deploys AI-written code into production systems without review, and something fails. The professional liability framework that the legal profession is building right now around AI use is going to be imported, adaptation by adaptation, into every field that has one.
The Nebraska case also clarifies something that people building AI-assisted workflows often get wrong. The question is not whether to use AI. Nobody is arguing that attorneys shouldn't use AI to assist their work. The question — the only question that matters for professional use — is whether the human who put their name on the output actually verified it.
What $145,000 in Sanctions in One Quarter Tells You
The Nebraska suspension is not an isolated incident. It is the most visible point on a trend line.
US courts imposed over $145,000 in AI hallucination sanctions in the first quarter of 2026 alone. Oregon holds the largest aggregate sanction tied to a single attorney for AI-related filing errors at $109,700. The Sixth Circuit imposed a $30,000 fine on two Tennessee attorneys in the largest federal appellate sanction yet linked to fabricated citations.
Nebraska's indefinite suspension — if upheld — is the first bar discipline action to suspend practice entirely over AI-related filing errors in the US. The consequence has escalated from financial penalty to career suspension.
The pattern across these cases is consistent enough to be instructive. In the Oregon case, a Portland attorney was sanctioned for AI-contaminated briefs that she neither wrote nor reviewed. The tool that generated the hallucinations in one federal case wasn't a free consumer chatbot — it was Thomson Reuters CoCounsel, an enterprise legal AI product from one of the most established names in legal research. Product tier doesn't determine verification obligation.
The Actual Lesson
Here's what separates the professionals who will navigate this era well from the ones who won't.
It's not whether you use AI. Nearly everyone is using it or will be shortly. It's not which tool you use. Enterprise products hallucinate just as freely as consumer ones when pushed outside their training distribution. It's not whether you're careful in general — Greg Lake appears to have been a competent attorney for a long time before this.
The lesson is much simpler and harder than people want it to be: if you put your name on output generated with AI assistance, you are responsible for every word of it. Not responsible in a theoretical, terms-of-service sense. Responsible in the way that a Nebraska Supreme Court justice will look at you and ask, directly, whether you used artificial intelligence to write the document you just filed.
And then wait to see what you say.