Courts Aren’t Punishing AI Use. They’re Punishing Untraceable AI Use.
A recent sanctions case shows the real risk is not AI itself, but the inability to explain how it was used.
Litigation Risk
Until recently, most conversations about AI risk in legal work have been largely theoretical. A recent sanctions decision brings the issue much closer to practice.
In Flycatcher Corp. Ltd. v. Affable Avenue LLC, a federal judge issued case-ending sanctions against an attorney who repeatedly submitted filings containing false, AI-generated citations.
The court did not take issue with the use of AI itself. It took issue with something more specific.
Unchecked usage. Repeated errors. And a failure to take responsibility for verifying what was filed.
The decision is summarized in a recent article from Greenberg Traurig.
What Went Wrong
At a surface level, the issue looks familiar.
An attorney used AI-generated content that included fabricated citations. But the court’s reasoning goes further than a simple warning about hallucinations.
The attorney:
- submitted filings with false citations
- was warned by the court
- submitted additional filings with similar issues
- and ultimately could not clearly explain or defend the process behind the work
That pattern mattered.
The court emphasized that while AI can be used in legal work, attorneys remain responsible for what they submit.
Verification is not optional.
Accountability does not transfer to the tool.
Not Anti-AI
It would be easy to read this as another example of why AI should not be used in litigation.
That is not what the court is saying.
The judge explicitly acknowledged that AI can be useful for research and drafting.
The issue is not usage.
The issue is uncontrolled usage without a verifiable process behind it.
That distinction is important.
Because most firms are not deciding whether to use AI anymore.
They are already using it.
The real question is how that usage is structured.
The Real Risk
Hallucinations are a symptom.
The deeper risk is something else:
The inability to explain how AI influenced the work.
When a filing is challenged, the question is no longer just whether a citation is accurate.
It becomes:
- How was this generated?
- What tools were used?
- What review process was followed?
- Who was responsible for verification?
In this case, there were no clear answers.
And without those answers, the court was left to draw its own conclusions.
That is where risk escalates.
Not because AI exists.
But because the process around it is invisible.
Where Most Firms Are Today
Across firms, AI usage is already happening.
But often:
- usage is informal and inconsistent
- prompts and outputs are not tracked
- review expectations are unclear
- and there is no reliable record of how AI contributed to a work product
That creates a gap.
Not a technology gap.
A governance gap.
Productivity Without Traceability = Exposure
Many firms have access to capable AI tools.
Some are using enterprise platforms. Others are experimenting more informally.
But access alone does not answer the questions that matter under scrutiny:
- Can the firm reconstruct how a document was produced?
- Can it show what role AI played?
- Can it demonstrate that appropriate review took place?
- Can it tie usage back to firm policy?
Without that, the firm is relying on individual judgment and memory.
And that becomes fragile quickly.
More Than Just Public Tools
It would be easy to frame this as a problem specific to public AI systems.
But the issue is broader than that.
Even in more controlled environments, the same questions remain:
- Is usage tied to a defined workflow?
- Are policies actually reflected in how AI is used?
- Is there visibility into what happened after the fact?
The presence of a “safer” tool does not, on its own, create accountability.
Structure does.
What a Defensible Approach Looks Like
A more defensible model does not start with better prompts.
It starts with clarity.
- AI usage tied to matters and client work
- defined boundaries around what is appropriate to generate
- clear expectations for review and verification
- and a record of how AI contributed to the final output
In other words:
Not just using AI.
But being able to explain its role after the fact.
Takeaway
This case is not important because it shows AI can make mistakes. That is already understood.
It is important because it shows how courts respond when attorneys cannot explain the process behind AI-assisted work.
The issue is not whether AI was used. It is whether its use is visible, reviewable, and defensible.
The questions firms should be asking are not:
Are we using AI? Should we stop?
They are:
If challenged, can we clearly explain how AI was used in this work? Is our work product accurate? Or are we assuming it is?
That is where the risk now lives.
More About AI Governance
Where Heppner highlights privilege breakdown, this case highlights accountability.