When AI Makes Things Up: What Mata v. Avianca Actually Shows
A widely cited case of AI-generated fake citations reveals a deeper issue: not the tool, but the absence of structured review and accountability.
A case that made AI risk visible overnight
In 2023, a case in the U.S. District Court for the Southern District of New York drew widespread attention for a simple reason:
An attorney submitted a brief citing multiple cases that did not exist.
The citations looked real.
They were formatted correctly.
They included plausible legal reasoning.
But they were entirely fabricated.
The source of those citations was an AI system.
The case, Mata v. Avianca, Inc., quickly became a reference point for the risks of using AI in legal work.
What happened
In Mata v. Avianca, attorneys submitted a filing that relied on case law generated by an AI tool.
When the court requested copies of the cited opinions, the attorneys were unable to produce them.
The cases had been generated, not retrieved.
The court ultimately found that the attorneys had failed to verify the accuracy of their citations, and it imposed sanctions under Rule 11 as a result.
The core issue was not subtle.
Non-existent legal authorities were presented to the court as real.
The easy takeaway, and why it falls short
The case is often summarized in a single sentence:
AI makes things up.
That’s true. But it’s not especially useful.
Every legal tool, AI or otherwise, requires verification.
Attorneys already understand that.
Framing the issue as a failure of the tool misses the more important question:
How did those citations make it into a court filing in the first place?
Not just a technical failure
The problem in Mata was not simply that the AI system produced incorrect output.
It was that nothing in the workflow caught it.
There was:
- no structured validation step before submission
- no clear boundary between AI-generated content and verified authority
- no defined responsibility for confirming accuracy
- no system for tracing how the output was produced
The issue was not that the AI made things up.
It was that the process allowed unverified output to be treated as reliable.
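To make that concrete, here is a minimal sketch of what a structured validation step could look like, written in Python. Nothing in it reflects the actual Mata workflow: the Citation type, the validate_filing gate, and the verified_cites set (standing in for a lookup against a trusted legal database) are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of a pre-submission gate.
# None of these names come from Mata; "verified_cites" stands in
# for a lookup against a trusted legal database, and a real check
# would retrieve the full opinion, not just match a citation string.

from dataclasses import dataclass

@dataclass
class Citation:
    case_name: str
    reporter_cite: str
    source: str  # "ai_generated" or "human_verified"

def validate_filing(citations: list[Citation], verified_cites: set[str]) -> list[str]:
    """Return the problems found. An empty list means the filing can
    move on to attorney review, not that it is correct."""
    problems = []
    for c in citations:
        if c.source != "human_verified" and c.reporter_cite not in verified_cites:
            problems.append(f"Unverified authority: {c.case_name}, {c.reporter_cite}")
    return problems

# Varghese was one of the non-existent cases cited in the Mata filing.
draft = [Citation("Varghese v. China Southern Airlines", "925 F.3d 1339", "ai_generated")]
print(validate_filing(draft, verified_cites=set()))
# ['Unverified authority: Varghese v. China Southern Airlines, 925 F.3d 1339']
```

The point of the gate is narrow: AI-sourced authority that cannot be confirmed never reaches a filing unflagged, and a named reviewer sees exactly what was blocked.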
Where structure matters
In any professional workflow, errors are expected.
What matters is how those errors are handled.
A structured approach to AI usage does not assume perfect outputs.
It assumes outputs require review.
That means defining:
- when AI can be used in legal research or drafting
- what level of verification is required before use
- how AI-generated content is distinguished from confirmed authority
- who is responsible for reviewing and approving outputs
- and how the underlying process can be understood after the fact
Without that structure, the burden falls entirely on individual judgment.
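One way to see why this helps: the items above can be written down as an explicit policy rather than carried in individual heads. The sketch below is a hypothetical illustration in Python; the AIUsagePolicy fields are assumptions mapped one-to-one onto the list, not a standard or an existing product schema.

```python
# A hypothetical policy record mapping the list above into explicit
# fields. The names are illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class AIUsagePolicy:
    permitted_tasks: list[str]   # when AI can be used
    verification_required: str   # what level of review is required before use
    labeling_rule: str           # how AI output is distinguished from confirmed authority
    responsible_reviewer: str    # who reviews and approves outputs
    audit_trail_required: bool   # whether the process can be reconstructed after the fact

research_policy = AIUsagePolicy(
    permitted_tasks=["issue spotting", "first-draft summaries"],
    verification_required="every cited authority retrieved in full from a trusted database",
    labeling_rule="AI-generated text stays flagged until a reviewer clears it",
    responsible_reviewer="supervising attorney of record",
    audit_trail_required=True,
)
```

Whether this lives in code, a practice manual, or a matter-management system matters less than the fact that each answer is written down and reviewable.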
Not just about hallucinations
It would be easy to treat Mata as a one-off example of an attorney misusing a new tool.
But the underlying issue is broader.
As AI becomes more embedded in legal workflows, similar failure points can emerge in less obvious ways:
- summaries that omit key context
- draft language that introduces subtle inaccuracies
- analysis that appears well-reasoned but is incomplete
- outputs that are accepted without clear validation
The risk is not limited to fabricated cases.
It is any situation where output is accepted without a defined process for review.
Tools don’t define the workflow
Many firms now use AI tools with stronger safeguards, improved accuracy, and enterprise-level controls.
That matters.
But those improvements do not, on their own, define how AI should be used in legal work.
A more capable tool does not replace the need for:
- clear usage boundaries
- defined review steps
- accountability for outputs
- and visibility into how results are produced
Those elements exist at the workflow level, not the tool level.
The takeaway
Mata v. Avianca is often cited as an example of what can go wrong with AI.
A more useful interpretation is this:
The issue was not that the AI produced incorrect information.
The issue was that the workflow had no mechanism to catch it.
As AI becomes a more common part of legal work, that distinction matters.
The question is not whether AI can produce imperfect output.
It is whether the firm has a clear, consistent way of ensuring that output is reviewed, validated, and understood before it is relied on.
More About AI Governance
If this case highlights how failures occur at the workflow level, the broader issue is how AI usage is structured across the firm.
AI Productivity Tools Don’t Solve Governance
Most legal AI tools improve speed. Few define how AI is used, reviewed, or governed.
The Missing Layer in Law Firm AI Adoption
Law firms are adopting AI quickly. But most are doing so without a clear structure for how it’s used, governed, or reviewed.