When AI Breaks Privilege: What United States v. Heppner Shows

A recent federal ruling shows what can happen when AI usage outpaces structure, confidentiality, and clear governance.

AI Risk Is Getting Harder to Ignore

Law firms have spent the last two years asking what AI can help them do faster.

A more important question is now coming into focus:

What happens when AI is used in a way that weakens the protections legal work depends on?

That question became much more concrete in United States v. Heppner, where a federal court ruled that certain written exchanges with a public generative AI platform were not protected by attorney-client privilege or the work-product doctrine.

The decision is one of the earliest and clearest judicial signals that AI usage can create legal exposure when it is not structured carefully enough (Harvard Law Review).


What happened in Heppner

According to commentary on the ruling, the government sought access to documents reflecting a criminal defendant’s exchanges with Claude, the generative AI platform operated by Anthropic.

Judge Rakoff held that those materials were not privileged.

The reasoning was straightforward and worth understanding precisely.

In this case, the defendant was not communicating with counsel through the AI system, nor were the materials created at the direction of an attorney. The exchanges also did not clearly reflect a request for legal advice.

Those details mattered.

The court concluded that a public AI platform is not a lawyer, does not create an attorney-client relationship, and does not provide the kind of confidentiality privilege requires.

The court also concluded that the work-product doctrine did not apply because the materials were not prepared by or at the direction of counsel (Akin Gump).

Whatever nuance later cases may add, the practical signal is hard to miss.

The holding is not that all AI usage waives privilege.

It's that privilege protections can break down quickly when AI usage is not clearly tied to legal representation.

The real lesson is not “never use AI”

It would be easy to read Heppner as a simple anti-AI case.

That would miss the more useful lesson.

The problem is not that AI exists. The problem is that legal AI usage is often happening without a defined structure around confidentiality, oversight, and approved use.

That distinction matters.

Heppner does not establish a blanket rule for all AI-assisted legal work.

But it does highlight how courts may analyze these interactions:

  • Is there a clear attorney-client relationship?
  • Is the communication made for the purpose of legal advice?
  • Is confidentiality preserved in a meaningful way?
  • Was the material created at the direction of counsel?

Without clear answers to those questions, privilege can be harder to assert.

If attorneys and staff are informally pasting matter-related facts, draft arguments, internal analysis, or sensitive communications into public tools, a firm may not have a technology problem.

It may have a governance problem.

Heppner makes that governance gap visible. Not because AI inherently breaks privilege, but because unstructured usage can make it difficult to demonstrate that privilege applies (Inside Privacy).


Why this matters beyond one case

The risk exposed by Heppner is broader than a single defendant or a single platform.

It points to a larger operational issue inside firms:

  • attorneys may be using AI in inconsistent ways
  • there may be no clear policy distinguishing approved from unapproved usage
  • sensitive context may be entering tools without meaningful review
  • and there may be no reliable record of what was entered, why it was entered, or under what constraints

That is where legal risk starts to move from theoretical to practical.

Not because every AI interaction waives privilege.

But because unstructured AI usage makes it much harder to know when protections are being preserved and when they are being quietly weakened (Pillsbury).


Productivity without governance is not enough

Many firms already have access to AI tools.

Some have enterprise subscriptions. Some allow informal experimentation. Some are somewhere in between.

But access is not the same as governance.

A tool can improve speed and still leave critical questions unanswered:

  • Is this tool approved for matter-related use?
  • What kinds of information can be entered into it?
  • Are attorneys being guided by firm policy or personal judgment?
  • Can AI-assisted work be reviewed after the fact?
  • Is the firm relying on assumptions about confidentiality, or on actual controls?

Heppner matters because it forces those questions into the open.

This is not just a “public vs. enterprise” distinction

It would be easy to view this as a problem specific to public AI tools.

In practice, the distinction is less about the tool itself and more about how clearly usage is defined, constrained, and tied to legal workflow.

Enterprise tools can provide stronger safeguards.

But they do not, on their own, establish how AI should be used in the context of privilege, client work, or internal review.


What a governed approach looks like

A governed AI environment does not begin with prompts.

It begins with structure.

That means defining, in operational terms:

  • which tools are approved
  • what categories of information can and cannot be used
  • how AI usage connects to matters and client work
  • what oversight exists for sensitive workflows
  • and how the firm preserves traceability when AI is involved

This is the missing layer in many firms today.

Without it, AI usage is left to individual discretion. With it, firms can move forward with far more clarity about how AI should be used and where the boundaries are.


The takeaway

United States v. Heppner is not important because it proves AI should be avoided.

And it does not stand for the idea that all AI usage, especially within controlled or enterprise environments, undermines privilege.

Many firms are already using tools with stronger privacy controls, clearer contractual protections, and more defined boundaries around data usage. That distinction matters.

What Heppner does show is how courts may approach AI interactions when those boundaries are not clearly established.

It highlights how privilege analysis still depends on familiar questions:

  • Was there a clear attorney-client relationship?
  • Was the communication made for the purpose of legal advice?
  • Was confidentiality meaningfully preserved?
  • Was the work done at the direction of counsel?

When those elements are present and well-structured, the analysis may look very different.

But when they are not—whether in a public tool or a loosely governed internal workflow—protections can become much harder to assert.

The takeaway is not about the tool itself.

It’s about whether the surrounding structure makes it clear how and why AI was used in the context of legal work.

More About AI Governance

Heppner highlights the consequence; the underlying issue is structural.

Limited Onboarding for Core AI

Catapult works with a limited number of firms at a time to implement Core AI. Each engagement includes direct collaboration with our engineering and product teams.
