AI Is Not Your Lawyer
A federal court ruling suggests conversations with generative artificial intelligence are not legally privileged.
A recent federal court decision underscores a simple point for the age of generative AI: the system answering your questions is not your lawyer.
The case, United States v. Heppner, a decision from the U.S. District Court for the Southern District of New York that has circulated widely in legal commentary, concerned documents created through prompts to an AI chatbot. The defendant argued those AI-generated materials should be protected by attorney–client privilege or the work-product doctrine. The court disagreed.

What appears at first to be a narrow procedural ruling actually highlights a broader issue: generative AI changes the technological conditions under which confidentiality exists. For centuries, attorney–client privilege has rested on a simple assumption: that conversations between lawyer and client remain private. Generative AI complicates that assumption.
What Actually Happened in the Case
A federal grand jury indicted Bradley Heppner in October 2025 on securities-fraud and related charges. After receiving a grand jury subpoena and realizing he was the target of the investigation, Heppner independently began using a public AI chatbot, Anthropic’s Claude, to analyze the government’s potential case and outline possible defenses. Through those sessions he generated about 31 documents containing prompts, responses, and reports analyzing legal strategy and possible arguments.

When federal agents later arrested him, the FBI executed a search warrant at his home and seized electronic devices and documents. Among the seized materials were the AI-generated documents reflecting his exchanges with the chatbot.

Heppner’s lawyers argued that the documents were protected by attorney–client privilege and the work-product doctrine, partly because he later shared them with his attorneys. Prosecutors asked the court to rule that the materials were not privileged and could therefore be reviewed and used in the case. Judge Jed Rakoff ruled that the documents were not privileged, allowing the government to examine them.
Privilege Requires Confidentiality
Attorney–client privilege is one of the oldest protections in the legal system. It allows clients to communicate openly with lawyers without fear that those conversations will later appear in court. In American law the privilege generally requires three elements: a communication between a client and a lawyer, made confidentially, for the purpose of obtaining legal advice.
The Heppner decision concluded that these AI conversations failed those requirements. First, a chatbot is not a lawyer. Generative AI systems may produce answers that resemble legal explanations, but they do not hold law licenses, owe no fiduciary duty to users, and are not subject to professional discipline. Without a lawyer–client relationship, the foundation of the privilege does not exist. Legal analyses of the ruling from firms such as Perkins Coie, Duane Morris, and Chapman & Cutler summarized the point bluntly: “AI is not your lawyer.”
Confidentiality creates the second problem. Privilege generally disappears when third parties enter the conversation. If a client shares legal advice with outsiders, courts often treat the privilege as waived. Generative AI introduces precisely such an intermediary. When a user submits a prompt, the information typically travels through infrastructure operated by the platform provider and may be logged, stored, or analyzed on remote servers. In the Heppner case, the court noted that the platform’s terms of service allowed the provider to collect and potentially disclose user communications. Because of that, the judge concluded that the defendant “could not have had a reasonable expectation of confidentiality.” What may feel like a private exchange on a personal device can legally resemble sharing sensitive information with an outside service.
AI Prompts May Become Evidence
The practical consequences of the ruling appear most clearly in litigation. Modern legal disputes already rely heavily on digital discovery, where courts examine emails, text messages, and other electronic records stored on computers and phones. These materials often become central pieces of evidence, and generative AI prompts may now join that category.
In the Heppner case, investigators obtained documents created through AI prompts analyzing the defendant’s potential legal exposure. Because those communications were not privileged, the court allowed them to be used in the proceedings. Legal commentators quickly drew a broader conclusion: information entered into a public AI system may later become discoverable in litigation.
The decision is narrower than some early commentary suggests. The court did not rule that communications with generative AI systems will always lose privilege. Instead, the ruling focused on the specific circumstances of the case: a non-lawyer using a public AI system independently, rather than at the direction of an attorney. Under those conditions the court concluded that neither attorney–client privilege nor the work-product doctrine applied. Questions remain about whether the work-product doctrine could protect material created through AI when a lawyer — or a client acting at a lawyer’s direction — uses such tools to record legal analysis in anticipation of litigation. The ruling is also binding only within the federal courts of the Southern District of New York, although it may serve as persuasive authority elsewhere because it addresses an issue that courts have only begun to confront.
The issue is particularly sensitive for lawyers. Professional ethics rules require attorneys to safeguard confidential client information, and entering sensitive details into consumer AI systems could risk exposing privileged material. Courts and bar associations have already begun issuing guidance warning lawyers not to input confidential information into public AI tools and reminding attorneys that AI-generated material must be carefully verified. Although some commentators suggest that AI tools might someday be treated as agents assisting lawyers, similar to research assistants or consultants, those scenarios depend heavily on how the technology is deployed and controlled. Courts have not yet clearly defined where that boundary lies.
The broader lesson is that generative AI systems entered professional life faster than legal institutions could adapt their rules. Courts are now applying centuries-old doctrines of confidentiality to a technological environment that did not exist when those doctrines were created. Interacting with an AI system may feel like a private conversation because the interface resembles a chat window and responses arrive instantly in fluent language. In legal terms, however, a prompt is not merely a thought written in a notebook. It is information transmitted through infrastructure operated by another company, and once that intermediary enters the exchange, longstanding assumptions about confidentiality can break down.
The Heppner decision shows how quickly courts are confronting these questions. Generative AI became widely accessible only a few years ago, yet judges are already applying traditional legal principles to determine how the technology fits within existing doctrine. The result is a reminder that while AI systems can summarize legal concepts, explain rules, and assist with drafting documents, they cannot replace the protected relationship that attorney–client privilege was designed to safeguard.
The ruling highlights a growing tension: AI tools increasingly function as quasi-advisors, yet the legal system still treats them as third-party platforms.
Further Reading
Federal Judge Holds Generative AI Communications Are Not Privileged

Thurgood Marshall United States Courthouse, New York City. Photo by Americasroof, Wikimedia Commons (CC BY-SA 3.0).