Most AI tools open with a friendly invitation: ask anything, in plain English. Somewhere near the bottom, in smaller print, they add that none of this is legal advice, that accuracy is not guaranteed, and that you remain responsible for whatever you do next. That tension—between the appearance of a personalized legal answer and the reality of “information only, no liability”—is one of the most important blind spots in modern legal practice.
Earlier in this series, we looked at how to use AI without getting yourself sanctioned and how to respond when the other side leans too hard on a chatbot. This installment turns to a quieter problem: how disclaimers and terms of use create an illusion of “free legal advice” that, legally speaking, is anything but.
What AI Tools Say They Are (and Aren’t)
If you scroll through the terms of use and disclaimers for most AI systems, you see a few common threads:
- Not legal advice. The output is for “general informational purposes only,” not tailored legal advice.
- No lawyer–client relationship. The provider disclaims creating an attorney–client relationship or any duty of loyalty, confidentiality (beyond what’s in the privacy policy), or competence.
- No guarantee of accuracy. Responses may be incomplete, outdated, or flat-out wrong, and users are told to verify anything important before relying on it.
- User assumes the risk. The user is responsible for how they use the tool and for complying with all applicable laws and ethical rules.
For a sophisticated user, those are bright red flags.
For an unsophisticated user, or a time-pressed lawyer, they’re background noise. The interface feels like a conversation with a knowledgeable professional. The disclaimers make it clear it’s closer to a very smart search engine that sometimes guesses. That gap between perception and legal reality is where trouble lives.
Why Disclaimers Won’t Save Users
From the user’s perspective, disclaimers are often treated as a kind of safety net: “If they say it’s not legal advice, that must protect me too—right?” Unfortunately, no.
If you are a lawyer, your professional obligations do not change because you used a tool with a disclaimer. Courts and ethics opinions have been clear that you cannot outsource competence, diligence, or candor. You are expected to:
- Read and understand the terms of the tools you use.
- Independently verify any legal or factual content you derive from them.
- Take responsibility for anything you sign or file, regardless of how it was drafted.
In sanctions cases involving AI-generated hallucinations, judges have generally focused on what the lawyer did—or failed to do—with the output. But courts are now going further. In privilege disputes, they are reading the fine print that users routinely ignore. In United States v. Heppner, Judge Rakoff of the Southern District of New York examined Anthropic’s own privacy policy—which reserves the right to log prompts and outputs, use them for model training, and disclose data to third parties—and held that submitting confidential information to a platform with those express terms was inconsistent with any reasonable expectation of privacy. The terms of service, in other words, can be used against you even when you never read them.
If you are a pro se litigant or a member of the public, the news is not much better. The disclaimers that protect the AI company tend to undermine any argument that you reasonably relied on the tool as a lawyer or as a source of accurate legal advice. Judges are already telling self-represented parties that “my chatbot told me to” is not a defense to sanctions or a reason to reopen a case.
The new Nippon Life v. OpenAI lawsuit is an early test of how far this can go. Nippon alleges that ChatGPT effectively acted as an unlicensed lawyer by encouraging a former beneficiary to try to reopen a settled case and by drafting motions and arguments that triggered a wave of meritless filings. Whatever happens in that case, the mere fact that a major institutional insurer is arguing that an AI tool crossed the line into unauthorized practice dramatizes the conflict between the way these systems are marketed (helpful, conversational, “ask anything”) and the way they shield themselves legally (no advice, no duty, no liability). For now, though, Nippon is a live controversy, not a roadmap. Until courts actually hold an AI provider liable, users should assume that disclaimers will be enforced as written.
Heppner, Warner, and the Privilege Trap
Two recent federal court rulings drive the point home in the privilege context.
In Heppner, Judge Rakoff held that AI-generated documents a criminal defendant created with a public tool—summarizing his exposure and defenses—were not protected by attorney–client privilege or work product, even after he shared them with his lawyer. Judge Rakoff’s analysis went beyond who directed the work. The court examined Anthropic’s privacy policy directly and found that by using a platform that expressly reserves the right to log prompts and outputs, train on user data, and disclose information to third parties, the defendant had no reasonable expectation of confidentiality. Any privilege that might otherwise have attached was waived the moment he typed his question into the chat window. The court also held, as an independent ground, that materials created outside attorney direction were not “by or on behalf of” a lawyer and therefore not privileged at all.
In Warner v. Gilbarco, decided the same week, another federal court held that AI-assisted drafts were protected work product where a pro se litigant used an AI tool under an attorney-consultant’s guidance, and the materials reflected the litigant’s legal strategy in anticipation of litigation. The court reasoned that work-product protection turned on whether disclosure made it likely the materials would reach an adversary, not on the mere fact that an AI tool had been involved.
Read together, Heppner and Warner send a blunt message: consumer AI is not your lawyer, but using it can absolutely affect whether your documents are discoverable. The same “ask anything” interface can produce unprivileged, fully discoverable drafts in one context and protected work product in another, depending on who directs the work, how the tool is configured, and what you do with the output.
One practical variable is worth flagging. The Heppner court’s analysis turned specifically on the consumer-tier privacy policy, which permits the platform to log inputs, train on user data, and disclose information to third parties. Not all configurations of the same tool carry the same terms. Enterprise and paid-tier agreements for many AI platforms, including some of the most widely used ones, contractually exclude user inputs from training, impose stricter data retention limits, and make confidentiality commitments that consumer tiers do not. Whether those distinctions would have changed the outcome in Heppner is an open question, but they speak directly to the court’s reasoning about reasonable expectations of confidentiality.
Before using any AI tool in connection with a client matter, check which tier you are on, what the applicable privacy terms actually say, and whether you have opted out of data collection for training purposes. Then make sure your engagement letter reflects what you find.
Why Disclaimers Won’t Save Lawyers
Lawyers sometimes talk about adding AI clauses to their engagement letters as if that alone solves the problem: “We may use AI to assist with your matter, but we won’t be responsible for errors in its output.” That kind of clause might be useful for transparency, but it does not override your ethical duties.
Bar regulators and commentators have stressed a few baseline expectations:
- You must read and understand any tool’s terms, especially around confidentiality and data use.
- You must supervise the tool’s “work” the same way you would supervise a junior lawyer or contract researcher.
- You cannot agree with a client to provide incompetent representation, even if they’re eager to save money by letting you rely more heavily on automation.
In other words, an AI disclaimer in your engagement letter can explain how you use AI, but it cannot excuse you from the consequences if you misuse it. The duty of competence runs to you, not to the model.
Heppner and Warner add another dimension. If your client independently uses a public AI tool to “prep” their case, the resulting drafts may well be unprivileged and discoverable—even if they later send them to you. If you direct or supervise their use of AI as part of your litigation strategy, those same tools may generate material the court will treat as work product. The platform’s terms of service are now part of that analysis. A tool whose privacy policy allows it to retain and train on user data is a tool that a court may treat as a third party—and sharing privileged information with a third party is a waiver, disclaimer or no disclaimer.
What Good AI Disclaimers Can Actually Do
If disclaimers and terms of use are not shields, what are they good for? Used thoughtfully, they can still play an important role in reducing confusion and aligning expectations.
For law firms and in-house legal departments, a well-crafted AI disclaimer or policy can:
- Signal transparency. Clients increasingly assume their lawyers are using AI. Telling them where it might be used (for example, summarizing discovery, drafting early outlines) and where it will not (final legal analysis, strategic decisions) builds trust.
- Draw clear lines. You can explain that AI-generated drafts are always reviewed by a licensed lawyer, that confidential information will not be fed into public systems without consent, and that strategic advice comes from humans.
- Reinforce accountability. Language along the lines of “We may use AI tools as part of our internal workflow, but your lawyers remain fully responsible for the advice and documents we provide” makes it clear that AI is a helper, not a substitute.
For businesses or non-lawyer services that deploy legal-adjacent chatbots—HR tools, compliance FAQs, form-builder sites—disclaimers are even more critical. They need to make it obvious that:
- The service provides general educational information, not specific legal advice.
- The user’s situation may be materially different from the scenarios the tool describes.
- Users should consult a lawyer before taking action in high-stakes situations.
That does not eliminate legal risk, but it reduces the chance that a user will reasonably believe they are now “represented” or that the output is guaranteed to be correct.
Drafting AI Clauses in Engagement Letters
For law firms, the better question is not “How do we disclaim AI?” but “How do we explain AI use in a way that protects the client and us?” Here is language you can adapt for your own engagement letters—these clauses address the issues that tend to matter most:
- Use of tools. “Our firm may use generative AI tools and other software to assist in drafting, research, and document management. These tools are used under attorney supervision.”
- Confidentiality. “We will not input your confidential information into public AI systems without taking steps to protect its confidentiality, such as anonymization, contractual safeguards, or use of private or enterprise instances that exclude user data from training and impose stricter retention obligations than consumer versions of the same tools.”
- Responsibility. “Regardless of the tools used, your attorneys are responsible for the legal advice provided and for the content of any filed documents.”
- Client-side AI. “We do not recommend that you rely on public AI tools for legal advice regarding your matter. If you choose to do so, please share any AI-generated content you wish us to consider so we can evaluate it independently.”
This kind of language does not change your underlying duties, but it does make the conversation explicit. It also gives you a foothold if you later need to explain to a court or regulator how you’ve tried to use AI responsibly.
Talking to Clients About Their Own AI Use
The other half of the disclaimer story is client-facing. Clients are experimenting with AI whether we like it or not. Some will show up with AI-drafted contracts, complaints, or letters; others will quietly “check” your advice against whatever the chatbot says.
Part of modern competence is getting ahead of that by:
- Normalizing the conversation. Instead of scolding clients for using AI, ask them early: “Have you used any online tools or AI to research this issue?” That tells you what assumptions you’re working against.
- Explaining limits in plain English. You can say, “These tools are great at explaining concepts in general terms, but they don’t know the latest Arizona cases, your judge, your opposing counsel, or your risk tolerance. That’s what you hired us for.”
- Distinguishing strategy from information. It is reasonable for a client to use AI to understand what “summary judgment” means. It is dangerous for them to use it to decide whether to accept a settlement offer. Make that distinction explicit.
You do not need a mini-CLE in every client meeting, but a few sentences about what AI can and cannot safely do can prevent a lot of downstream confusion and disappointment.
The Illusion We Have to Break
AI interfaces are designed to feel conversational and confident. The answers are in full sentences, not headnotes. The tone is authoritative. For many users, that feels closer to “getting advice” than reading a statute or a treatise, even though the actual legal reality is the opposite.
The job of lawyers, courts, and regulators in the next few years is to puncture that illusion without throwing away the real benefits these tools can offer. That means:
- Courts continuing to say plainly: whoever signs the brief owns the content, AI or no AI.
- Bar authorities reminding lawyers that reading and understanding terms of use is part of basic tech competence.
- Lawyers incorporating AI use—and its limits—into engagement letters, client conversations, and firm policies.
- Everyone, lawyer and layperson alike, treating “this is not legal advice” as a warning label, not white noise.
AI is not going away. Nor is the temptation to treat a fast, confident answer as a free consultation. But disclaimers are written for the company’s lawyers. The consequences fall on yours. If any of this raises questions about how your firm is handling AI—in engagement letters, in active matters, or in the guidance you give clients—we are glad to have that conversation.
A Word About Silver Cain
Silver Cain PLC was founded on the premise that businesses deserve both exceptional litigation experience and direct partner access — and that you should not have to choose between them. Leon Silver and Rebecca Cain have spent decades handling the most complex business and real estate disputes in Arizona and nationally. If you are evaluating counsel in Phoenix, we welcome the conversation.

