The first time a client sent me AI “research,” it arrived as a 10-page, single-spaced memo attached to a 6 a.m. email. “Take a look at what ChatGPT says about our case,” the client wrote. “It thinks we should push for a much broader release, and it found three cases that sound perfect for us. Can you work this into our strategy?”
On a quick skim, the memo looked impressive. It had headings, citations, and confident language about “clear precedent” in our favor. Within a few minutes, the cracks started to show. One case didn’t exist. Another was real but from the wrong jurisdiction and a completely different posture. And the “recommended” release language would have blown up three weeks of hard-fought negotiations and reopened business points my client had already agreed to.
None of this was malicious. The client was trying to help — to save time, save fees, and feel more in control of a stressful situation. But it was a perfect example of what AI does so well and so badly at the same time: it produced something that looked like a polished legal work product, with none of the context, judgment, or deal history that actually makes it safe to rely on.
Clients are not crazy for experimenting with AI; they are doing exactly what the marketing invites them to do, using a tool that feels instant, free, and impressively confident. To most users, it looks like “Google, but better,” when in reality it is predicting likely-sounding text, not conducting careful legal research. For lawyers, that means a new normal in which clients arrive with their own draft contracts, “case law,” or AI-generated risk analyses and expect you either to bless them or explain why you disagree.
Clients are going to keep discovering AI. The real question for lawyers is: What do you do next, and how do you turn that “help” into something that actually advances the client’s interests instead of quietly undercutting them?
The Legal Traps Hiding In AI “Research”
The real danger is not that AI is always wrong; it is that it is wrong in ways that are hard for non-lawyers to see and tempting for busy lawyers to overlook.
- Hallucinated authority. AI tools will invent cases, quotations, and even courts with total confidence when they hit gaps in their data or are pushed for specific citations. That is how lawyers have ended up embarrassed or sanctioned for filing briefs supported by “authorities” that never existed.
- Over-generalized rules. Even when AI cites real authority, the rules it extracts are often oversimplified and stripped of jurisdiction, posture, and factual nuance. It may sound like black-letter law when it is actually a narrow holding from a different context.
- No sense of deal history. In transactions, AI treats every contract as a blank-slate negotiation. It has zero visibility into what your client already agreed to, the last-minute compromises that held the deal together, or the specific risk allocation that drove the economics. That is how you get suggested changes that blow up tax allocations, payment structures, or release frameworks the client already accepted.
- Missing real-world context. AI does not know the personalities, risk tolerances, politics, or timing pressures in your matter unless you explicitly explain them — and even then, it cannot calibrate like a human. It will happily recommend “fight to the death” positions to a client who actually needs a quiet settlement, or “market” terms that may be market only in another industry or jurisdiction.
- Privilege and confidentiality risk. Public AI tools often store prompts and responses. Feeding detailed fact patterns, draft agreements, or litigation strategies into a public chatbot can jeopardize confidentiality and potentially threaten privilege, especially if that data is later used to train or tune the system.
From an ethics standpoint, lawyers remain fully responsible for the work product, even if the first draft came from the client’s favorite chatbot. If it goes into your brief, contract, or demand letter, you own it.
“My Friend Gave Me the Research”
A growing twist on the “client AI memo” problem arises when the analysis comes from “a friend who’s good with this stuff” or a brother-in-law who decided to run your case through a chatbot. Those situations raise a different layer of risk: not just whether the AI output is any good, but what the client shared to get it — and who now has it.
When a client forwards your emails, briefs, or draft agreements to a non-lawyer friend or relative so that person can “run it through AI,” several issues show up at once.
- Privilege and confidentiality risk. Attorney-client privilege depends on confidential communications staying within the circle of the client and their counsel (plus true agents necessary to advance the representation). Sharing those communications with a third party who is just a curious friend or informal advisor can waive privilege and make those emails or drafts fair game in discovery.
- No “privileged relationship” with AI. A friend pasting privileged emails or strategy into a public chatbot does not create any kind of protected channel. AI tools are not lawyers, are not part of the legal team, and do not sit inside any recognized privilege framework. Their logs and outputs may be accessible to providers or, down the line, to opposing parties.
- Discoverability of the “shadow file.” Once a friend or relative has their own AI-generated memo, that document can itself become part of the story in litigation or negotiation. Opposing counsel may argue that it shows what the client was told, what they understood, or that they shopped for advice outside the attorney-client relationship in ways that undercut your strategy.
- Quality control on both inputs and outputs. The friend is usually not a lawyer and has no obligation to preserve nuance, flag uncertainties, or avoid misstatements of law. They can easily feed incomplete or misleading facts into AI, get a confident-sounding answer back, and hand your client a document that looks like legal advice but is based on half the story.
For lawyers, the first step when a client shows you “what my friend’s AI found” is to slow the conversation down and get clarity: What did they send? To whom? Using what tool? The answers determine how serious the privilege and discoverability issues might be and whether remedial steps (like clawback agreements or protective orders) are worth exploring. That is especially true in high-stakes or heavily litigated disputes, where discovery battles are more intense and every document trail is likely to be scrutinized.
How To Talk About This With Clients
Whether you are talking with a sophisticated in-house team or an individual client, the key is tone: appreciative, not condescending, but very clear about the limits.
- Start by validating the intent. A simple, “Thanks for sending this — AI can be a useful way to surface issues and stress-test our thinking,” acknowledges the effort and keeps the client from feeling shut down.
- Reframe AI as a checklist, not a decider. Explain that AI is great at generating lists of possible issues, alternative clauses, or arguments, but it has no idea which ones fit this deal, this jurisdiction, or this judge.
- Explain the “context problem” in plain English. For example: “This summary doesn’t know what you already negotiated, what the other side will never accept, or the tax and financing constraints you are under. Some of its suggestions would actually move you backward, not forward.”
- Draw a line between information and advice. Clients need to hear that “legal information” from AI is not “legal advice” tailored to their situation, and that your job is to do the tailoring.
This framing lets you respect the client’s effort while making clear that the machine is not a silent partner in the representation.
Practical Strategies When Clients Bring AI
Once the AI memo lands in your inbox, you need a way to handle it that is efficient, defensible, and preserves the relationship.
- Set expectations in writing. Engagement letters can include a short provision explaining the risks of using public AI for legal issues, discouraging clients from feeding confidential information into such tools, and clarifying that you cannot rely on AI “advice” without independent review.
- Triage, don’t line-edit. Instead of rebutting every AI bullet, group the output into buckets: “already addressed,” “potentially useful cleanup,” and “not appropriate for this deal/case.” That keeps the conversation high-level and avoids getting dragged into a point-by-point argument with a chatbot.
- Document your judgment, not the AI debate. When you respond, focus on clear, business-oriented explanations: which concepts you are adopting, which you are rejecting, and why. Avoid framing it as “AI says X, I say Y”; instead, tie your advice to client goals, risk, cost, and the actual procedural or deal posture.
- Watch the liability trap. Extensive written back-and-forth about AI suggestions can later be spun as “the client raised a risk and the lawyer dismissed it,” even when the AI risk was a mirage. Clear notes about why an issue is immaterial, already addressed, or inconsistent with the client’s objectives help mitigate that.
- Use client AI drafts strategically. Client-generated letters or contracts can sometimes save time on structure or background facts, but they almost always require a ground-up legal review. Be upfront that “editing” an AI draft is often more work than starting from a trusted template, so expectations about cost and turnaround stay realistic.
The through-line for lawyers: treat client AI content like something a smart but untrained intern wrote after skim-reading a few cases. It might contain useful nuggets, but nothing goes out under your name without full vetting.
Using AI Well Without Letting It Drive
For both lawyers and clients, the better conversation is not “AI: yes or no?” but “AI: how, where, and under whose supervision?”
- For lawyers, emerging ethics guidance points in the same direction: AI is a tool, not a substitute. Competence, confidentiality, supervision, and independent judgment still belong to the human attorney.
- For clients, good questions to ask are: How is my law firm using AI, how is my data protected, and when will a human actually look at the output before it affects my matter? That keeps AI in its lane: accelerating routine tasks while lawyers handle nuance, strategy, and judgment.
- For the relationship, the healthiest dynamic is collaborative. Clients can use AI to organize thoughts, draft timelines, or identify questions; lawyers then test those ideas against the law, the facts, and the actual deal or litigation strategy.
Handled that way, AI becomes a slightly over-eager junior helper that everyone can benefit from, rather than a silent third chair at the table whose work no one is really supervising.
Leon Silver and Rebecca Cain are experienced trial lawyers at Silver Cain, PLC focused on commercial and real property disputes as well as general business matters.
Reach them at lsilver@silvercain.com and rcain@silvercain.com.
Silver Cain, PLC is an Arizona-based boutique law firm with recognized excellence in commercial and real estate-related disputes. The firm represents businesses, investors, and professionals in sophisticated legal matters throughout the United States, providing nuanced strategies and high-caliber advocacy in state and federal courts, arbitrations, and mediations. Our attorneys work closely with business owners, investors, and individuals to protect their interests and find the best path forward, making Silver Cain, PLC a trusted resource for clients and fellow professionals alike.

