Not long ago, a senior in-house counsel at a major international company called me after reading this series. Her question wasn’t whether her legal team should be using AI. They already were. Her question was more pointed: how are you using it, and is what you’re doing actually different from what everyone else claims to do?
It’s the right question. And the answer, for most firms, is not particularly encouraging.
“We use AI” has become almost meaningless as a statement. It covers everything from a partner who occasionally types a question into ChatGPT to a firm that has built custom litigation tools from the ground up. The phrase is true in both cases and informative in neither.
It is worth noting that every article in this series has grown from a real conversation — with a client navigating an AI-generated contract dispute, with a judge who read an AI platform’s terms of service before ruling on privilege, with opposing counsel whose AI-assisted filing created problems neither of us anticipated. This one started with a phone call.
There is a better way to think about it. After working deeply in this space — presenting at national legal conferences, collaborating with AI technologists, and building tools specifically for litigation and transactional practice — I have come to think about AI use in legal practice in four levels. They are not a ranking of effort or ambition. They are a description of what AI is actually doing for lawyers at each stage, what the limitations are, and what clients are — and are not — getting.
The Foundation Level: AI as a Fast Assistant
This is where most lawyers who use AI at all are operating, and there is nothing wrong with starting here. At the Foundation level, AI functions as a tireless assistant that does what humans do — just faster.
The common applications: summarizing lengthy documents such as deposition transcripts, expert reports, and contracts; drafting initial versions of routine correspondence, motions, and demand letters; and answering general research questions quickly. The output always requires human review and judgment, and the tools are largely off the shelf — ChatGPT, Microsoft Copilot, or the AI features now embedded in Westlaw and Lexis.
The value is real. A task that used to take a junior associate three hours might take twenty minutes. The limitation is equally real: the AI knows nothing specific about your client, your judge, your opposing counsel, or this case. It knows what a demand letter looks like in general. It does not know what this demand letter needs to accomplish.
At the Foundation level, AI is a productivity tool. It does not change what you can do. It changes how long it takes.
The Intermediate Level: AI with Institutional Knowledge
The step up from the Foundation level is building AI tools that know something specific — about your practice area, your client’s situation, or your firm’s own work product. Lawyers at the Intermediate level are no longer just prompting a general tool; they are configuring AI systems to work within their practice.
Examples include AI contract review platforms calibrated to specific industries, document management systems that allow AI to search your firm’s internal work product, and research workflows that analyze an opposing party’s litigation history or filing patterns. AI-assisted discovery review — predictive coding, issue tagging, relevance ranking across large document sets — sits here as well.
This level requires real investment and some technical sophistication, but the tools are increasingly accessible. The differentiator is not the technology itself; it is the discipline to build and maintain systems that actually serve your practice rather than just generating activity.
One note on confidentiality: at the Foundation level, lawyers are often feeding client information into public AI platforms without fully understanding where that information goes. At the Intermediate level, the question of what happens to your data becomes urgent. The tools become more powerful precisely because they are processing more sensitive material — and the platform your lawyers chose determines what happens to it, not their intentions.
The Advanced Level: AI as a Research and Strategy Partner
At the Advanced level, AI stops functioning as a drafting assistant and starts contributing to how you think about a case or a deal. The defining characteristic is that AI is doing substantive analytical work — the kind that a junior partner used to spend days on.
In litigation, this might mean AI analysis of every published opinion from a specific judge to identify patterns in her evidentiary rulings, her receptiveness to particular arguments, or her characteristic questions at oral argument. In transactional work, it might mean AI review of an entire contract portfolio to identify risk concentrations before a deal closes. The work product is not a finished brief — it is analysis that sharpens strategy before the human lawyers take over.
Advanced-level AI can also do things no human researcher could do cost-effectively. Comprehensive settlement valuation modeling — drawing on outcomes across hundreds of comparable cases, controlling for jurisdiction, judge, case type, and procedural posture — is possible at this level. So is monitoring regulatory developments across multiple agencies and jurisdictions on a client’s behalf in real time.
The confidentiality stakes are highest here. You are feeding client-specific, matter-specific information into AI systems, and whether that information stays protected depends entirely on whose platform you are using and what their terms say. As we covered in the earlier post on the Heppner decision, a federal court will read those terms — and use them against your client if the platform reserves the right to retain and train on user data.
The Expert Level: Proprietary AI Built for Your Practice
This is where meaningful differentiation begins. The Expert level is not a more sophisticated use of existing platforms — it is the construction of tools that did not exist before you built them.
How do firms get here? Typically, not by accident. At Silver Cain, the path ran through a serious investment in understanding what the technology could actually do. Last May, I presented a seminar at the DRI Retail and Hospitality Litigation Conference in Chicago titled “Harnessing AI: Transforming Strategies, Efficiency, and Outcomes.” My co-presenters were James Church and John Church, who along with Scott Hines were the founders and developers of LegalWise.AI, an advanced early legal AI platform built for analysis and legal reasoning on extremely large collections of evidence.
Organizing that seminar meant spending serious time with people who did not just use AI — they helped build it. That collaboration has continued as Mr. Hines and I work together building AI tools designed specifically for litigation and transactional work. What started as experimentation has become something that functions differently from anything available off the shelf.
Here is what that looks like, or will look like, in our practice as we continue to develop our proprietary AI programs:
-AI systems that draft complex patent applications — including complete technical drawings — in approximately two hours. The applications go directly into substantive attorney review rather than starting from scratch. The time and cost savings for clients are structural, not marginal.
-AI models for oral argument preparation that research a specific judge’s complete history — published opinions, oral argument transcripts, characteristic questions, areas of skepticism — and use that material to model what the actual argument is likely to look like. The result is preparation genuinely tailored to the specific judge before whom you will stand, not a generic appellate simulation.
-AI tools for jury selection that analyze prospective jurors’ publicly available information — social media, professional background, litigation history, prior jury service — to identify patterns and flag concerns before voir dire. The analysis happens before counsel sets foot in the courtroom.
-AI systems for expert witness preparation that compile and analyze an expert’s complete prior testimony across cases, identifying inconsistencies, patterns in how the expert characterizes disputed issues, and areas where prior statements diverge from current opinions. That preparation used to require weeks of associate research. It now takes hours.
-AI-driven mock adversaries for trial preparation — systems that model opposing counsel’s litigation strategy based on actual case history and generate realistic cross-examination and argument for practice sessions. The opposing side at your moot court is not someone’s best guess at what the other lawyer might do. It is modeled on what that lawyer has actually done.
I wrote about a real-world version of this experiment in an earlier post in this series. In March 2026, I tried a dram shop case at a DRI mock trial in Nashville before a live jury of experienced trial lawyers. At the same time, an AI “jury” run by ViewPoints AI — a system using thousands of synthetic juror profiles built from psychological research and demographic data — evaluated the same case from pre-trial scripts. The live jury voted 8–3 in favor of the defense; the AI jury produced a modest majority for the defense as well. But the more instructive comparison was in the divergence: The AI evaluated the case we planned to try. The humans evaluated the case we actually tried.
That gap is the honest answer to what AI can and cannot do at even the Expert level. AI jury simulation is a genuinely valuable pre-trial tool. It stress-tests themes, broadens our view of how different juror profiles might respond, and forces precise articulation of the theory before trial. What it cannot yet replicate is the dynamic courtroom — the improvised cross-examination question, the witness’s off-script answer, the visible juror reaction to a particular fact. Those still belong to the human column. As I noted in that post: AI tools are a supplement to trial preparation, not a substitute for what happens in the room.
What distinguishes Expert-level work overall is not technical sophistication alone. It is that the tools are proprietary, that the methodology took significant time and domain expertise to develop, and that the institutional knowledge embedded in the tools cannot be transferred quickly even to someone who fully understands the general approach. The gap between knowing the concept and having the working system remains wide.
What Clients Should Ask
Not every engagement requires Expert-level AI, and a firm using AI at the Foundation level is not automatically failing its clients. The right question is not which level is best in the abstract — it is whether the level your lawyers are operating at is appropriate for your matter and your budget, and whether you know what you are getting.
Here are the questions worth putting to any firm that says it uses AI:
-At what level are you actually operating? Ask for specifics. A firm that uses AI seriously should be able to describe what tools it uses, what those tools do, and where the human review happens. Vague references to “AI-enhanced workflows” should prompt follow-up.
-Who built the tools, and does the firm own them? Using a third-party platform and building your own are fundamentally different propositions with different implications for quality, customization, data protection, and what happens if the platform changes its terms or gets acquired. A firm that has built proprietary tools has made a real commitment; a firm using shared platforms has not.
-What happens to our information? At the Foundation level, your confidential information may be feeding a public AI’s training data. At the Expert level, a properly built proprietary system means your data never leaves a controlled environment. The platform your lawyers chose determines the answer to this question — not their intentions, and not what their engagement letter says. As recent case law has made clear, a court evaluating a privilege claim will read the AI platform’s terms of service and draw its own conclusions.
-Does AI use translate into cost savings for me? If an AI tool reduces a task from eight hours to two, the question is whether you are billed for two hours or eight. Some firms pass the efficiency savings to clients; others bill for the work product regardless of how long it took. This is a reasonable question to ask directly and a reasonable expectation to address in the engagement letter.
-How do you verify AI output? AI systems at every level produce errors. What matters is whether the verification process is systematic or ad hoc. A firm with a serious AI practice should be able to describe its review protocol — who reviews, at what stage, and what standards the output has to meet before it reaches you.
-Have these tools been tested in actual litigation or transactions? There is a significant difference between a tool that has been used in live matters and one that has only been tested in controlled conditions. Ask whether the firm’s AI tools have a track record, and whether anyone can speak to the results.
-Does your AI use vary by practice area or matter type? A firm’s AI capabilities may be strong in one practice area and minimal in another. If your matter involves specialized litigation, regulatory work, or a particular industry, it is worth asking whether the AI tools in use were built or configured for that specific context or are being used generally.
-How are you staying current? AI capability is changing fast, and the ethical and legal landscape is changing with it. Bar associations are issuing guidance. Courts are imposing requirements. A firm that takes this seriously should be actively tracking both the technology and the rules governing its use. Ask how they are doing that.
The Window
The in-house counsel who called me was asking, really, whether the firms she hires have already made the investment — or whether they are managing their billable-hour exposure until the AI disruption everyone else is talking about becomes too obvious to ignore.
The honest answer is that the window for building is open but not permanently so. The firms building seriously now will have refined tools, trained workflows, and institutional knowledge that will be difficult to replicate even as the underlying technology becomes more accessible. The firms waiting will be catching up to a target that is still moving.
The question is not whether AI will change legal practice. That question has been settled. The question is whether your lawyers are building the future of their practice — or waiting to be handed one.
If any of this raises questions about how your firm — or the firms you retain — is approaching AI, we are glad to have that conversation.
A Word About Silver Cain
Silver Cain PLC was founded on the premise that businesses deserve both exceptional litigation experience and direct partner access — and that you should not have to choose between them. Leon Silver and Rebecca Cain have spent decades handling the most complex business and real estate disputes in Arizona and nationally. If you are evaluating counsel in Phoenix, we welcome the conversation.

