Courts have always expected lawyers to exercise judgment about the tools they use. Lawyers have been sanctioned for relying on sloppy investigators, dubious “experts,” and forms they did not read carefully. The only real difference with AI is speed and scale: we can now make much bigger mistakes much faster, with a level of confidence that makes those mistakes harder to see.
Over the last couple of years, judges have started to say the quiet part out loud. Orders warning lawyers (and pro se litigants) about AI-generated filings are popping up across the country. Sanctions have been imposed where attorneys submitted briefs filled with hallucinated cases, cited authorities that do not exist, or misrepresented how they used AI in the first place. A federal appeals court has gone so far as to caution self-represented parties that they will not get a free pass for relying on chatbots to get their law right.
If your instinct is “that’s just a few outliers,” you may be missing the point. The pattern in these decisions is not a reaction to a specific brand of AI or a particular interface. It is a restatement of a very old rule: as far as the court is concerned, the work is yours. The fact that a machine helped you write it is not a defense.
When Your Co-Counsel Is a Chatbot
Think about how AI is actually creeping into daily practice. A lawyer asks a tool to draft a motion to dismiss, to summarize a case, to turn bullet-point notes into a demand letter, or to suggest arguments for a sanctions response. The output looks fluent, the citations look plausible, and the pressure of the calendar is very real. It is tempting to treat the tool as a junior associate who never sleeps.
But AI is not a junior associate. It does not know when it is hallucinating a case. It does not understand that a mis-cited statute will cost your client real money. It cannot feel the weight of Rule 11. We can use it as a drafting accelerator, a brainstorming partner, or a way to break through writer’s block. We cannot delegate our duty of competence to it.
That distinction is at the heart of the sanction cases you’ve read about. In Mata v. Avianca and similar decisions, the problem was not “using AI.” The problem was failing to check whether the citations were real, failing to read what was filed with the court, and then misleading the judge about what had happened when the issue came to light. The sanctionable conduct was human, not digital.
What About Pro Se Litigants?
A related question is what recourse a pro se litigant has if the AI tool they relied on gets them sanctioned, saddles them with the other side’s attorneys’ fees, or costs them a case that might otherwise have been winnable.
The uncomfortable answer, at least under current law, is “not much.”
The new Nippon Life Insurance Company of America v. OpenAI Foundation lawsuit in the Northern District of Illinois underscores how unsettled this landscape really is. In that case, Nippon alleges that ChatGPT effectively “practiced law” without a license by encouraging a former disability claimant to try to reopen a fully settled case, drafting her motions, and generating legal arguments that led to a flood of meritless filings. Nippon is not a pro se litigant; it is an institutional plaintiff seeking damages and an injunction. But the fact that a sophisticated repeat player is now asking a federal court to treat a consumer-facing AI tool as an unlicensed lawyer shows that the “no recourse” assumption is starting to be tested in real litigation, not just in law review articles.
Most publicly available AI tools come wrapped in terms of service and disclaimers that say, in substance:
- This is not legal advice.
- We do not guarantee accuracy.
- You are responsible for how you use the output.
That is not just boilerplate. It is the platform’s way of telling the user, and any court that might later look at the relationship, that there is no attorney-client relationship, no duty to give correct legal advice, and no promise that the tool’s answers are fit for any particular purpose.
Could a harmed pro se litigant try to sue anyway? Of course. They might allege negligence, misrepresentation, unfair consumer practices, or product liability. But each of those theories runs into the same brick wall: absent a special relationship or specific false statement by the provider, it is very hard to turn “I used a free (or cheap) AI tool that told me the wrong law” into a legal claim that survives a motion to dismiss.
In other words, the court is likely to treat the tool the same way it treats a bad Google search or an out-of-date library book: a resource the litigant chose to rely on at their own risk. The primary “recourse” remains the appellate and post-judgment procedures that already exist, not a separate claim against the AI company.
For now, Nippon is an outlier and a live dispute, not a roadmap. Until courts actually hold an AI provider liable for unauthorized practice, negligence, or something similar, the safest working assumption remains that judges will see these tools as powerful typewriters, not as accountable counsel. That makes it even more important for courts, bar regulators, and practicing lawyers to be clear with self-represented parties, and to understand themselves, that “my chatbot told me to” will not undo sanctions, fee awards, or adverse judgments.
The Emerging Standard: Sanction-Proof AI Use
So what does it mean, in practical terms, to use AI in a way that is “sanction-proof”? No checklist can eliminate all risk, but a few principles are starting to crystallize from the case law and commentary.
- Own the work product.
If your name is on the signature block, you should be able to answer three basic questions about any AI-touched filing:
- What prompts or instructions did you give the tool?
- How did you verify the output, especially the citations and factual statements?
- What independent research and judgment did you apply before filing?
If you cannot articulate that, you are not supervising the tool; the tool is supervising you.
- Treat AI output as a first draft, never as authority.
AI can propose cases, but it cannot be the case law. Every authority that appears in a brief should be independently confirmed through your usual research channels. That means pulling the cases, reading them, and ensuring they say what the brief claims they say. It is not enough to see a plausible party name and citation format in the footnote and assume it must be real.
- Disclose when courts require it, and be careful when they don’t.
Some judges and courts now require certification that AI was not used, or that any AI use was carefully checked. Others simply warn that AI-generated submissions must comply with existing rules. Where disclosure is required, follow the order to the letter. Where it is not, think strategically about whether disclosing your process enhances your credibility or just invites unnecessary side fights. But remember this: misleading the court about your use of AI, or staying silent when the court has expressly asked, is far more dangerous than admitting you used a tool and explaining how you verified the work.
- Build AI checks into your internal process.
If your firm is going to use AI at all, it should not be happening in an ad hoc way at the associate’s keyboard. You need a written policy and a workflow that bakes in verification:
- Which tasks are AI-eligible (e.g., idea generation, style rewrites) and which are not (e.g., final research memos, citations)?
- Who is responsible for checking AI-influenced work before it goes out the door?
- How are you training staff and lawyers to recognize hallucinations and limitations?
Judges are increasingly explicit that they expect firm-level controls, not just individual good intentions.
- Update engagement letters and client conversations.
Clients are experimenting with AI on their own. Some will come to you with AI-drafted complaints, letters, or “research” they want you to adopt. Others will assume you are using AI and expect you to pass along any cost savings. Either way, your engagement letters and early conversations should set expectations:
- You may use AI as an internal tool, but you remain responsible for the work and will not blindly rely on it.
- You do not advise clients to rely on AI tools on their own as a substitute for legal counsel.
- If a client insists on a course of action driven by AI “advice,” you reserve the right to withdraw or to document that you advised against it.
This is not only about managing risk; it is about preserving the human judgment the client is actually paying for.
Competence in the Age of Machines
In the end, “AI, sanctions, and the new standard of competence” is not a story about software. It is a story about professional responsibility under new conditions. The rules have not really changed: investigate, verify, tell the truth, and own your filings. The difference is that we now have a tool that can produce authoritative-sounding legal language on demand, with no awareness of consequence.
For pro se litigants, that tool is seductive—and, when it goes wrong, there is little recourse beyond the existing appellate process. For lawyers, it is powerful—but the responsibility remains exactly where it has always been: with us. The courts are not warning against AI because they hate technology. They are warning us not to forget who is actually practicing law.

