New Court Rules For AI: Stop The Insanity
Regulating AI use by lawyers requires rationality, not red tape.
Welcome to Original Jurisdiction, the latest legal publication by me, David Lat. You can learn more about Original Jurisdiction by reading its About page, and you can email me at davidlat@substack.com. This is a reader-supported publication; you can subscribe by clicking here. Thanks!
A version of this article originally appeared on Bloomberg Law, part of Bloomberg Industry Group, Inc. (800-372-1033), and is reproduced here with permission. The footnotes contain material that did not appear in the Bloomberg Law version of the piece (which is subject to a length limit). You can think of the footnotes as a form of “bonus content” for Original Jurisdiction subscribers.
In his 2023 year-end report on the federal judiciary, Chief Justice John Roberts highlighted the promise and perils of artificial intelligence for the legal system. “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike,” he wrote. “But just as obviously it risks invading privacy interests and dehumanizing the law.”1
At this early stage, lawyers and judges have only a limited sense of how AI will affect the practice of law or the enterprise of judging. It would therefore be wise to pause and gather more data before taking action. But some judges are falling all over themselves to create AI-specific orders, rules, and disclosure requirements.
At least 21 federal trial judges have already issued standing orders regarding AI, according to Bloomberg Law. The Fifth Circuit is considering a proposal that would require attorneys to confirm that they checked the accuracy of AI-generated material. The Ninth Circuit has created an AI committee that could end up proposing AI-related rules, and so has the Third Circuit. State courts are convening AI committees as well.
I have a simple message for judges who are thinking about adopting AI-specific orders and rules.
Just. Say. No.2
One can understand why judges have felt the need to take action. AI is dominating everything from newspaper headlines to cocktail-party chatter. There’s a reason why Chief Justice Roberts made it a focus of his annual report.
And stories about “AI fails” by lawyers have gone viral, such as the tale of two Manhattan lawyers who filed a brief in federal court that cited nonexistent cases generated—or “hallucinated,” to use the technical term—by ChatGPT. But do a few highly unusual fiascos, involving a small number of attorneys, justify imposing additional rules and requirements on all lawyers?
“ChatGPT, and AI in general, can be misused,” acknowledged Ross Guberman, CEO of BriefCatch, which produces legal-editing software that includes AI features. “But I’m concerned that some courts are issuing sweeping anti-AI rules based on anecdotes.”
I see several problems with judges saddling lawyers with AI-specific rules and requirements. First and foremost, they’re simply not necessary.
“Any such rules are redundant given lawyers’ existing responsibilities to ensure the accuracy of their court filings,” said lawyer and legal journalist Bob Ambrogi, publisher of the legal-technology blog LawSites. “Rule 11 in the federal courts and similar state rules require lawyers to certify the factual and legal accuracy of their pleadings. Professional responsibility rules impose similar requirements. To create a new ‘accuracy’ rule specifically related to the use of AI is unnecessary.”
AI is just another tool in a lawyer’s toolkit. If lawyers build something defective with their tools, that’s the fault of the lawyers—which is why existing rules target lawyers, not the specific tools they use.
“A lawyer’s use of AI to assist in drafting is no different than the use of an associate or of legal editing software,” Ambrogi told me. “No matter how the draft was prepared, the lawyer is ultimately responsible for its contents. When a lawyer submits a filing with fictitious or erroneous citations, the fault is the lawyer’s, not the technology’s.”3
Or as lawyer and legal commentator Carolyn Elefant put it, “GenAI is nothing more than the canary in the coal mine. The toxic uses are all completely human.” Like a canary in a coal mine, an AI disaster points to another problem: incompetent or unethical lawyers. There’s nothing wrong with the canary.
“In every case so far of filings containing hallucinated cases, the fault was either in a lawyer who did not bother to even check the cases or a self-represented litigant who did not know better,” Ambrogi added. “This is simply bad lawyering—and bad lawyering that existing rules are more than sufficient to address, as sanction awards have already demonstrated.”
Exhibit A: the lawyers from the ChatGPT debacle. Judge P. Kevin Castel (S.D.N.Y.) sanctioned them using good old-fashioned Rule 11—no fancy new AI rule needed.
Second, AI-targeting rules and requirements carry significant costs. In addition to forcing lawyers to spend (or waste) time and money on compliance, they send a negative message about AI that could discourage attorneys from exploring the many positive uses for AI.
As Elefant wrote of one judge’s standing order on AI, it “isn’t just duplicative, but dangerous.” Such orders have the potential to “stymie innovation and scare lawyers from using a powerful tool.”4
New AI rules “might seem harmless, but lawyers take court pronouncements seriously,” Guberman of BriefCatch said. “The risk is that the entire profession will miss out on AI’s vast potential for enhancing both lawyering and access to justice.”
Third, rules aimed at AI create tension between clients and their outside law firms, according to Alex Su of Ironclad, an AI-powered contracts software company.
How so? Corporate legal departments are under pressure from CEOs and other top executives to leverage AI—which is why in-house lawyers are embracing AI much more quickly than law firms. But when they encourage their outside counsel to use AI solutions for efficiency and cost savings, they often get pushback—partly because lawyers at firms, who deal more directly with courts and judges than their clients do, are getting negative messaging about AI from the judiciary.
“Law firms are already conservative about AI, and understandably so,” Su told me. “But when courts impose all these new rules and disclosures, they push things a bit too far.”5
Finally, if we must have AI-specific rules, it would be nice to have uniformity.
“While I do not believe any new rules are needed, I am also concerned about the Babel-like approach we’re seeing so far, of individual courts adopting their own rules,” Ambrogi said. “If there is to be rulemaking around this, it should be done in a uniform and deliberative manner.”
The prospect of lawyers having to comply with a welter of competing, inconsistent rules counsels in favor of individual judges holding off on AI-specific requirements for now. Instead, jurists concerned about AI should advocate within the judiciary for a broader, coordinated response.
As Chief Justice Roberts’s year-end report made clear, AI isn’t going anywhere. There will be plenty of time to develop new rules if necessary, after lawyers and judges have a better sense of the actual problems and pitfalls.
In the Chief Justice’s words, using AI “requires caution and humility.” And judicial efforts to regulate the use of AI by lawyers require caution and humility as well.
Thanks for reading Original Jurisdiction, and thanks to my paid subscribers for making this publication possible. Subscribers get (1) access to Judicial Notice, my time-saving weekly roundup of the most notable news in the legal world; (2) additional stories reserved for paid subscribers; and (3) the ability to comment on posts. You can email me at davidlat@substack.com with questions or comments, and you can share this post or subscribe using the buttons below.
Most news reports about the Chief Justice’s report focused on his AI discussion, but it’s actually a broader, interesting history of the use of technology by the federal judiciary. It’s short, only seven pages (excluding the appendix about caseloads), and worth a quick read.
Yes, my fellow ‘80s kids, this is a Nancy Reagan shoutout.
In other words, according to Ambrogi, the new AI rules “conflate technology with competence.”
For more from Carolyn Elefant, check out her extensive comments on the proposed Fifth Circuit rule that would require disclosure of AI use and human verification of sources. Her main points: (1) “The proposed rule unfairly targets AI-generated research even though the problem of inaccurate citation long predates AI,” (2) “The proposed rule is impossible to implement without undue burden on filers,” and (3) “Mandating disclosure of AI tools undermines the work-product privilege.”
Another point Su made to me: what are we talking about when we talk about AI? Many traditional tools for lawyers, such as Westlaw and Lexis and Bloomberg Law, incorporate AI features. To what extent is usage of those tools subject to disclosure?
I agree that AI merely shines a spotlight on the egregiously deficient lawyering and judging that has been far too common for far too long. Even when the cited authority is real, the proposition for which it is cited too often is false. Justice Scalia (and SCOTUS) repeatedly spoke to this problem. See, e.g., Brogan v. United States, 522 U.S. 398, 400 (1998). “While communis error facit jus may be a sadly accurate description of reality,” it is not “jurisprudence.” “Courts may not create their own limitations on” any law (including the Constitution) “no matter how alluring the policy arguments for doing so, and no matter how widely the blame may be spread” among any quantity or quality of courts. Abuses of AI merely highlight how some lawyers and judges have been abusing citations for generations. Artificial Intelligence isn't the problem. The problem is a paucity of real integrity and real intelligence.
I support the additional step for AI-generated filings, because there are enough instances of false AI citations to make misleading a tribunal a non-trivial possibility. It encourages the otherwise innocent litigant to make sure AI hasn't hallucinated. The ordinary sanctions for misleading a tribunal have not been enough to stop litigants from making false filings, however inadvertently. Signing a filing that one has prepared oneself, where there are meaningful direct personal sanctions, is one thing, but what meaningful sanctions are there against the AI itself? Some research supports the importance of that additional signature as a means of preventing fraud. See, e.g., What's in a name? https://www.sciencedirect.com/science/article/abs/pii/S0022103115000979