Welcome to Original Jurisdiction, the latest legal publication by me, David Lat. You can learn more about Original Jurisdiction by reading its About page, and you can email me at davidlat@substack.com. This is a reader-supported publication; you can subscribe by clicking here.
We’re all familiar with the infamous tale of the lawyers who filed a brief full of nonexistent cases—courtesy of ChatGPT, the AI tool that made up, aka “hallucinated,” the fake citations. In the end, Judge Kevin Castel (S.D.N.Y.) sanctioned the attorneys to the tune of $5,000—but the national notoriety was surely far worse.
The offending lawyers, Steven Schwartz and Peter LoDuca, worked at a tiny New York law firm by the name of Levidow, Levidow & Oberman. And it seems that their screw-up stemmed in part from resource constraints, which small firms frequently struggle with. As they explained to Judge Castel at the sanctions hearing, at the time their firm did not have access to Westlaw or LexisNexis—which are, as we all know, extremely expensive—and the type of subscription they had to Fastcase did not provide them with full access to federal cases.
But what about lawyers who work for one of the nation’s largest law firms? They shouldn’t have any excuse, right?
Whether they have an excuse or not, it appears that they too can make the same mistake. Yesterday, Judge Kelly Rankin of the District of Wyoming issued an order to show cause in Wadsworth v. Walmart Inc. (emphasis in the original):
This matter is before the Court on its own notice. On January 22, 2025, Plaintiffs filed their Motions in Limine. [ECF No. 141]. Therein, Plaintiffs cited nine total cases:
1. Wyoming v. U.S. Department of Energy, 2006 WL 3801910 (D. Wyo. 2006);
2. Holland v. Keller, 2018 WL 2446162 (D. Wyo. 2018);
3. United States v. Hargrove, 2019 WL 2516279 (D. Wyo. 2019);
4. Meyer v. City of Cheyenne, 2017 WL 3461055 (D. Wyo. 2017);
5. U.S. v. Caraway, 534 F.3d 1290 (10th Cir. 2008);
6. Benson v. State of Wyoming, 2010 WL 4683851 (D. Wyo. 2010);
7. Smith v. United States, 2011 WL 2160468 (D. Wyo. 2011);
8. Woods v. BNSF Railway Co., 2016 WL 165971 (D. Wyo. 2016); and
9. Fitzgerald v. City of New York, 2018 WL 3037217 (S.D.N.Y. 2018).
See [ECF No. 141].
The problem with these cases is that none exist, except United States v. Caraway, 534 F.3d 1290 (10th Cir. 2008). The cases are not identifiable by their Westlaw cite, and the Court cannot locate the District of Wyoming cases by their case name in its local Electronic Court Filing System. Defendants aver through counsel that “at least some of these mis-cited cases can be found on ChatGPT.” [ECF No. 150] (providing a picture of ChatGPT locating “Meyer v. City of Cheyenne” through the fake Westlaw identifier).
As you might expect, Judge Rankin is… not pleased:
When confronted with similar situations, courts have ordered the filing attorneys to show cause why sanctions or discipline should not issue. Mata v. Avianca, Inc., No. 22-CV-1461 (PKC), 2023 WL 3696209 (S.D.N.Y. May 4, 2023); United States v. Hayes, No. 2:24-CR-0280-DJC, 2024 WL 5125812 (E.D. Cal. Dec. 16, 2024); United States v. Cohen, No. 18-CR-602 (JMF), 2023 WL 8635521 (S.D.N.Y. Dec. 12, 2023). Accordingly, the Court orders as follows:
IT IS ORDERED that at least one of the three attorneys shall provide a true and accurate copy of all cases used in support of [ECF No. 141], except for United States v. Caraway, 534 F.3d 1290 (10th Cir. 2008), no later than 12:00 PM, Mountain Standard Time, on February 10, 2025.
And if they can’t provide the cases in question, the lawyers “shall separately show cause in writing why he or she should not be sanctioned pursuant to: (1) Fed. R. Civ. P. 11(b), (c); (2) 28 U.S.C. § 1927; and (3) the inherent power of the Court to order sanctions for citing non-existent cases to the Court.” And this written submission, due on February 13, “shall take the form of a sworn declaration” that contains “a thorough explanation for how the motion and fake cases were generated,” as well as an explanation from each lawyer of “their role in drafting or supervising the motion.”
Who are the lawyers behind this apparent snafu? They’re called out by name on page three of the order:
The three undersigned counsel to [ECF No. 141] are:
Mr. Rudwin Ayala;
Ms. Taly Goody; and
Mr. Michael Morgan.
As you can see from the signatures on the offending motions in limine, Taly Goody works at Goody Law Group, a California-based firm that appears to have three lawyers. But Rudwin Ayala and Michael Morgan work at the giant Morgan & Morgan, which describes itself on its website as “America’s largest injury law firm™”—and is the #42 firm in the country based on headcount, according to The American Lawyer.
Moral of the story: lawyers at large firms can misuse ChatGPT as well as anyone. And although Morgan & Morgan is a plaintiffs’ firm—which might cause snobby attorneys at big defense firms to say, with a touch of hauteur, “Of course it is”—I think it’s only a matter of time before a defense-side Am Law 100 firm makes a similar misstep in a public filing.
These “lawyers engage in ChatGPT fail” stories tend to be popular with readers, which is one reason why I’ve written this one—but I don’t want to exaggerate their significance. As I said to Bridget McCormack and Zach Abramowitz on the AAAi Podcast, “ChatGPT doesn’t engage in these screw-ups; humans improperly using ChatGPT engage in these screw-ups.” But the stories still sometimes go viral because they have a certain novelty value: AI is, at least in the world of legal practice, still (relatively) new.
The danger, however, is that the “ChatGPT fail” stories could have a chilling effect, deterring lawyers from (responsibly) exploring how AI and other transformative technologies can help them serve their clients more efficiently and effectively. As McCormack said on the AAAi Podcast after I mentioned the S.D.N.Y. debacle, “I’m still mad at that one Southern District of New York lawyer because I feel like he set the whole profession back by two years. I’m literally so mad at that dude.”
I reached out to Ayala, Goody, and Morgan by email, but have not heard back yet; if and when I do, I’ll update this post. Otherwise, tune in next week, when they’ll file their responses to the order to show cause.
And in the meantime, if you rely on ChatGPT or another AI tool for legal research, please, please use an actual legal-research platform to confirm that (1) the cases exist and (2) you’ve cited them accurately. That’s not too much to ask, right?
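(For the technically inclined: free services can at least automate the “does this case exist” check. Below is a minimal Python sketch that posts brief text to CourtListener’s citation-lookup API and flags citations the database can’t match. The endpoint and response fields are my assumptions drawn from CourtListener’s public documentation, not anything from the Wadsworth filings; and note that proprietary “WL” citations generally won’t resolve in free databases, so those still need checking in Westlaw itself.)

import requests

# A sketch, not a vetted tool: the endpoint URL and field names below are
# assumptions based on CourtListener's API docs (courtlistener.com/help/api/)
# and should be verified before you rely on them.
brief_text = (
    "See U.S. v. Caraway, 534 F.3d 1290 (10th Cir. 2008); "
    "Benson v. State of Wyoming, 2010 WL 4683851 (D. Wyo. 2010)."
)

response = requests.post(
    "https://www.courtlistener.com/api/rest/v3/citation-lookup/",
    data={"text": brief_text},
    timeout=30,
)
response.raise_for_status()

for hit in response.json():
    # An empty "clusters" list means no matching case was found.
    verdict = "found" if hit.get("clusters") else "NOT FOUND - verify by hand"
    print(f"{hit.get('citation')}: {verdict}")

Even when a citation does resolve, you still have to read the case and confirm it says what your brief claims it says; existence is the floor, not the ceiling.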
UPDATE (5:21 p.m.): If you'd like to educate yourself about how you can leverage AI responsibly to better serve your clients, check out these excellent resources from Hotshot—and you can even get CLE credit for it (depending on your jurisdiction).
UPDATE (2/8/2025, 12:48 a.m.): The three lawyers representing the plaintiffs in Wadsworth v. Walmart withdrew their motions in limine on Friday night, as reported by Law360—so presumably they know there are problems with them. But the attorneys have not yet explained why the motions contain what appear to be fabricated authorities.
The Law360 article also contains some details about the lawsuit itself: “The underlying litigation was filed in July 2023 by Stephanie and Matthew Wadsworth on behalf of their four minor children after they allege that a ‘defective and unreasonably dangerous hoverboard’ exploded and caught fire in their home. The family alleges that the product, a Jetson Plasma Iridescent Hoverboard, is defective, hazardous and malfunctioned when it was being used in the intended manner.”
Thanks for reading Original Jurisdiction, and thanks to my paid subscribers for making this publication possible. Subscribers get (1) access to Judicial Notice, my time-saving weekly roundup of the most notable news in the legal world; (2) additional stories reserved for paid subscribers; (3) transcripts of podcast interviews; and (4) the ability to comment on posts. You can email me at davidlat@substack.com with questions or comments, and you can share this post or subscribe using the buttons below.
When I was in law school 48 years ago, I worked in the law library and taught other students how to use Westlaw (and maybe Lexis). This was way before PCs, and so the platforms were run on their own equipment. At that time, I thought about writing a science fiction story about a sinister organization (I was thinking of a litigious religious cult from my hometown) that infiltrated West and changed decisions in subtle ways that favored its long-term litigation strategy.
It was necessary that someone from the organization be an employee of the company because, at that time, there was no way to hack Westlaw’s computers, and, not being Gibson or Stephenson, I did not anticipate that someday any middle-school student sitting in their bedroom could do it. The flaw in my idea, though, was that all the decisions were also in books that everyone could read, so the fraud would easily be uncovered. So I decided not to write the story (I also had no talent for writing fiction, which was another reason).
I think this kind of manipulation remains the greater threat (more so after I found out that justices can alter their opinions months after they are made public; they are the ultimate insiders). One would hope that the consequences of not checking one’s citations would incentivize lawyers to be more careful and that such cases would remain outliers, but there will be instances where decisions are made based upon cases that are entirely fictional. Lawyers are overworked, judges are overworked, and I am afraid that as each generation becomes more reliant on systems susceptible to manipulation, the law will become more unreliable.
Or maybe our emerging AI Overlords will surprise us with their greater sense of Ethics and manipulate decisions to bring greater justice to their creators.