One reason artificial intelligence is a hot topic in law: When attorneys miss precedents, the stakes are high.
How good are you at legal research? However good you may think you are, the judges you litigate before likely have a different perception.
Recent research conducted by Casetext found that judges hold a surprisingly consistent opinion of the work they see from us litigators: they believe attorneys often miss important cases, and that when they do, it has real consequences for the course of the litigation.
A survey of over 100 federal and state judges revealed some pretty shocking statistics. First, every single judge we surveyed said that they or their clerks have discovered relevant precedent that the parties before them missed. Over a quarter of judges (27%) said that they or their clerks catch precedents that attorneys should have cited, but didn’t, “most of the time” or “almost always.” The vast majority of judges (83%) say that they see this problem at least some of the time. A small minority (16%) say they rarely, but still sometimes, catch litigators missing cases they should have cited in their submissions. And again, not one of the more than 100 judges surveyed said this wasn’t a problem at all. (If we had limited our survey to federal judges only, the statistics would be even worse.)
Missing precedents has real consequences. More than two-thirds of the judges surveyed said that attorneys’ failure to cite relevant cases has materially affected the outcome of a motion or proceeding before them.
A few judges expanded on the issue of missing cases in written responses to us. A judge in the Southern District of New York recounted a criminal case in which the government attorneys, all well-meaning and effective lawyers, missed a critical controlling case on an evidentiary issue; that precedent proved dispositive of the motion, controlled the outcome of the case, and led to an acquittal. The judge and his clerks found the key case themselves the night before arguments on the motion in limine, and the government was unable to distinguish it at that stage. “Even excellent lawyers miss things, and sometimes it materially impacts the outcome,” said the judge.
Another judge gives the overall quality of research he sees a “C+,” with a handful of remarkably well-researched briefs but “very few A+’s.” In an American Judges Association blog post from a year ago, Minnesota Judge Kevin Burke notes that even briefs that seem “facially plausible” sometimes turn out, upon further research by the judge and their clerks, to be “contrived of BS.”
So, you probably get the picture. The average researcher isn’t doing great according to the one group of people for whom it really matters: the judges.
I believe this is one of the less talked-about but most important reasons that attorneys are turning to new technologies like artificial intelligence to help them with age-old tasks like legal research. According to judges, the profession is averaging a pretty mediocre grade, even for excellent lawyers. The task is hard, we are often under intense time pressure, and the legacy tools attorneys are accustomed to using in their research do not solve those problems. We all know on some level that we could be doing it better, and technology can help.
We’ve already seen this outside the legal research context: LawGeex, an artificial intelligence contract review company, ran a study showing that human contract reviewers often performed worse than an automated review by its program. Specifically, the study found that LawGeex achieved a 94% average accuracy rate at surfacing risks in non-disclosure agreements (NDAs), one of the most common legal agreements used in business, compared to an average of 85% for experienced lawyers. We’ve run similar studies focused on legal research and will be reporting those results publicly soon. But we’ve also seen quite a few anecdotes indicating that A.I.’s impact on legal research is very similar to its impact on contract review.
For example, last year a lawyer in a high-stakes banking case before the Ninth Circuit filed a motion to bring to the court’s attention significant precedents that were missing from his brief; the attorney had first begun using Casetext’s proprietary artificial intelligence algorithm only after all the briefs were filed.
Not only are attorneys turning to new technologies to overcome the inherent limitations of legal research; so are the judges themselves. In the American Judges Association blog post mentioned above, Judge Burke explains why judges are starting to use artificial-intelligence-enhanced legal research, specifically Casetext’s CARA A.I. technology, which is freely available to the judiciary: “In a perfect world, litigants would cite to all relevant case law in their briefs. In the real world, litigants often do not. A new research tool, CARA, can help judges and their clerks quickly find important case law that the parties may have overlooked.”
This raises an important point: you are likely missing more relevant precedent than you think, and judges are not only noticing but are also starting to use advanced A.I. technology to find the cases you’re missing. That, more than anything else, just might be the catalyst that motivates more lawyers to start grading themselves and to rethink their research strategy.
Editor’s Note: This article is published with permission of the author with first publication on Above the Law.