In July 2019 I led a roundtable discussion on Artificial Intelligence in Legal Research and Law Practice at the American Association of Law Libraries (AALL) annual meeting, which took place in Washington D.C. I was grateful for the invitation from @robtruman, the law librarian at the Lewis & Clark Law School, because the event forced me to review all of the posts on AI and law practice that I’ve been meaning to read, and because any opportunity to talk about AI – the subject my husband studied back in grad school in the late ‘80s, before it was ready for prime time – is always a privilege.
In this post, I’ll share some of the information on AI that I gathered in preparation for my talk. One of MyShingle’s missions has always been to ensure that solo and small firms have current information not just on new technology developments but also on how those new tools can be applied in practice. And because AI is such a fast-moving target that many solo and small firm lawyers haven’t yet had a chance to wrap their heads around, I’ve written a multi-part post that will cover everything that solo and small firm lawyers need to know.
This post is organized as a series as follows. In Part I, I’ll provide an explanation of general AI concepts – which will prove important in understanding the current state of technology. Part II summarizes much of the content that I covered in my AALL presentation, including AI tools of interest to solos and smalls. With new applications launching left and right and limited transparency about what’s actually under the hood, solo and small firm lawyers face the risk of either not having learned about a tool that could help their practice, or worse, employing a tool that produces inaccurate results. And since that’s an ongoing problem that can’t be addressed even in a multi-part post, in Part III, I’ll make the case for why today’s law librarians are uniquely qualified to assist solo and small firm lawyers in staying abreast of AI technologies in a way that vendors or the bars cannot.
Part I: The Basics – What is AI?
Let me begin this series on AI in Legal Research and Law Practice with a foundational discussion of what artificial intelligence is and why it’s such a hot topic right now.
John McCarthy, a computer scientist credited as the father of artificial intelligence, defined AI as “the science and engineering of making intelligent machines” – in other words, machines that can perform functions traditionally associated with humans, such as reasoning, recognizing patterns and drawing inferences.
Though AI research began around the mid-1950s, AI attained more widespread use only within the past few years, thanks to other technology advances and the growing aggregation of big data, which helps train the algorithms that power AI programs.
Today, there are two types of AI – strong and weak. The distinction is best explained in an excellent presentation by Harry Surden on AI and the Law. Surden explains that strong AI involves computers thinking at a level that meets or surpasses human logic – and that there is no evidence that any programs have reached this point. However, there’s a second type of AI known as “weak” pattern-based AI, where computers solve problems by detecting patterns, and which has been used to automate many processes from language translation to self-driving vehicles to email sorting.
In addition to strong and weak AI, Surden identifies two predominant AI techniques: (1) logic and rules-based engines and (2) machine learning. A logic- or rules-based approach entails working with subject matter experts to develop rules that a computer can then apply to automate a process. Surden offers TurboTax as one example of a rules-based approach. Another specific to legal is Neota Logic, a cool tool that I saw in practice years back at the Georgetown Law School Iron Tech Lawyer Competitions. Neota Logic doesn’t require programming knowledge, so lawyers can use it to create tools that will provide answers to clients on wide-ranging topics from eligibility for expungement of criminal records to whether a data breach must be reported.
A second AI technique is machine learning through pattern recognition, where algorithms discern patterns in data and infer rules on their own. As the machine learns from the data, the tools improve over time. Netflix’s automated recommendations and Google’s email system for identifying spam rely on AI-based pattern recognition. In terms of legal applications, machine learning and pattern recognition power tools for contract review and predictive coding in e-discovery to identify responsive documents.
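To make the contrast with hand-coded rules concrete, here’s a deliberately tiny sketch of pattern-based learning in the spirit of email spam sorting. Nobody wrote a rule saying “free” signals spam – the program infers it from labeled examples. The training messages, labels and scoring threshold are all invented for illustration; real systems use far larger data sets and more sophisticated statistics.

```python
from collections import Counter

# Toy pattern-learning sketch: infer which words signal "spam" from
# labeled examples instead of hand-coding rules. All data is invented.
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to noon", "ham"),
    ("draft brief attached for review", "ham"),
]

spam_words, ham_words = Counter(), Counter()
for text, label in training:
    (spam_words if label == "spam" else ham_words).update(text.split())

def classify(text):
    # Score a message by how often its words appeared in spam
    # versus non-spam training examples.
    score = sum(spam_words[w] - ham_words[w] for w in text.split())
    return "spam" if score > 0 else "ham"

print(classify("free prize inside"))  # flagged because "free" and "prize" appeared in spam examples
```

The key point for lawyers: the “rule” about the word free was never written down anywhere – it exists only as a pattern the program extracted from data, which is exactly why the quality of that data matters so much.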
Data analytics is best described as a closely related cousin to AI – but with important differences described here and here. In simple terms, data analytics culls through mountains of data to offer observations on what happened in the past – such as how long a judge typically takes to rule on a summary judgment motion. Data analytics cannot predict the future – though assumptions based on past data can be used to inform or predict future conduct. AI adds another layer, using pattern recognition or machine learning to analyze that data, identify patterns and draw inferences from it.
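The descriptive side of that distinction can be shown in a few lines. A sketch like the following merely summarizes what already happened – the average time a judge took to rule – and makes no prediction at all; the case dates are hypothetical:

```python
from datetime import date

# Toy descriptive-analytics sketch: summarize past rulings to report
# how long a hypothetical judge took to decide summary judgment
# motions. The filing and decision dates are invented.
rulings = [
    {"filed": date(2018, 3, 1), "decided": date(2018, 5, 30)},
    {"filed": date(2018, 7, 10), "decided": date(2018, 9, 18)},
    {"filed": date(2019, 1, 4), "decided": date(2019, 4, 2)},
]

days_to_rule = [(r["decided"] - r["filed"]).days for r in rulings]
average = sum(days_to_rule) / len(days_to_rule)
print(f"Average days to rule: {average:.0f}")
```

Turning that backward-looking average into a forecast about the *next* motion requires an added layer of modeling and assumptions – which is where AI techniques come in, along with their risks.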
Whether an AI system employs a rules-based engine or pattern recognition, it’s fairly easy to imagine the potential for bias or inaccuracy. With a rules-based system, omitting a critical step or inaccurately applying the rules could lead to errors. For example, imagine that a rules-based system includes a rule stating that the statute of limitations for filing a personal injury action is two years from the incident date, but fails to qualify the rule with a caveat for cases involving municipalities, where a notice of claim must be filed within six months of the incident or the claim is forfeited. A lawyer or client applying this rules-based engine to evaluate a case against a municipality might identify a cause of action but nevertheless miss the notice-of-claim deadline.
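A rules-based engine is, at bottom, just encoded if-then logic, so the omission described above is easy to picture in code. Here is a minimal sketch of a hypothetical deadline checker – the limitation periods are illustrative assumptions, not legal advice. An “incomplete” version of this engine would simply lack the municipality branch, and would silently report only the two-year deadline:

```python
from datetime import date, timedelta

# Hypothetical rules-based deadline checker illustrating the pitfall
# described above. Periods are illustrative only, not legal advice.
def filing_deadlines(incident_date, defendant_is_municipality):
    """Return every deadline a complete rule set should surface."""
    deadlines = {
        # General rule: two-year statute of limitations.
        "statute_of_limitations": incident_date + timedelta(days=2 * 365),
    }
    if defendant_is_municipality:
        # The caveat an incomplete engine might omit: a six-month
        # notice-of-claim deadline that forfeits the claim if missed.
        deadlines["notice_of_claim"] = incident_date + timedelta(days=182)
    return deadlines

incident = date(2019, 1, 15)
print(filing_deadlines(incident, defendant_is_municipality=True))
```

Because the engine only knows the rules its builders gave it, a user has no way to tell from the output alone that a rule is missing – which is precisely why understanding what’s under the hood matters.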
Rules-based systems may also make assumptions that reflect bias. This has been a widespread problem in criminal sentencing, where courts may rely on risk-forecasting tools to predict the likelihood of a defendant’s recidivism and then use that score to inform sentencing decisions. Not surprisingly, a ProPublica study found that not only were the results “remarkably unreliable” in forecasting future violent crimes, but the tools also falsely flagged black defendants as future perpetrators in twice as many cases as white defendants.
Pattern recognition systems pose challenges of their own. For starters, they require a large data set to achieve accuracy – something that isn’t necessarily available to many solo and small law firms. The accuracy of pattern recognition also depends on whether the underlying data is “clean” and whether the training data fully represents the scope of the problem.
The purpose of describing some of the challenges of creating accurate AI systems is not to discourage their use but instead to highlight for lawyers the importance of understanding what’s going on under the hood. I’ll return to this subject in Part III when discussing the role of law librarians, because they are uniquely positioned to identify many of these problems. With this background in place, let’s move on to Part II.
Editor’s Note: This series of articles is republished with the author’s permission with first publication on her blog, myshingle.com