Class was humming right along as we worked through some Advanced Legal Research small-group exercises. I was excited to be in the same room with these upper-level law students, watching them work in groups of two or three while I walked around, answering questions and offering tips. It had been far too long since I had been in a classroom like this.
At the end of the table a hand went up. I walked over, and the two students showed me that they were both using the same online legal research service, with the same search string in the same selected jurisdiction and content. Why did they have different sets of results?
There it was, right in front of us: just how problematic natural language searches, algorithms, and AI can be.
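None of us outside the vendors knows why the two students' results diverged; the ranking functions are trade secrets. But a toy model can make the possibility concrete. The short Python sketch below is entirely hypothetical, not any vendor's actual algorithm: it simply shows that if a ranker blends a shared relevance score with any user-specific signal (search history, a profile, an A/B test bucket), two users will get different orderings for the identical query.

```python
# A toy, purely hypothetical ranking model -- NOT any vendor's actual
# algorithm. It illustrates one way identical queries can return
# different orderings: the ranker blends a shared relevance score
# with a user-specific signal.
import hashlib

DOCUMENTS = ["Smith v. Jones", "In re Doe", "State v. Roe", "Brown v. Board"]

def relevance(doc: str, query: str) -> float:
    # Shared, deterministic score: crude word overlap as a stand-in
    # for real text matching.
    return sum(word in doc.lower() for word in query.lower().split())

def personal_signal(doc: str, user_id: str) -> float:
    # User-specific adjustment: a stable pseudo-random value per
    # (user, document) pair, standing in for personalization.
    digest = hashlib.md5(f"{user_id}:{doc}".encode()).hexdigest()
    return (int(digest, 16) % 100) / 100.0

def rank(query: str, user_id: str) -> list:
    return sorted(
        DOCUMENTS,
        key=lambda doc: relevance(doc, query) + personal_signal(doc, user_id),
        reverse=True,
    )

query = "negligence duty of care"
print(rank(query, "student_1"))
print(rank(query, "student_2"))  # same query, possibly a different order
```

If anything like this is happening inside a legal research platform, then the "same search" is not really the same search at all.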
Having been away from law libraries for several years, working in general undergraduate reference departments, I had begun to look carefully at, and talk to my students about, the bias inherent in artificial-intelligence-powered search algorithms, both on the internet and in commercial databases. I had not considered how bias in AI might affect legal research. Yet here it was, right in front of me on my students' laptops, two weeks into our fall ALR class.
Bias in an AI-powered search is deeply troubling in any context, but algorithmic bias in the context of legal research is even more disturbing. In ALR class, students work with hypothetical research problems. I paused to think: what about potential AI bias in real-world legal practice? What if a large sum of money is on the line? Or, even more troubling, someone's freedom from incarceration, or even their life?
How can we learn more about how these algorithms function inside proprietary legal research platforms? Because the algorithms are carefully protected proprietary information, ALR instructors like us cannot explain how these AI searches work, or how our students can assure themselves that they have done the most thorough searches possible. Even we law librarians, instructors in effective legal research, do not know what is going on inside the black box of AI-based searches in legal research platforms.
Researchers like Safiya Noble, Eli Pariser, and others examine AI and bias in information retrieval, focused largely on Google, other internet search tools, and social media. But who is looking at the legal research platforms?
As AI searching in legal research platforms continues to evolve, with more students and attorneys defaulting to natural language searching over terms-and-connectors searching or finding tools such as indexes and secondary sources, what does that mean for legal research? How do we break open the black box and see what is going on within those searches and results? How can we make sure bias in AI does not creep into legal research, and from there into our legal ecosystem? A few articles address bias in legal research; see, for example, Susan Nevelow Mart, The Algorithm as a Human Artifact: Implications for Legal Search, 109 Law Libr. J. 397 (2017). More recently, articles on Critical Legal Research have appeared, such as Critical Legal Research: Who Needs It?, 112 Law Libr. J. 237 (2020), in which the author observes, "… law librarians also have an obligation to interrogate claims of objectivity and neutrality, to promote transparency, and to do their part to ensure that our legal system becomes more, not less, equitable. To do so–and to answer this article's titular question–we will all need to practice and teach CLR in the age of AI."
As we move further and further down the AI rabbit hole, with more and more computer-assisted or even computer-led tasks, like brief-analysis tools, what role does the law librarian play in teaching students about AI and search? How do we insert ourselves, as the experts in legal research, into the challenge of AI in legal research?