In the legal world, we’re still mostly at the “promise of change” stage, despite the unprecedented numbers among us experimenting with, adopting and building software tools and features that all but the most technologically literate would have considered witchcraft a mere twelve months ago. I spend a lot of time with people at all points of the awareness and engagement spectrum, and I’ve never experienced a truer realization of the William Gibson line: “The future is already here, it’s just not evenly distributed.” Whether speaking with lawyers and law students who haven’t gotten around to trying ChatGPT or collaborating with post-doc experts in explainable and legal AI with 20+ years of machine learning and Natural Language Processing experience, I’m no closer to understanding in what way and precisely when permanent change will come, but I am unshakeably convinced that change will be enormous, uneven, disruptive and, in many cases, invisible.
To use a narrow example, the costs of using AI to extract critical insights from complex documents or sets of documents, and to create high-quality summaries of those same documents, all at the level of highly capable humans, have dropped by orders of magnitude in a very short time. When something of potential value that used to cost tens or hundreds of dollars to create now costs only pennies, it doesn’t necessarily follow that we suddenly want more of it, especially if high costs previously meant we sought ways to limit our reliance on it. This speaks to the tension of the moment. While the builders are discovering new paths and exploring new ideas, the users haven’t quite thought through what it is they want from this new magic wand. Do you want a summary of every case in a database? Probably not, but you may appreciate seamless, on-demand and instant access to a summary of any case you read or citation you see. The tech creates the potential for change, but nothing changes until behaviours change.
To go a little further, consider the demonstrated ability of language models (with the right programming pipelines and parameters) to surface, analyze, extrapolate, interpolate, reason, argue, self-evaluate, course-correct, invent, challenge, polish, organize and format everything from simple passages to complex ideas to whole libraries. While not quite available now in off-the-shelf products for every research and writing task known to the legal community, every last one of these capabilities (and more) is quietly (or not so quietly) being baked into the software we use every day: everything from our legal research platforms to our document management and practice management systems to the full suite of our Microsoft and Google tools. As (not if, or even when) this happens, adoption rates will be extremely uneven, and while not all early adopters can be assured of landing on the optimal tools or approaches, some will find themselves far out ahead of their competitors and even of their own expectations and ambitions. For those taking it slow – the penguins who watch from the top of the ice floe to see if the ones who jumped in first get eaten – the penalties of following slowly will initially be non-existent to negligible. Right up until they aren’t.
Bearing in mind my admission that I’m still fuzzy on the whole “what will change and when that change will come” thing, I’d still like to offer you two pieces of advice:
- try new generative AI tools early and often (and not just the ones targeted at legal) to get a feel for what’s possible;
- look for the greatest near-term value where narrow AI features are woven into the tools you use to address your everyday practice challenges.
For the most part, the first iteration of standalone products built around the generative AI outputs of large language models is a set of experiments in finding value and product-market fit, but these are useful experiments nonetheless, ones that need user feedback to reveal and realize the true potential in the next iteration. Embedded capabilities, on the other hand, get you closer to the sense that everything and nothing is changing, as the value weaves into your workflow.
Oh…and good luck in the brave new world.
Author’s reply to a reader’s comment: I think every lawyer should experiment with ChatGPT on non-client and non-legal material, for two reasons: first, to build an awareness and some skills around “prompting,” as it’s neither searching nor question-and-answer; and second, to broaden their exposure to the various ways the tool can be wrong. Think of this as their own personal Karate Kid-style training to build the intuition for when and how to use similar tools effectively, including on legal topics.
To the “safe for use with client confidential information” question: with traditional tech companies (especially Microsoft) embedding these capabilities in existing tools, the exposure is already happening, or at the very least imminent. Sticking with Microsoft Word as an example, the trust granted to a piece of software for its core purpose will extend into trust for engagement with generative AI within that same application. Suppliers have many options, to varying degrees, for securing user data against unauthorized use by the underlying language models, and legal users should be asking generative AI suppliers the same security questions they would ask of any other enterprise software supplier.
Editor’s Note – Republished with permission of the author; first published on Slaw.