Artificial Intelligence, ABA Formal Opinion 512 And Access To Justice

The American Bar Association’s Formal Opinion 512, “Generative Artificial Intelligence Tools,” is a mixed bag. The Standing Committee on Ethics and Professional Responsibility took a significant step in the right direction by legitimizing the idea that it is appropriate for lawyers to use AI. Equally important, it suggests guardrails necessary for the safe use of artificial intelligence. It also provides valuable guidance on other related topics, most notably client confidentiality.

Unfortunately, Formal Opinion 512 misfires on the issue of attorney fees. It should not be surprising that the ABA would face challenges regulating a complex new technology. For the reasons explained exactly one year ago in my article AI and the Organized Bar: Lessons from the eLawyering Project, the ABA’s nature and structure sometimes impede optimal decision-making on public-interest issues.

This issue is significant beyond its potential impact on lawyers’ bottom lines. The ABA’s overly restrictive policy can adversely affect many Americans’ access to high-quality, affordable legal services.

Context of Formal Opinion 512

The earliest uses of AI in the legal field involved issues like discovery in litigation, contract analytics, basic legal research, and even predicting ways judges might rule on a legal question based on the judge’s previous rulings.

Formal Opinion 512 deals with a more sophisticated and controversial subset of AI called Generative Artificial Intelligence (GAI). GAI can create what appear to be original work products, such as letters, contracts, briefs, and other legal documents. The ABA’s distinction between AI and GAI is correct, but to avoid confusion, I’ll use the more common acronym AI in this article, except when quoting an ABA document.

The Standing Committee did not write Formal Opinion 512 on a blank slate. Reviewing earlier guidance helps explain the ABA’s incremental steps in addressing ethical considerations around AI use. Previous guidance includes:

The U.S. Patent and Trademark Office and the state bars of California, Florida, New Jersey, New York, the District of Columbia, and Pennsylvania have all issued guidance on ethical issues related to lawyers’ use of AI.

The ABA issued two relevant resolutions:

  • Resolution 112 (2019) urged courts and lawyers to address key ethical and legal issues.
  • Resolution 604 (2023) urged AI developers and governmental entities to ensure AI use is subject to human oversight, to hold parties accountable for AI-related consequences, and to guarantee transparency and traceability.

The ABA followed Resolution 604 by creating a Task Force on Law and Artificial Intelligence.

Things the ABA’s Standing Committee Got Right

The best thing about Formal Opinion 512 is that it gives a qualified green light to the idea that it is appropriate for lawyers to use AI. The importance of this should not be underestimated. The ABA tends to be conservative (with its approach to culture-war issues like abortion, gun rights, and DEI being notable exceptions). Many lawyers three decades ago were initially reluctant to adopt the Internet in their law practices. Acknowledging the legitimacy of another new technology, AI, as a tool for lawyers must have been a hurdle for some ABA members. The Standing Committee on Ethics and Professional Responsibility cleared this hurdle quite well.

The Standing Committee was also on target in clarifying that using AI implicates the duty of competence in three ways. Lawyers need to:

  1. Know the AI tools available
  2. Understand the capabilities and limitations of any AI tool the lawyer chooses to use
  3. Ensure that the AI tool does not return inaccurate information

Merely ignoring AI may mean a lawyer fails in the duty of competence. Lawyers who use AI must understand it and take reasonable measures to guard against using inaccurate information in the form of “hallucinations” or otherwise.

Formal Opinion 512 performs a major public service by raising concerns about confidentiality, such as the potential exposure of client data during AI tool interactions.

ABA Model Rule 1.6, Model Rule 1.9(c), and Model Rule 1.18(b) require lawyers to keep confidential all information relating to the representation of current, former, and prospective clients. These duties apply regardless of the source of the information unless the client gives informed consent, disclosure is impliedly authorized to carry out the representation, or an exception permits disclosure.

There is nothing magical about AI use that excuses lawyers from these duties. Indeed, the nature of AI provides multiple opportunities for the improper disclosure of client information. Formal Opinion 512’s analysis and recommendations on confidentiality are excellent. I have little to add, so let’s move instead to what I consider the opinion’s most problematic aspect: its handling of legal fees.

Legal Fees: A Missed Opportunity

As Mark C. Palmer notes in his Attorney At Work article Be Reasonable, People! AI’s Impact on Legal Fees, the intersection between legal ethics and legal billing practices is one of the most pressing issues in legal ethics today. It’s essential to get it right.

Formal Opinion 512 states “lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time.” No big controversy there. There is a contract between the lawyer and the client, and fees must accord with that contract. “Value billing” is a no-go, absent an agreement with the client.

The opinion goes astray when considering alternative billing models like flat fees:

The factors set forth in Rule 1.5(a) also apply when evaluating the reasonableness of charges for GAI tools when the lawyer and client agree on a flat or contingent fee. For example, if using a GAI tool enables a lawyer to complete tasks much more quickly than without the tool, it may be unreasonable under Rule 1.5 for the lawyer to charge the same flat fee when using the GAI tool as when not using it. “A fee charged for which little or no work was performed is an unreasonable fee.” [Footnotes omitted].

What is unreasonable is penalizing lawyers for efficiency. I can’t explain it better than the ever-astute Greg Siskind, so I’ll quote his comment as posted on his LinkedIn feed:

You select a car based on a lot of subjective factors, including the reputation of the manufacturer and the perceived quality of the vehicle. Drivers usually don’t care whether the car is completely handmade or built with robots. But they do tend to care about things like the buying experience, the car’s look and feel, and the reputation and trustworthiness of the manufacturer. The buyers don’t care about how much automation was used in making the car. If a manufacturer produces a great product at a much lower cost to make the product because of superior technology, buyers are happy to reward them with higher profits. We would think it absurd if the government said that a car manufacturer had to lower their prices because of the savings they achieved through robots and automation. That would be the role of a competitive market. Why is law different?

Greg Siskind, as quoted in Formal Opinion 512 and the Reasonableness of Fees When Using AI.

The Need for Better Approaches to Legal Fees for AI-assisted Work

No doubt, the ABA’s caution on billing for AI-assisted work stems from a desire to protect clients from unanticipated harms associated with rapidly evolving AI technologies. Unfortunately, the ABA’s approach risks discouraging efficiency and innovation by protecting the billable hour model instead of encouraging alternative billing models. It also weakens the incentive to adopt AI, a technology that could help narrow the large access-to-justice gap facing many Americans.

Small firms, for example, could use AI to automate routine tasks and serve low-income clients at reduced cost.

The ABA’s approach tends to reward large corporate law firms, which can more easily absorb the costs of AI adoption and use economies of scale to maintain profitability. Smaller firms, which rely on alternative fee structures to attract clients, might struggle under the ABA’s limitations, curtailing their ability and willingness to experiment with more efficient approaches.

Flat-fee arrangements are inherently designed to reflect the value of a task, not the time spent. Imposing limitations on how much a lawyer can charge based on AI use contradicts the principle of flat-fee billing, where the client agrees to pay a fixed amount regardless of how the work is done. This could create confusion or unfair outcomes if lawyers feel pressured to base fees on effort rather than outcome.

The ABA’s focus on preventing “unjust enrichment” overlooks the nuanced nature of value in legal services. The ABA predicates its restrictive approach on the assumption that AI tools reduce the value of a lawyer’s service if they reduce time or effort. This assumption ignores the complexity of legal work:

  • Strategic Judgment: AI can draft a document but cannot replace a lawyer’s judgment in tailoring it to a client’s unique situation.
  • Risk Mitigation: The actual value of legal advice often lies in preventing future disputes or liabilities, which may not be immediately quantifiable.
  • Client Satisfaction: Clients may value speed and efficiency as much as (or more than) the lawyer’s time.

Efficiency should be rewarded, not punished. The ABA should encourage hybrid models that combine flat fees with premiums for expertise or contingent bonuses tied to outcomes.

It is wrong to prioritize the billable hour model over more efficient AI-assisted billing models. To remain relevant in an AI-driven world, the ABA must endorse more flexible billing approaches that encourage innovation while maintaining ethical standards.
