Now, I am not personally for or against the use of Advanced Technology; I am only actively monitoring the various emerging developments in this field. So I was both curious and shocked to see the MIT Future Tech group announce: “A new public database, the AI Risk Repository, lists ALL the ways AI could go wrong. Its creators hope their work could lead to further research to determine which risks to take more seriously.”
At present, there are an estimated 777 ways that artificial intelligence could go awry. This is a sobering thought. The good news is that the first step in addressing any problem is often acknowledging that it exists and, to this end, scientists and researchers now have a functional database listing all 700+ risks. It is also important to note that, in addition to the 777 potential AI risks the repository documents, the separate ‘AI Incident Database’ now includes over 3,000 real-world instances where AI systems have caused or nearly caused harm.
The database overview explains the three parts of the AI Risk Repository:
- The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
- The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
- The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
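For the technically inclined, the repository is distributed as a spreadsheet, so an exported copy can be filtered programmatically. Here is a minimal sketch of pulling out one domain’s risks; the filename and column names ("Domain", "Subdomain", "Title") are illustrative assumptions, not the repository’s confirmed headers:

```python
import csv

# Load a hypothetical CSV export of the AI Risk Database.
# The filename and column headers below are assumptions; check the
# actual export from the MIT repository for the real names.
with open("ai_risk_repository.csv", newline="", encoding="utf-8") as f:
    risks = list(csv.DictReader(f))

# Filter to one of the seven domains, e.g. "Misinformation".
misinformation = [r for r in risks if r.get("Domain") == "Misinformation"]

print(f"{len(misinformation)} of {len(risks)} risks fall under Misinformation")
for r in misinformation[:5]:
    # Each entry in the database carries a quote and page number
    # from the framework it was extracted from.
    print(r.get("Subdomain"), "-", r.get("Title"))
```

A firm could run the same kind of filter per client matter, handing over only the subdomains relevant to the work at hand rather than the full list.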
The ILTA Conference ended just last week, and an attendee told me that one session asked for a show of hands from those currently using Gen AI; fewer than 50% raised their hands. It would appear that many are still just kicking the tires on vendor products.
How can I use the Repository?
The AI Risk Repository provides:
- An accessible overview of the AI risk landscape.
- A regularly updated source of information about new risks and research.
- A common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators.
- A resource to help develop research, curricula, audits, and policy.
- An easy way to find relevant risks and research.
HERE’S THE THING: Let’s consider what having a RISK database NOW means for your law firm.
I’ve been told that clients are making it a condition of their outside counsel guidelines that firms disclose all the AI, of any kind, that they are currently using and identify any risks. From the inception of the discussion about Gen AI, it has been assumed that firms using it should disclose that usage to clients and get their consent. But consent must be informed!
So imagine now giving your client a list of 700+ risks and then asking for consent. Conversely, imagine not disclosing any of those risks and receiving consent. Now what happens if something goes wrong, and your client learns that you were using Gen AI in that litigation or transaction? Do you think they will not sue your firm, claiming they were not FULLY informed of the risks?
Even giving clients a list of 700+ risks may not protect you. When something goes wrong, the client will drill down on the one risk that materialized. And you should expect that client to argue that your firm did not FULLY discuss and disclose what that risk really meant in practical application for the matter you were responsible for handling.
This MIT team combed through peer-reviewed journal articles and preprint databases that detail AI risks. The most common risks centered on:
- AI System Safety and Robustness (76%);
- Unfair Bias and Discrimination (63%); and
- Compromised Privacy (61%).
The database also shows the majority of risks from AI are identified only after a model becomes accessible to the public.
TRUTH BOMB: Just 10% of the risks studied were spotted BEFORE deployment!
The seven risk domains in the taxonomy are:
- Discrimination/Toxicity;
- Privacy/Security;
- Misinformation;
- Malicious actors/Misuse;
- Human-computer interaction;
- Socioeconomic/Environmental;
- AI system safety, failures/limitations.
As a member of the ‘MIT Technology Review Global Insights Panel’, I obtained a copy of The AI Risk Repository report, documenting 777 potential risks Advanced AI systems could pose. “What our database is saying is, the range of risks is substantial, not all of which can be checked ahead of time,” says Neil Thompson, director of MIT Future Tech. Therefore, you should monitor models after they launch and regularly review the risks they present post-deployment.
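What might post-deployment monitoring look like in practice? One lightweight approach is to log every Gen AI interaction alongside the risk domains it might touch, so incidents can later be reviewed against the taxonomy. Below is a minimal sketch; the keyword triggers, log format, and file names are my own illustrative assumptions, not anything prescribed by the MIT repository:

```python
import json
import time

# Crude, illustrative mapping from two of the repository's seven domains
# to keyword triggers; a real deployment would use proper classifiers
# and cover all seven domains.
DOMAIN_TRIGGERS = {
    "Misinformation": ["citation", "case law", "statistic"],
    "Privacy/Security": ["client name", "ssn", "confidential"],
}

def log_interaction(prompt: str, response: str, path: str = "genai_audit.log") -> None:
    """Append a reviewable record of each Gen AI interaction."""
    flags = [
        domain
        for domain, words in DOMAIN_TRIGGERS.items()
        if any(w in prompt.lower() or w in response.lower() for w in words)
    ]
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "response": response,
        "flagged_domains": flags,  # entries to review post-deployment
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Even a simple audit trail like this gives a firm something concrete to review when a client asks what risks actually materialized in their matter.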
A recent Goldman Sachs research report states there is “no evidence yet that the tech companies making generative AI – or the companies using it – had seen any noticeable increases in revenue despite the spending.” Meanwhile, AI capital spending is expected to exceed $1 trillion US in the coming years.
The full text report can be read here – Goldman Sachs Report – Gen AI: Too Much Spend, Too Little Benefit? Issue 129, June 25, 2024, Global Macro Research:
“Investors should consider this report as only a single factor in making their investment decision. Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it. So, will this large spend ever pay off? MIT’s Daron Acemoglu and GS’ Jim Covello are skeptical, with Acemoglu seeing only limited US economic upside from AI over the next decade and Covello arguing that the technology isn’t designed to solve the complex problems that would justify the costs, which may not decline as many expect. But GS’ Joseph Briggs, Kash Rangan, and Eric Sheridan remain more optimistic about AI’s economic potential and its ability to ultimately generate returns beyond the current “picks and shovels” phase, even if AI’s “killer application” has yet to emerge. And even if it does, we explore whether the current chips shortage (with GS’ Toshiya Hari) and looming power shortage (with Cloverleaf Infrastructure’s Brian Janous) will constrain AI growth. But despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst…”
Also, from my friend Diane Brady, Executive Director at Fortune: “…the number of Fortune 500 companies citing AI as a business risk is up 473.5% since 2022.”
And Arize AI, which helps companies better deploy AI, noted that 281 companies cited AI as a risk factor in their latest annual reports.
This article began as an exchange on LinkedIn, so please feel free to continue the conversation with me there.