This semi-monthly column highlights news, government documents, NGO/IGO papers, industry white papers, academic papers, and speeches on AI’s fast-paced impact on the banking and finance sectors. The chronological links provided are to the primary sources, and, where available, to alternate free [unpaywalled] versions.
NEWS:
WSJ via Semafor – The internet is proving too limited for training artificial intelligence models. Bots like ChatGPT learn from billions of pieces of text scraped from books and sites like Wikipedia, but researchers say the supply of publicly available information will dry up in the next few years. That has forced some companies to turn to AI-generated “synthetic data,” The Wall Street Journal [read free – no paywall] reported in For Data-Guzzling AI Companies, the Internet Is Too Small: Firms such as OpenAI and Anthropic are working to find enough information to train next-generation artificial-intelligence models. But some experts have warned that this would essentially amount to digital “inbreeding” that could lead models to collapse and produce nonsense. ChatGPT maker OpenAI is reportedly looking into alternatives, including using transcriptions of public YouTube videos. “The biggest uncertainty,” one researcher said, “is what breakthroughs you’ll see.”
CFO, April 1, 2024. 55% of Business Leaders Concerned About the Safety of Future Bank Deposits: Report. The majority (77%) of depositors said they would be willing to give up a portion of their returns to a bank that guarantees the safety of their deposits. CFOs and finance executives have justifiable concerns about the safety of their deposits. Alongside the failures of Silicon Valley Bank, First Republic Bank, and Signature Bank in 2023, local banks have struggled to maintain strong balance sheets in 2024. Despite a rise in optimism about the global economy itself, these factors may leave CFOs worried about their local banks’ integrity and solvency. Those concerns are evident in recent survey data: according to financial services firm Ampersand’s recently released depositor’s priority survey report, more than half (55%) of the over 100 executives and business leaders surveyed said they were concerned about the safety of their company’s future bank deposits.
Bloomberg [unpaywalled], March 27, 2024. Generative AI Is Coming for Your Bank. Maybe. Big banks are scrambling to work out what to do with generative artificial intelligence: how to use it to make some of their people smarter or free others up for higher-value tasks, and how to ingest and process data more rapidly, speed up decision making, and cut costs. Every bank fears its competitors will get good at AI before it does. Bay Area venture capitalists have a different warning, though: lenders are missing the threat from everywhere else. “For banks, when I talk about AI, I tell them: What you should be worried about is what if it works?” Angela Strange, a general partner at Andreessen Horowitz, told me recently. Tech investors reckon that supersmart agents will soon upend the business model of traditional banks. What if AI in the hands of bank customers automates the dull work of shopping for the best rates on simple financial products at irresistible speeds? It probably won’t be long before generative AI bots can look after our money with utmost efficiency. Great for customers, but not so much for banks. But just because it’s possible doesn’t mean it will be done. There are some heavy caveats to this vision. For instance, it could have deeply detrimental effects on financial stability. I’ll return to these. But the direction of travel is real, and it tells us a lot about the challenges banks face, especially smaller ones.
FT.com [unpaywalled], April 1, 2024. US and UK sign landmark agreement on testing safety of AI. Allies reach world’s first bilateral deal as global governments seek to assess and regulate risks from emerging technology. See also BBC – AI Safety: UK and US sign landmark agreement
Bloomberg [unpaywalled], March 25, 2024. How AI Could Rebuild America’s Middle Class. What if new technology helps workers instead of hurting them? AI is an incredibly exciting space, provoking both great wonder and fear. One of the big worries, obviously, is: What will happen to everyone’s jobs? Will it make more people’s livelihoods obsolete, causing even greater inequality than we have now? On this episode, we speak with an economist who argues that this concern is not just misplaced, but exactly wrong. MIT’s David Autor — famous for his work on the China shock — argues that the last 40 years of advances in computer technology have been a major driver of inequality, but that AI should be seen as an entirely different paradigm. He argues that human work, aided by AI, will remove the premium captured by extremely high-paid, experienced professionals (like doctors or top lawyers) as their capabilities become more diffuse. He also discusses the policy choices government should be making to improve the odds that AI will prove societally beneficial. This transcript has been lightly edited for clarity.
Free Press Journal, March 19, 2024. Analysis: Artificial Intelligence And Finance – Are They Besties? The widespread use of AI raises concerns about data privacy, security, and transparency, necessitating robust regulatory oversight and ethical considerations.
American Banker, February 27, 2024. How payments executives are leading the AI revolution
PAPERS:
NBER, The Economics of Artificial Intelligence: Health Care Challenges. Ajay Agrawal, Joshua Gans, Avi Goldfarb and Catherine Tucker, editors. In sweeping conversations about the impact of artificial intelligence on many sectors of the economy, health care has received relatively little attention. Yet it seems unlikely that an industry representing nearly one fifth of the economy could escape the efficiency- and cost-driven disruptions of AI. The Economics of Artificial Intelligence: Health Care Challenges brings together contributions from health economists, physicians, philosophers, and scholars in law, public health, and machine learning to identify the primary barriers to entry for AI in the health care sector. Across original papers and wide-ranging responses, the contributors analyze four types of barriers: incentives, management, data availability, and regulation. They also suggest that AI has the potential to improve outcomes and lower costs. Understanding both the benefits of and barriers to AI adoption is essential for designing policies that will affect the evolution of the health care system.
NBER, Market Power in Artificial Intelligence. Working Paper 32270. DOI 10.3386/w32270. This paper surveys the relevant existing literature that can help researchers and policy makers understand the drivers of competition in markets that constitute the provision of artificial intelligence products. The focus is on three broad markets: training data, input data, and AI predictions. It is shown that a key factor in determining the emergence and persistence of market power will be the operation of markets for data that would allow for trading data across firm boundaries.
NBER, Scenarios for the Transition to AGI. Working Paper 32255. DOI 10.3386/w32255. We analyze how output and wages behave under different scenarios for technological progress that may culminate in Artificial General Intelligence (AGI), defined as the ability of AI systems to perform all tasks that humans can perform. We assume that human work can be decomposed into atomistic tasks that differ in their complexity. Advances in technology make ever more complex tasks amenable to automation. The effects on wages depend on a race between automation and capital accumulation. If automation proceeds sufficiently slowly, then there is always enough work for humans, and wages may rise forever. By contrast, if the complexity of tasks that humans can perform is bounded and full automation is reached, then wages collapse. But declines may occur even before if large-scale automation outpaces capital accumulation and makes labor too abundant. Automating productivity growth may lead to broad-based gains in the returns to all factors. By contrast, bottlenecks to growth from irreproducible scarce factors may exacerbate the decline in wages.
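For readers who want the abstract’s mechanism in symbols, here is a minimal task-based sketch in the spirit of the model described above; the notation is illustrative and is not taken from the working paper itself. Output is produced from a continuum of tasks indexed by complexity i in [0, N], where tasks up to an automation frontier I(t) can be performed by capital and the remainder require labor:

Y(t) = \left[ \int_0^{I(t)} k(i)^{\frac{\sigma-1}{\sigma}}\,di \;+\; \int_{I(t)}^{N} \ell(i)^{\frac{\sigma-1}{\sigma}}\,di \right]^{\frac{\sigma}{\sigma-1}}, \qquad w(t) = \frac{\partial Y(t)}{\partial L(t)}

In this stylized setup, the “race” is between the automation frontier I(t) and the capital stock K(t): if I(t) advances slowly relative to capital accumulation, labor stays scarce on the non-automated tasks and the wage w(t) can keep rising; if task complexity is bounded (N finite) and I(t) reaches N, labor is no longer essential to production and wages fall, which is the collapse scenario the authors describe.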
GOVERNMENT DOCUMENTS:
Treasury Releases Report on Managing AI-Specific Cybersecurity Risks in Financial Sector, March 28, 2024. The U.S. Department of the Treasury released a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector. The report was written at the direction of Presidential Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP) led the development of the report. OCCIP executes the Treasury Department’s Sector Risk Management Agency responsibilities for the financial services sector. “Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang. “Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.” In the report, Treasury identifies significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector. The report outlines a series of next steps to address immediate AI-related operational risk, cybersecurity, and fraud challenges.