AI in Finance and Banking, December 31, 2024

This semi-monthly column highlights news, government documents, NGO/IGO papers, conferences, industry white papers and reports, academic papers and speeches, and central bank actions on the subject of AI’s fast-paced impact on the banking and finance sectors. The links, presented chronologically, point to the primary sources and, where available, to alternate free versions.


NEWS:

Biggest banking tech stories: AI’s rise and BaaS collapse. The American Banker. December 31, 2024. This year’s compilation of the most important technology news in the banking industry includes a widespread data breach that impacted Bank of America, the downfall of banking-as-a-service, the rise of artificial intelligence and more.


Four predictions for AI in 2025 [unpaywalled] FT.com, December 31, 2024. …Will the stock market’s AI boom continue? With Big Tech in the midst of an AI race that its leaders believe will determine the future shape of their industry, one of the main forces behind the AI capital spending boom will remain in place. Also, as some companies start to claim big — if unproven — results from applying the technology in their own businesses, many others will feel they have to keep spending, even if they have not yet worked out how to use AI productively. Whether this is enough for investors to keep throwing their money at AI is another matter. That will depend on other factors, such as the stock market’s confidence in the deregulatory and tax-cutting intentions of the new Trump administration and the readiness of the Federal Reserve to continue with monetary policy easing. It all points to a highly volatile year, with some big corrections along the way. But with enough liquidity, Wall Street could succumb to AI hype for some time yet.


How AI could change the work of bank CEOs [unpaywalled] American Banker, December 19, 2024. Banks have long touted generative AI’s possible benefits for client service or internal operations, but financial institutions are now considering how the technology might help free up time for CEOs and their teams. Leaders of small financial institutions, particularly credit unions and community banks, are hoping generative AI will help them punch above their weight.


Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother. SecurityWeek. December 23, 2024. While AI tools can enable employees to be innovative and productive, significant data privacy risks can stem from their usage. Shadow IT is a fairly well-known problem in the cybersecurity industry: employees use unsanctioned systems and software as a workaround to bypass official IT processes and restrictions. Similarly, with AI tools popping up for virtually every business use case or function, employees are increasingly using unsanctioned or unauthorized AI tools and applications without the knowledge or approval of IT or security teams – a new phenomenon known as Shadow AI. Research shows that between 50% and 75% of employees are using non-company-issued AI tools, and the number of these apps is growing substantially. A visibility problem emerges: do companies know what is happening on their own networks? (A minimal log-scanning sketch illustrating one way to surface such usage appears after the risk list below.) According to our research, beyond the popular use of general AI tools like ChatGPT, Copilot and Gemini, another set of more niche AI applications being used at organizations includes:

  • Bodygram (a body measurement app)
  • Craiyon (an image generation tool)
  • Otter.ai (a voice transcription and note taking tool)
  • Writesonic (a writing assistant)
  • Poe (a chatbot platform by Quora)
  • HIX.AI (a writing tool)
  • Fireflies.ai (a note taker and meeting assistant)
  • PeekYou (a people search engine)
  • Character.AI (creates virtual characters)
  • Luma AI (3D capture and reconstruction).
Why Shadow AI Is A Major Cybersecurity Risk – Even though AI brings great productivity, Shadow AI introduces its own set of risks:
  • Data leakage: Studies show employees are frequently sharing legal documents, HR data, source code, financial statements and other sensitive information with public AI applications. AI tools can inadvertently expose this sensitive data to the public, leading to data breaches, reputational damage and privacy concerns (e.g., the Samsung source-code leak via ChatGPT).
  • Compliance risks: Feeding data into public platforms means that organizations have very little control over how their data is managed, stored or shared, and little knowledge of who has access to it or how it will be used in the future. This can result in non-compliance with industry and privacy regulations, potentially leading to fines and legal complications.
  • Vulnerabilities to cyberattacks: Third-party AI tools could have built-in vulnerabilities that a threat actor could exploit to gain access to the network, and these tools may not meet the security standards of an organization’s internal systems. Shadow AI can also introduce new attack vectors, making it easier for malicious actors to exploit weaknesses.
  • Lack of oversight: Without proper governance or oversight, AI models can produce biased, incomplete or flawed outputs that harm organizations. An employee using an unsanctioned tool might produce results that contradict those of official company systems, causing errors, inefficiencies, confusion and delays that prove costly for the business.
  • Legal risks: Unsanctioned AI might draw on intellectual property from other businesses, making the organization liable for any resulting copyright infringement. It could generate biased outcomes that violate anti-discrimination laws and policies, or produce erroneous results that are shared with customers and clients. In all of these cases, organizations could face penalties and be held liable for any resulting violations and damage.
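
To make the visibility question above concrete, here is a minimal sketch – not from the SecurityWeek article – of how a security team might begin surfacing possible shadow AI usage by scanning an exported DNS or web-proxy log for requests to known AI-tool domains. The CSV column names (client, domain), the file name proxy_log.csv, and the short domain watchlist are illustrative assumptions only; a real deployment would use the organization’s actual log schema and a maintained domain list.

import csv
from collections import Counter

# Illustrative watchlist of AI-tool domains (assumption, not an official list).
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
    "otter.ai": "Otter.ai",
    "writesonic.com": "Writesonic",
    "poe.com": "Poe",
    "fireflies.ai": "Fireflies.ai",
    "character.ai": "Character.AI",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per (client, tool) pair found in a CSV proxy/DNS log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            for watched, tool in AI_TOOL_DOMAINS.items():
                # Match the watched domain itself or any of its subdomains.
                if domain == watched or domain.endswith("." + watched):
                    hits[(row["client"], tool)] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical export; replace with the organization's own log source.
    for (client, tool), count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{client}\t{tool}\t{count} requests")

Hits from a script like this are a starting point for inventory and governance conversations, not a substitute for formal monitoring or policy.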

The Great Rewiring: Technology Trends Reshaping Financial Services. OPCO Insights. December 23, 2024. A major asset manager recently showed us their alpha generation system – sophisticated algorithms processing alternative data and market signals to drive investment decisions.


GOVERNMENT DOCUMENTS:

Department of the Treasury – Report on the Uses, Opportunities and Risks of Artificial Intelligence in the Financial Services Sector, December 2024. “This report provides background on the use of AI in financial services based on respondents’ comments and building on observations from previous Treasury reports and stakeholder engagement, highlights Treasury’s ongoing efforts to evaluate recent developments in AI, and summarizes key recommendations from respondent feedback. Next, the report details the respondents’ comments on current and potential AI use cases, along with the associated risks, opportunities, and proposed risk mitigation strategies. Finally, the report identifies policy considerations based on Treasury’s analysis of the AI RFI responses and lays out potential next steps to be considered by Treasury, government agencies, and the financial services sector, including:
1. Aligning definitions of AI models and systems applicable to the financial services sector to facilitate interagency collaboration and coordination with stakeholders;
2. Considering providing additional clarification on standards for data privacy, security, and quality for financial firms developing and deploying AI;
3. Considering expanding consumer protections to mitigate consumer harm;
4. Considering clarifying how to ensure uniform compliance with current consumer protection laws that apply to existing and emerging technologies and providing additional guidance to assist firms as they assess AI models and systems for compliance;
5. Enhancing existing regulatory frameworks and developing consistent federal-level standards to mitigate risks associated with potential regulatory arbitrage and conflicting state laws while clarifying supervisory expectations for financial firms developing and deploying AI; and
6. Facilitating domestic and international collaboration among governments, regulators, and the financial services sector and pursuing public-private partnerships to share information and best practices, promote consistency for standards, and monitor concentration risk.”


NGO/IGOs:

Artificial Intelligence, Dollar, Growth, and Debt Drove 2024 Blog Readership. Complex global economic challenges, and the careful policy responses that the world needs to foster sustainable growth and financial stability, were the most popular. IMF, December 30, 2024. The biggest issues confronting the global economy drew the broadest interest among IMF blog readers around the world this year. Artificial intelligence attracted the greatest interest. The US dollar’s role as the preeminent reserve currency, even after losing ground to nontraditional currencies in global foreign exchange reserves, was also a major focus. Prospects for economic growth amid high debt and rising geoeconomic fragmentation also earned some of the most audience attention, along with subjects ranging from housing affordability and private credit markets to reforms that enhance productivity.


Artificial Intelligence and tourism. OECD, December 18, 2024. The G7/OECD policy paper on Artificial Intelligence and tourism highlights the potential to harness AI as a tool to promote innovation and the sustainable development of tourism. It discusses the opportunities and risks AI brings, and what this means for tourists, businesses, destinations and governments. Key policy issues are identified, including the need to: i) put in place robust data and consumer protection measures as AI is used to create personalised tourist experiences; ii) monitor the impact on tourism jobs and protect and prepare workers, as AI is used to improve operational efficiency; and iii) support tourism businesses, and SMEs in particular, to keep pace with rapid AI developments and comply with evolving legal and regulatory frameworks, while fostering a dynamic environment for innovation.

Posted in: AI in Banking and Finance, Big Data, Cybercrime, Cybersecurity, Privacy, Technology Trends