This semi-monthly column by Sabrina I. Pacifici highlights news, government reports, industry white papers, academic papers and speeches on the subject of AI’s fast-paced impact on the banking and finance sectors. Links are provided chronologically to the primary sources and, where available, to free alternate versions. Each entry includes the publication name, date published, article title and abstract.
FT.com, October 15, 2023. Gary Gensler urges regulators to tame AI risks to financial stability [free to read]. SEC head warns reliance on a few data models could unleash a financial crisis within a decade.
Yahoo Finance. Chicago AI Conference: Reshaping the Future of Finance with AI in Chicago, October 14, 2023. Never before has technology evolved as rapidly as it has over the past century. According to a McKinsey forecast, more than 70% of companies will adopt at least one AI solution; furthermore, its simulation showed that AI could generate additional growth of 1.2% of global GDP. AI is already faster than humans at uncovering insights from huge amounts of data, contributing to fraud detection and cancer diagnosis. However, beneath the surface of this futuristic promise lies a dualistic reality. While AI offers great benefits, it also presents challenges and pitfalls. AI-driven bank assistants, hiring tools, and healthcare diagnostic systems have displayed alarming biases based on gender and race. With a vision to shape global AI agendas and foster public-private collaboration, the Chicago AI Conference is set to take place on October 26, 2023, in Chicago. Organized by the AI 2030 initiative and FinTech4Good, and co-hosted by partners including the Chicago AI Council, Star Consulting, and Evolving Summit, this event brings together industry leaders, innovators, and experts from around the globe to discuss the latest technology advancements, cutting-edge use cases, responsible artificial intelligence (AI) best practices, and real-world applications of AI in regulated industries such as financial services, insurance, payments, and healthcare.
Cryptopolitan via MSN, October 12, 2023. Warning from the Bank of England: AI Biases Could Imperil Financial Stability
Bank Underground, Kathleen Blake, Bank of England, October 11, 2023. Bias, fairness, and other ethical dimensions in artificial intelligence. Artificial intelligence (AI) is an increasingly important feature of the financial system, with firms expecting the use of AI and machine learning to increase by 3.5 times over the next three years. The impact of bias, fairness, and other ethical considerations is principally associated with conduct and consumer protection. But as set out in DP5/22, AI may create or amplify financial stability and monetary stability risks. I argue that biased data or unethical algorithms could exacerbate financial stability risks, as well as conduct risks. The term algorithm means a set of mathematical instructions that will help calculate an answer to a problem. The term model means a quantitative method that applies statistical, economic, financial or mathematical theories, techniques and assumptions to process input data into output data. Traditional financial models are usually rules-based with explicit fixed parameterisation, whereas AI models are able to learn the rules and alter model parameterisation iteratively. AI models have many benefits in the financial sector and can be used to help consumers better understand their financial habits and the best options available to them – for example, by automating actions that best serve customer interests, such as automatically transferring funds across accounts when a customer is facing overdraft fees.
Forbes, October 11, 2023. Picking AI’s Brain on Finance – Let’s tackle the issue of AI in finance. More specifically, we can ponder different outcomes that we might see on a future timeline, and apply those to what we’re doing in modern banking and elsewhere. Lisa Huang has some interesting thoughts on this. After starting at Goldman Sachs in 2008, she worked at Betterment on a robo-advisor program, and now works at Fidelity in the field of AI for asset and wealth management. Catching up with Huang at a recent event, we can see some of her expertise in using AI for presentations, along with some interesting insights on where things are going.
NDTV, October 11, 2023. Research Finds Artificial Intelligence Can Enhance Worker Productivity By Up To 35%. Overall findings show that generative AI used in collaboration with humans can significantly increase worker productivity and retention.
Bloomberg, October 6, 2023. US Warns EU’s Landmark AI Policy Will Only Benefit Big Tech – New US analysis studies economic, technical impact of EU law; EU Parliament’s proposal for generative AI is ‘vague,’ US says.
Bloomberg, October 5, 2023. Stability CEO Touts AI as a Weapon Against Inequality. On this episode of Exponentially With Azeem Azhar, Emad Mostaque argues open source technology will make AI a boon for the Global South.
American Banker, October 5, 2023. Federal Reserve Vice Chair for Supervision Michael Barr said generative artificial intelligence could lead to a cybersecurity “arms race” for banks. In a live-streamed, moderated discussion on cyber risk in the banking sector, Barr called cybersecurity a “top risk” that banks and regulators should be addressing proactively, especially in light of the rapid evolution of digital technology and related threats.
CIO, September 29, 2023. Should finance organizations bank on Generative AI? Finance and banking organizations are looking at generative AI to support employees and customers across a range of text and numerically-based use cases.
ECB, September 23, 2023. How tech is shaping banking supervision
NBER. Generative AI at Work. Erik Brynjolfsson, Danielle Li, Lindsey R. Raymond. Working Paper 31161. New AI tools have the potential to change the way workers perform and learn, but little is known about their impacts on the job. In this paper, we study the staggered introduction of a generative AI-based conversational assistant using data from 5,179 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14% on average, including a 35% improvement for novice and low-skilled workers but with minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the best practices of more able workers and helps newer workers move down the experience curve. In addition, we find that AI assistance improves customer sentiment, increases employee retention, and may lead to worker learning. Our results suggest that access to generative AI can increase productivity, with large heterogeneity in effects across workers.
NBER. Exporting the Surveillance State via Trade in AI. Martin Beraja, Andrew Kao, David Y. Yang & Noam Yuchtman. Working Paper 31676. DOI 10.3386/w31676. Issue Date September 2023. We document three facts about the global diffusion of surveillance AI technology, and in particular, the role played by China. First, China has a comparative advantage in this technology. It is substantially more likely to export surveillance AI than other countries, and particularly so as compared to other frontier technologies. Second, autocracies and weak democracies are more likely to import surveillance AI from China. This bias is not observed in AI imports from the US or in imports of other frontier technologies from China. Third, autocracies and weak democracies are especially more likely to import China’s surveillance AI in years of domestic unrest. Such imports coincide with declines in domestic institutional quality more broadly. To the extent that China may be exporting its surveillance state via trade in AI, this can enhance and beget more autocracies abroad. This possibility challenges the view that economic integration is necessarily associated with the diffusion of liberal institutions.
Machine Learning as a Tool for Hypothesis Generation
Jens Ludwig, University of Chicago and NBER
Sendhil Mullainathan, University of Chicago and NBER
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
Timothy F. Bresnahan, Stanford University
Does Human-Algorithm Feedback Loop Lead to Error Propagation? Evidence from Zillow’s Zestimate
Runshan Fu, New York University
Ginger Zhe Jin, University of Maryland and NBER
Meng Liu, Washington University in St. Louis
The Value of External Data for Digital Platforms: Evidence from a Field Experiment on Search Suggestions
Xiaoxia Lei, Shanghai Jiao Tong University
Yixing Chen, Notre Dame University