This semi-monthly column highlights news, government documents, NGO/IGO papers, industry white papers and reports, academic papers, conferences, and speeches on AI’s fast-paced impact on the banking and finance sectors. Links are provided chronologically to the primary sources and, where available, to free alternate versions.
NEWS:
The Daily Upside, October 14, 2024. Mastercard Blockchain Patent Could Make Financial Audits Easier. Mastercard wants to set the record straight. The credit card giant filed a patent application for “transaction processing with complete cryptographic auditability.” This adds a layer of trackability and transparency to transactions using a blockchain ledger. Mastercard’s tech aims to give a user or business auditability of all their transactions in the event of a dispute, as well as “eliminate or strongly mitigate the possibility of the record being tampered with.” When a transaction is requested between two parties, a “third party” moderator is used to verify the details, and a digital signature is generated as a means of verification. The server that’s processing the transaction then creates its own digital signatures, adding a layer of cryptographic verification to ensure the transaction’s validity. Once the transaction is confirmed, both the digital signatures and the details of the transaction itself are stored on a blockchain, ensuring that the record is secure, tamper-proof and immutable. Mastercard is far from new to the blockchain landscape. The company has filed many patents that use blockchain both for cryptocurrency and for other use cases, such as ticket tracking and fraud detection. It’s keeping pace with fintech and finance firms like PayPal, Visa, and JPMorgan Chase, all of which have filed patents for blockchain-related tech.
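The flow described in the filing — a moderator signature, a server signature, and a tamper-evident ledger entry — can be sketched in a few lines. This is a toy illustration, not Mastercard’s design: HMAC stands in for real asymmetric digital signatures, a Python list stands in for a distributed ledger, and all key names and fields are invented.

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    # HMAC stands in for an asymmetric digital signature in this sketch
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def block_hash(body: dict) -> str:
    # canonical JSON so the hash is deterministic
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, tx, moderator_key, server_key):
    # the moderator verifies and signs the details; the processing
    # server adds its own signature, then the record is chained
    block = {
        "tx": tx,
        "moderator_sig": sign(moderator_key, tx),
        "server_sig": sign(server_key, tx),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def chain_valid(chain) -> bool:
    # any edit to a stored transaction breaks its block hash;
    # any replaced block breaks the next block's prev_hash link
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"from": "acct_A", "to": "acct_B", "amount": 100}, b"moderator-key", b"server-key")
append_block(chain, {"from": "acct_B", "to": "acct_C", "amount": 50}, b"moderator-key", b"server-key")
```

Altering any stored amount after the fact invalidates the chain, which is the auditability property the patent application describes.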
Washington Post Live, October 24, 2024. How technology is reshaping the future of money and finance. Artificial intelligence, online lending, cryptocurrencies and other emerging technologies are reshaping the future of finance and payments. On Thursday, Oct. 24 at 4:00 p.m. ET, innovators, investors and experts join Washington Post Live to assess the global payments landscape and the future of money.
The Daily Upside, October 10, 2024. Intel wants to give its AI a thorough inspection. The company filed a patent application for a method to “verify the integrity of a model.” Intel’s system aims to ensure that machine learning models remain secure after they’ve been deployed. “After the model is deployed to the cloud service provider, an attacker can tamper with the model so that the model that is distributed to the implementing devices is not the same as the trained model,” Intel said in the filing. Intel’s method puts AI models through two phases of evaluation: one performed offline, and another performed online in a “trusted execution environment.” In the offline phase, a model is fed a number of inputs meant to “excite” its neurons, comprehensively testing its capabilities. The outputs the model creates are then stored as a reference. In the online phase, those same inputs are fed to the model again, and its outputs are compared to the reference responses from the offline phase. If the values don’t match, Intel noted, there’s a chance the model has been tampered with. Since AI models often have multiple layers, Intel’s filing notes that the system uses a tactic called “layer-specific probing” for additional security: the test is performed on the model layer by layer to make sure each one works as it should and no security flaws slip through the cracks. Intel’s tech hits on a growing issue as AI adoption continues its meteoric rise: observing models after deployment to make sure they don’t slip up. “This is what CIOs should be thinking about – that they need to have a trusted environment for AI,” said Brian Jackson, principal research director at Info-Tech Research Group. These capabilities may be useful for testing more than just tampering, Jackson noted; they could help keep a close eye on model degradation, such as when AI starts to hallucinate or exhibit bias.
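The two-phase check described in the filing can be illustrated with a toy model. This is not Intel’s implementation: it uses a tiny dense network in plain Python and fingerprints each layer’s outputs on fixed probe inputs (the offline phase) so they can be re-verified layer by layer after deployment (the online phase). All names and shapes here are hypothetical.

```python
import hashlib
import json

def layer_forward(weights, x):
    # one dense layer: y_j = sum_i W[j][i] * x[i] (no bias or activation, for brevity)
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

def fingerprint(outputs) -> str:
    # hash the layer outputs so the stored reference is compact and comparable
    return hashlib.sha256(json.dumps(outputs).encode()).hexdigest()

def offline_phase(model, probes):
    # run probe inputs through the model, recording one fingerprint per layer
    refs, activations = [], probes
    for weights in model:
        activations = [layer_forward(weights, x) for x in activations]
        refs.append(fingerprint(activations))
    return refs

def online_phase(model, probes, refs):
    # replay the probes layer by layer; a mismatch flags possible tampering
    activations = probes
    for layer_idx, weights in enumerate(model):
        activations = [layer_forward(weights, x) for x in activations]
        if fingerprint(activations) != refs[layer_idx]:
            return False, layer_idx  # layer-specific probing pinpoints the layer
    return True, None

# a 2-layer toy model and two probe inputs
model = [
    [[1.0, 0.0], [0.0, 1.0]],  # layer 0: 2x2
    [[0.5, 0.5]],              # layer 1: 1x2
]
probes = [[1.0, 2.0], [3.0, 4.0]]
refs = offline_phase(model, probes)
```

Rerunning `online_phase` after any weight changes returns `False` along with the index of the first layer whose outputs diverge, mirroring the filing’s layer-by-layer verification.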
The Office of the Comptroller of the Currency (OCC) is soliciting academic research papers on the use of artificial intelligence in banking and finance for submission by December 15, 2024. The OCC will invite authors of selected papers to present to OCC staff and invited academic and government researchers at OCC Headquarters in Washington, D.C., on June 6, 2025. Authors of selected papers will be notified by April 1, 2025, and will have the option of presenting their papers virtually. Interested parties are invited to submit papers to [email protected]. Submitted papers must represent original and unpublished research. Those interested in acting as a discussant may express their interest in doing so in their submission email. Additional information about submitting a research paper and participating in the June meeting as a discussant is available below and on the OCC’s website.
PAPERS:
Theorizing with Large Language Models. NBER Working Paper 33033. DOI 10.3386/w33033. Large Language Models (LLMs) are proving to be a powerful toolkit for management and organizational research. While early work has largely focused on the value of these tools for data processing and replicating survey-based research, the potential of LLMs for theory building is yet to be recognized. We argue that LLMs can accelerate the pace at which researchers can develop, validate, and extend strategic management theory. We propose a novel framework called Generative AI-Based Experimentation (GABE) that enables researchers to conduct exploratory in silico experiments that can mirror the complexities of real-world organizational settings, featuring multiple agents and strategic interdependencies. This approach is unique because it allows researchers to unpack the mechanisms behind results by directly modifying agents’ roles, preferences, and capabilities, and asking them to reveal the explanations behind decisions. We apply this framework to a novel theory studying strategic exploration under uncertainty. We show how our framework can not only replicate the results from experiments with human subjects at a much lower cost, but can also be used to extend theory by clarifying boundary conditions and uncovering mechanisms. We conclude that LLMs possess tremendous potential to complement existing methods for theorizing in strategy and, more broadly, the social sciences.
12 Best Practices for Leveraging Generative AI in Experimental Research. NBER Working Paper 33025. DOI 10.3386/w33025. We provide twelve best practices and discuss how each practice can help researchers accurately, credibly, and ethically use Generative AI (GenAI) to enhance experimental research. We split the twelve practices into four areas. First, in the pre-treatment stage, we discuss how GenAI can aid in pre-registration procedures, data privacy concerns, and ethical considerations specific to GenAI usage. Second, in the design and implementation stage, we focus on GenAI’s role in identifying new channels of variation, piloting and documentation, and upholding the four exclusion restrictions. Third, in the analysis stage, we explore how prompting and training set bias can impact results as well as necessary steps to ensure replicability. Finally, we discuss forward-looking best practices that are likely to gain importance as GenAI evolves.
APT or “AIPT”? The Surprising Dominance of Large Factor Models. NBER Working Paper 33012. DOI 10.3386/w33012. We introduce artificial intelligence pricing theory (AIPT). In contrast with the APT’s foundational assumption of a low dimensional factor structure in returns, the AIPT conjectures that returns are driven by a large number of factors. We first verify this conjecture empirically and show that nonlinear models with an exorbitant number of factors (many more than the number of training observations or base assets) are far more successful in describing the out-of-sample behavior of asset returns than simpler standard models. We then theoretically characterize the behavior of large factor pricing models, from which we show that the AIPT’s “many factors” conjecture faithfully explains our empirical findings, while the APT’s “few factors” conjecture is contradicted by the data.
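To see how a model with many more factors than observations can be fit at all, note that ridge regression can be solved in its dual (kernel) form, where only the N×N Gram matrix of the observations matters, even when the factor count P vastly exceeds N. The following is a toy, pure-Python illustration of that mechanic under invented dimensions and data; it is not the paper’s estimator.

```python
import random

def matmul(A, B):
    # plain nested-list matrix multiply
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting; A is n x n, b has length n
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(0)
N, P = 8, 200  # far more factors (P) than observations (N)
X = [[random.gauss(0, 1) for _ in range(P)] for _ in range(N)]
y = [sum(x[:3]) for x in X]  # true signal uses only a few of the factors

# dual-form ridge: alpha = (X X' + z I)^{-1} y, so only an N x N solve is needed
z = 1e-3
G = matmul(X, [list(col) for col in zip(*X)])  # N x N Gram matrix
K = [row[:] for row in G]
for i in range(N):
    K[i][i] += z
alpha = solve(K, y)

# despite P >> N, the fit interpolates the training data almost exactly
yhat = [sum(g * a for g, a in zip(row, alpha)) for row in G]
max_err = max(abs(p - t) for p, t in zip(yhat, y))
```

This only demonstrates that over-parameterized fitting is tractable and interpolates in-sample; the paper’s substantive claim, which this sketch does not test, concerns out-of-sample performance at scale.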
Panigrahi, Ashok, Artificial Intelligence (AI) and Management Accountants (MA) – Future of AIMA Model (September 25, 2024). WIRC Bulletin of ICAI, September 2024, pages 8-10, Vol. 52, Issue 9, ISSN 2456-4982, Available at SSRN: https://ssrn.com/abstract= Today Artificial Intelligence (AI) is a buzzing topic in several contemporary interdisciplinary studies, especially in accounting and finance. This disruptive technology has radically changed the role played by accounting professionals, reducing routine tasks and enhancing their strategic role in the company. With respect to managerial accounting, AI can help management accountants work faster and more accurately, freeing time for strategic thinking. Key areas where management accountants may apply AI in the future include strategic decision-making, forecasting and budgeting, predicting future trends and patterns, and detecting and preventing fraud and errors in financial records, protecting both the company’s finances and its reputation. Hence it is high time for management accountants to embrace Artificial Intelligence and leverage its tools for their profession.
NGOs/IGOs:
The adoption of artificial intelligence in the financial sector presents significant opportunities for efficiency and value creation, but it also introduces potential risks that must be addressed.