Responsible AI in Financial Markets: CFTC Report Summary

Artificial intelligence (AI) has secured a permanent place in financial markets and fintech as its benefits have become clear. Alongside those benefits, however, significant risks remain, centered on responsibility, accountability, and supervision.

Are fintech AI applications responsible enough?

Image: Gerard Siderius (Unsplash)

Responsible AI

IBM defines responsible artificial intelligence (AI) as a set of principles that help guide the design, development, deployment, and use of AI—building trust in AI solutions that have the potential to empower organizations and their stakeholders. “Responsible AI involves the consideration of a broader societal impact of AI systems and the measures required to align these technologies with stakeholder values, legal standards, and ethical principles. Responsible AI aims to embed such ethical principles into AI applications and workflows to mitigate risks and negative outcomes associated with the use of AI while maximizing positive outcomes.”

A new report prepared by the Technology Advisory Committee of the Commodity Futures Trading Commission (CFTC) explores the question of responsible AI in the context of financial services in more depth. It examines definitions and frameworks for responsible AI use, current regulatory environments, and the potential applications and implications of AI within CFTC-regulated entities. 

Let’s take a look.

CFTC AI Report Details

The Responsible AI Report (Responsible Artificial Intelligence in Financial Markets: Opportunities, Risks & Recommendations: A Report of the Subcommittee on Emerging and Evolving Technologies, Technology Advisory Committee of the U.S. Commodity Futures Trading Commission) was published on May 2, 2024.

The report was commissioned by Commissioner Christy Goldsmith Romero and sponsored by Scott W. Lee, Senior Counsel and Policy Advisor, Office of Commissioner Goldsmith Romero, CFTC; Dr. Nicol Turner Lee, Co-Chair, Subcommittee on Emerging and Evolving Technologies, TAC; Todd Smith, Co-Chair, Subcommittee on Emerging and Evolving Technologies, TAC; and Anthony Biagioli, Designated Federal Officer, CFTC.

The report describes AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments, abstract such perceptions into models through analysis in an automated manner, and use model inference to formulate options for information or action.

CFTC AI Report Summary

The report highlights the evolving nature of AI technology and its rapid adoption across the financial sector, emphasizing both the opportunities and risks this presents. The potential of AI to improve processes such as fraud detection, customer relationship management, and risk assessment is counterbalanced by challenges, including ensuring fairness, transparency, and accountability in AI-driven decisions.

According to the report, 99% of financial services leaders said their firms were deploying AI in some capacity in 2023, and these emerging and evolving technologies have the potential to support a variety of use cases related to financial services.

To address the AI-driven challenges in the financial sector—namely market instability, privacy concerns, and gaps in governance and supervision—the study suggests a structured approach to integrating AI within the regulatory framework of the CFTC. This includes enhancing transparency, fostering discussions about responsible AI use, and creating a detailed AI risk management framework aligned with guidelines from the National Institute of Standards and Technology (NIST).

The report calls for active engagement from all stakeholders within the CFTC’s purview to ensure that the deployment of AI technologies advances with a keen awareness of both the technological potential and the ethical implications. This engagement would include roundtable discussions to clarify AI’s role and impact, the establishment of clearer regulatory guidelines, and a concerted effort to enhance the internal capabilities of CFTC staff to navigate the complexities of AI.

To move forward responsibly, the report emphasizes the need for an inventory of existing AI applications and a gap analysis to assess and mitigate risks. It advocates for cooperative alignment with other federal agencies to unify AI policy approaches and enhance overall regulatory efficacy. The ultimate goal is to foster a regulatory environment that supports innovation while protecting market integrity and consumer interests through careful oversight and proactive governance of AI technologies.

Specific Responsible AI Recommendations

The report outlines a five-step program to develop a framework that fosters safe, trustworthy, and responsible AI systems.

  1. Hosting a public roundtable discussion and having CFTC staff directly engage in outreach with CFTC-registered entities to seek guidance and gain additional insights into the business functions and types of AI technologies most prevalent within the sector.

  2. Defining and adopting an AI Risk Management Framework (RMF) for the sector, in accordance with the guidelines and governance aspects of the National Institute of Standards and Technology (NIST), to assess the efficiency of AI models and potential consumer harms as they apply to regulated entities, including but not limited to governance issues.

  3. Creating an inventory of existing AI-related regulations in the sector and using it to develop a gap analysis of the potential risks associated with AI systems, in order to assess compliance and identify opportunities for further dialogue on their relevancy, potential clarifying staff guidance, or potential rulemaking.

  4. Establishing a process to align the agency’s AI policies and practices with those of other federal agencies, including the SEC, the Treasury, and other agencies concerned with the financial stability of markets.

  5. Engaging staff as both observers and potential participants in ongoing domestic and international dialogues around AI and, where possible, establishing budget supplements to build the internal capacity of agency professionals around necessary technical expertise to support the agency’s endeavors in emerging and evolving technologies.

“Without appropriate industry engagement and relevant guardrails (some of which have been outlined in existing national policies), potential vulnerabilities from using AI applications and tools within and outside the CFTC could erode public trust in financial markets, services, and products.” - CFTC

The full report can be downloaded from the CFTC’s website.
