
Securities Regulation Daily Wrap Up, AI NEWS—Groups press CFTC on risks of AI in financial markets, caution on overregulation (Apr 30, 2024)

Organizations Mentioned: Better Markets | Public Citizen

By Lene Powell, J.D.

Consumer groups pointed to risks from opaque systems operating at lightning speed without human intervention, while industry groups said the CFTC’s principles-based regulations have managed changes in technology before.


Accelerated use of “black box” artificial intelligence in financial markets could cause oversight challenges and market volatility, consumer groups warned in a comment letter to the CFTC. AI could also increase “herding” behavior and cybersecurity risk due to market dominance by a small number of AI technology companies. Better Markets exhorted the CFTC to address AI risks with “strong, targeted rules, aggressive enforcement, and ample expertise and resources.”

But industry groups FIA, SIFMA, and the U.S. Chamber of Commerce urged the CFTC to exercise regulatory restraint, saying that financial markets have been using AI for some time and existing regulatory frameworks have successfully adapted to technological change in the past. The associations asked the CFTC to take a principles-based approach and engage further with market participants.

The feedback was in response to a CFTC request for comment on AI use and risks in CFTC-regulated markets. The CFTC is collecting information as part of a broader staff effort to monitor the adoption of AI in CFTC-regulated markets, including machine learning and other uses of automation.

The CFTC’s request followed an AI-related 2023 Executive Order issued by the White House, which encouraged federal agencies to consider using their authorities to protect American consumers from fraud, discrimination and threats to privacy and to address other risks that may arise from the use of AI.

“Black box.” According to Public Citizen, AI technology is a “black box” in which what happens within the computer model is not easily decipherable, explainable, or replicable by the model’s developers or users. This presents intrinsic challenges to both market participants and regulators in managing or setting limits on the use of AI with respect to accountability, governance, explainability, and other principles.

This lack of transparency is compounded because market participants are not providing much granular detail on how they are actually using AI, said Public Citizen. This might be due to hesitance to reveal proprietary information, a lack of standard definitions that apply across firms, hybrid use of machine and human-driven trading, or the relative lack of use cases or substantive information on use cases.

To provide transparency, Public Citizen believes that market participants that deploy AI for trading must be able to explain the nexus between inputs and outputs, between data entered and trades executed. Crucially, this must be replicable: outputs must be identical given the same inputs.

Volatility and manipulation. Opaque AI tools, such as sophisticated algorithms ungoverned by human intervention, can create market risks. Public Citizen pointed to the 2010 Flash Crash, in which the Dow Jones Industrial Average fell 600 points only to recover the loss within twenty minutes. According to the group, numerous studies have been conducted, but 14 years later there is still no authoritative, unquestioned explanation of this market event or of the role of computer-driven high-frequency trading.

“[L]ayer in AI, where HFT decisions may be even less subject to post-event scrutiny, and the regulators will have even less ability to understand the root problem,” Public Citizen wrote.

Better Markets similarly warned of AI-related systemic risks to market integrity, including manipulation and distortion of commodity pricing.

Market concentration. Another concern is dominance by a few large technology providers. Because AI requires prodigious computing resources and benefits greatly from scale, the technology lends itself to centralization. This is of special concern because financial markets already tend toward concentration. Accelerated use of AI in CFTC-regulated markets may further consolidate and concentrate those markets, which could harm smaller actors, said Public Citizen.

Public Citizen quoted Harvard Business School Professor Mihir A. Desai on the risk of concentration.

“Where AI has been pivotal (i.e., in financial markets), scale and speed appear to be the critical determinants of success. When technology and data come to dominate, winners keep winning and the ability to invest in technology and data is the key differentiator,” said Desai.

Regulatory restraint. Industry groups acknowledged that AI may bring new risks to financial markets, but believe this can be managed within existing regulatory and governance frameworks.

According to FIA, before acting to regulate AI use cases and/or perceived risks, the CFTC should consider its existing rules and guidance. Because the CFTC generally uses a principles-based framework focused on outcomes and use cases rather than on specific underlying technologies, it is likely that the CFTC’s existing rule sets already address perceived risks.

For example, when contemplating the use of AI in records management, FIA believes the CFTC should consider its existing recordkeeping requirements and whether there is a need to update them.

“[A]ny consideration of existing rules or other guidance should approach the issue from the perspective of the use-case, such as AI used to transcribe audio, within the context of existing rules relevant to that function, but not look to regulate the technology of AI itself,” wrote FIA.

According to FIA, the rapid evolution of technology would likely render any AI-specific regulation out of date in the near future. For example, the CFTC updated its recordkeeping provisions in 2017 to remove references to technologies like microfiche, a method of retaining documents that market participants had long since abandoned.

Avoid defining AI. FIA cautioned that any attempt to define AI properly would be very challenging and would require considerable resources. A comprehensive definition that differentiates AI from other technologies would be extremely difficult to develop, said FIA.

FIA pointed to the cautionary tale of proposed CFTC rules on automated trading, which were ultimately abandoned. Instead, the CFTC used a principles-based approach focused on key risks.

The U.S. Chamber of Commerce also warned against defining AI.

“A definition that is too broad risks encompassing other technologies that do not have the same risk profile as certain AI applications. Yet, a definition that is too narrow will likely become obsolete over time as AI continues to develop as a technology,” the Chamber wrote.

SIFMA suggested that if the CFTC does consider defining AI, it should follow a broadly accepted definition of AI developed by a standard-setting body, rather than creating its own definition.

Engage with market participants. FIA, SIFMA, and the Chamber recommended that the CFTC engage further with market participants and the public, for example through roundtables, before considering new rules or guidance.

MainStory: TopStory AINews CFTCNews ExchangesMarketRegulation FinancialIntermediaries InvestorEducation