Senate Bill Kills SEC Rule on AI Conflicts of Interest

Paresh Jadhav


In July, the Securities and Exchange Commission proposed a rule that would require broker-dealers and investment advisers using artificial intelligence or predictive data analytics to “eliminate or neutralize” conflicts of interest that arise from those technologies. The proposal contends that disclosure alone does not suffice.

Industry stakeholders, however, have voiced concerns over the proposed rule, asserting that existing causes of action already address the conflicts it targets.

The bill’s authors say it’s a response to the SEC’s proposed rule

The aim of the bill, according to its authors, is to prevent the SEC from misusing its new AI rules to regulate fundamental tools used by broker-dealers and investment advisers (or “Firms”). They suggest that the proposed definition of covered technology is too broad, potentially forcing firms to revise their business processes, which could prove costly and hamper innovation.

Additionally, they assert that the SEC already possesses the power to address conflicts of interest through existing enforcement authority. If a conflict arises, they argue, whether or not AI is involved, the SEC should take appropriate action immediately.

Lawmakers are concerned that the SEC’s rulemaking on AI conflicts of interest has gone too far. Under the proposal, firms would need to eliminate or neutralize conflicts associated with predictive data analytics and other technologies, which lawmakers say would duplicate requirements found elsewhere, such as Regulation Best Interest and the fiduciary standard for investment advisers.

The bill’s text

This bill seeks to address potential threats to election integrity and combat misinformation by mandating that AI-generated content be watermarked, so people can verify whether a given piece of content was produced by an artificial intelligence system. It would also protect against misuse by third parties looking to use these systems to steal copyrighted works or personal information from individuals.

The SEC proposal, entitled “Conflicts of Interest Associated With Predictive Data Analytics Used by Broker-Dealers and Investment Advisers,” would require firms to assess whether their use of predictive analytics and other artificial intelligence (AI) technologies might result in conflicts that put their own interests ahead of investors’. Firms would then need to eliminate or neutralize such conflicts, and implement policies, procedures, and recordkeeping to manage them effectively.

During a public discussion, SEC Chair Gensler warned securities industry firms not to engage in “AI-washing,” or making false or misleading claims about AI. Such practices, he said, could damage investor confidence in financial systems.


The bill’s sponsors

The proposed rules would establish a new cause of action against securities industry firms that make misrepresentations or withhold material information about their use of AI technologies. Each firm would be required to adopt written policies and procedures detailing how it uses these technologies to identify, mitigate, or avoid conflicts of interest, and to keep records demonstrating compliance.

Investment advisers and broker-dealers find novel generative AI tools useful for improving compliance systems, refining investment advice processes, monitoring for financial crime, and responding to customer inquiries. Yet Chair Gensler cautioned that such systems could pose risks, such as making false assumptions about investors or steering them toward the firm’s own products.

He has specifically called for new rules to combat AI-washing, and regulators across the federal bureaucracy are actively exploring the topic. Proposals include mandating scorecards and “nutrition labels” for AI models, compensating content creators whose works are used by AI models, and watermarking AI-generated content so it can be identified.

The bill’s impact

Proponents of the bill argue that securities laws already permit the SEC to bring actions against firms whose AI technologies exhibit bias or misstate facts, citing recent SEC enforcement actions involving AI.

Under the proposed regulations, firms would need to evaluate any use, or reasonably foreseeable potential use, of covered technologies and identify any related conflicts of interest. Firms would then have to eliminate or neutralize such conflicts whenever they find, or reasonably should find, that a technology puts the firm’s interests ahead of investors’.

Complying with these requirements would place significant burdens on companies using AI technologies in their business, potentially forcing them to rewrite processes, stifling innovation, and delaying the adoption of useful new tools that investors would find beneficial.
