Cooperation and Compliance: Navigating Artificial Intelligence at the Securities Enforcement Forum
On May 23, 2024, the Securities Enforcement Forum West debuted its first-ever panel on the impact of artificial intelligence (“AI”) on securities enforcement, regulation, compliance, and practice, signaling an increased focus on the fast-evolving technology.
The conference brought together Securities and Exchange Commission (“SEC”) and Department of Justice (“DOJ”) officials, securities enforcement and white-collar attorneys, and other professionals in the field in Silicon Valley to hear Q&A sessions with SEC Enforcement Director Gurbir Grewal, U.S. Attorney Ismail Ramsey, SEC Regional Directors Kate Zoladz and Monique Winkler, and others. The panelists covered a variety of topics, including trends in SEC and DOJ enforcement investigations and notable actions from the past year. Chief among the highly topical sessions was the conference’s first Q&A discussion on AI’s impact on securities enforcement, regulation, compliance, and practice.
AI Presents Old and New Benefits for Regulators
“AI is not a new phenomenon to regulators,” explained the panelists in the afternoon session on AI. The SEC has long employed AI and other data analytics tools to predict rule violations and detect potential investor misconduct. One such example is the Enforcement Division’s use of the Advanced Relational Trading Enforcement Metrics Investigation System (“ARTEMIS”) to identify suspicious trading activity across its database of over 6 billion equities and options trading records. As we have previously reported, the technology has already boosted the investigative efforts of the Enforcement Division’s Market Abuse Unit’s (“MAU”) Analysis and Detection Center, including by successfully identifying nine individuals across three separate insider trading schemes. Charges in all three schemes were brought in a single day and collectively involved $6.8 million in alleged ill-gotten gains.
Another key technological development within the SEC’s Division of Enforcement is the use of risk-based data analytics through its Earnings Per Share (“EPS”) Initiative to identify potential manipulators of publicly disclosed earnings per share data. Indeed, the SEC’s 2023 enforcement activity shows an increase in the number of enforcement actions involving issuer reporting or accounting and auditing issues since the EPS Initiative’s launch in 2018. As this rapidly developing technology grows more sophisticated, there is no doubt that AI will continue to enhance the SEC’s market surveillance and enforcement efforts. According to Enforcement Director Grewal, given “our improved use of data analytics” and the successes of the Commission’s other initiatives, “it’s really no longer a question of if we’ll find out about a violation, but often when.”
AI Disclosure Obligations for Companies
While the benefits and efficiencies of AI are undeniable, there are challenges and potential missteps associated with the increased use of the technology by regulated entities, including public companies and private companies raising investor funds. At the panel, the SEC staff discussed SEC disclosure requirements and warned against overstating a firm’s AI capabilities and understating AI-related risks to the public.
Akin to the practice of “greenwashing,” “AI washing” is the marketing tactic of exaggerating a firm’s purported use of AI to generate buzz among investors. The panelists highlighted two recent first-of-their-kind settled enforcement actions against investment advisers that engaged in AI washing by making false and misleading statements about their purported use of the technology. In the first action, filed on March 18, 2024, the SEC alleged that from at least August 2019 to August 2023, Delphia (USA) Inc. falsely represented in its investor brochures, in a press release, and on its website that it used AI and machine learning to analyze its retail clients’ spending and social media data to inform its investment advice. Among its false and misleading claims, Delphia advertised itself as “the first investment adviser to convert personal data into a renewable source of investable capital . . . that will allow consumers to invest in the stock market using their personal data.” In reality, the SEC found, Delphia had not developed the represented capabilities. The second action, also filed on March 18, 2024, alleged that Global Predictions, Inc., also a registered investment adviser, made similar false and misleading statements regarding its AI use on its website and social media accounts in 2023. For example, Global Predictions falsely claimed to be the “first regulated AI financial advisor” that produced “AI-driven forecasts.” Without admitting or denying the SEC’s findings, Delphia and Global Predictions consented to the entry of orders and agreed to be censured, to cease and desist from further violations, and to pay civil penalties of $225,000 and $175,000, respectively.
The Forum’s AI panel also cautioned against under-disclosing AI-related risks. In particular, companies should monitor and accurately disclose any material risks AI poses to their operations and competitive position, ensuring that the public is adequately informed. Previous statements from the SEC underscore the Commission’s long-standing emphasis on accurate and transparent public disclosures concerning AI use. As SEC Chair Gary Gensler stated in a February 2024 speech, any “[c]laims about [AI] prospects should have a reasonable basis,” and any disclosures about material risks should be “particularized to the company, not from boilerplate language.”
Takeaways
Taking the time and care to understand the possibilities and limitations of AI before incorporating the technology into a business model is an important first step toward effective compliance and practice. The SEC has repeatedly emphasized its focus on public disclosures concerning AI, and companies must strike the delicate balance between highlighting the benefits of AI use and remaining transparent about its limitations and risks. Companies should work closely with counsel to carefully scrutinize AI marketing, avoid boilerplate statements in public filings, and accurately disclose material information to the public and regulators.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.