DOJ Warns Against Unintelligent Use of Artificial Intelligence
The Department of Justice is stepping up its focus on artificial intelligence (“AI”), with officials warning that harsher penalties could be in store for those who deliberately misuse the technology to commit white-collar offenses, such as price fixing and market manipulation.
On March 7, 2024, the Department of Justice’s (“DOJ” or the “Department”) Deputy Attorney General (“DAG”), Lisa Monaco, delivered keynote remarks at the American Bar Association’s annual National Institute on White Collar Crime (the “Institute”). Among other notable topics, Monaco said DOJ is intensifying its focus on AI, warning that individuals and companies could face harsher penalties for intentionally using the technology to commit white-collar crimes, such as price fixing, fraud, or market manipulation.
Monaco said that when evaluating how effectively corporate compliance programs mitigate a company’s risks, DOJ will consider the extent to which those programs address AI risks, where applicable. To that end, she announced that DOJ’s Criminal Division would now incorporate the assessment of disruptive technology risks, including those posed by AI, into its policy for evaluating corporate compliance programs.
DOJ’s policy announcement demonstrates law enforcement officials’ desire to stay in front of the potential harms that could be associated with this rapidly developing technology. Federal prosecutors have long sought increased sentences for criminals whose behavior poses an especially serious risk to victims and the public. DOJ is now applying the same principle to AI. “Where AI is deliberately misused to make a white-collar crime significantly more serious,” Monaco said, “our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike.”
Monaco’s address is not the first time DOJ has publicly noted the evolving intersection of AI and criminal enforcement. U.S. Attorney General Merrick Garland recently said during an on-stage interview at the Institute that the growing use of artificial intelligence “accelerates” the threat of cyberattacks on companies, the government, the military, and the general public. And on February 14, 2024, Monaco delivered remarks at Oxford University in which she highlighted the Department’s efforts to combat AI’s threat to national security, as well as President Biden’s executive order on Safe, Secure, and Trustworthy AI.
What This Means For You
Companies should stay ahead of this emerging area in DOJ’s white-collar criminal enforcement portfolio by developing policies to prohibit and prevent misuse of AI. DOJ will soon consider how well a company manages the risks of AI technology as part of its evaluation of corporate compliance programs. Compliance officers should think through rules and guidelines that can be implemented to restrict how rogue employees could use AI to create problems for the company, such as algorithmic price fixing, AI-augmented fraud, and AI-assisted market manipulation that could send markets plunging. We will continue to monitor developments as DOJ issues further guidance on this issue, and DOJ’s policy announcement is a good reminder to seek competent counsel whenever questions arise.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.