White House Issues First-of-its-Kind Executive Order on AI
On October 30, 2023, President Biden issued an Executive Order (“Order”) that drastically increased the U.S. government’s engagement with artificial intelligence (“AI”). The sweeping Order touches on everything from bias in the workplace to international relations. Organizations in all industries should take steps to understand the Order and the trend toward increased regulation of AI that it signals.
Standards Relevant to the Business Community
The Order affects the energy industry. The Department of Energy, along with a number of other climate- and energy-focused agencies (most notably the Federal Energy Regulatory Commission), is charged with issuing a report detailing the ways AI can improve electric system planning, permitting, investment, and operations. The Order also calls for increased public-private partnerships and industry-focused coordination efforts to better utilize new applications of AI to increase preparedness for climate-related risks, decrease permitting delays for clean energy and renewable resources, and enhance grid reliability and resiliency.
Per the Order, companies developing AI systems that pose a serious national security, national economic, or national public health and safety risk must notify the government when they train their models and must provide the results of any “red team” safety testing. In addition, the National Institute of Standards and Technology will develop new standards for red team testing of AI systems. The Department of Homeland Security, in turn, will establish the AI Safety and Security Board and apply those standards to critical infrastructure projects.
Healthcare and life sciences companies face implications, too. The Order advances the use of AI in developing pharmaceuticals and directs the Department of Health and Human Services to implement a program to receive reports and to remedy harmful or unsafe healthcare practices using AI. Organizations with federally funded life-science projects must comply with new standards developed by the agencies funding their projects, which guard against the risks of using AI to engineer dangerous biological materials.
The Biden administration will also issue guidance to landlords and federal contractors aimed at preventing algorithmic discrimination and other ways AI can exacerbate bias. Through the coordination of the Department of Justice and other federal civil rights offices, the Order also signals the Biden administration's intent to increase prosecution of civil rights violations related to AI.
In the workplace, the Order directs the development of principles and best practices to address a variety of labor concerns, including bias and discrimination, job displacement, data collection, labor standards, and workplace equity, health, and safety. Employers should keep these principles in mind as they craft their workplace policies and procedures.
Cybersecurity and Data Privacy Implications
The potential for bad actors to use AI to perpetrate fraud, data breaches, and other deceptive acts is a major concern for both the public and private sectors. To combat this danger, the Order directs the Department of Commerce to develop best practices for detecting AI-generated content and for authenticating official government content.
The Order also addresses several areas of concern related to data privacy. For example, it directs the National Science Foundation to fund a “Research Coordination Network” dedicated to accelerating the development of privacy-preserving techniques and technologies.
Intellectual Property Implications
Another purpose of the Order is to protect inventors and creators. Pursuant to the Order, the government will issue guidance to patent examiners and applicants regarding the use of AI in the inventive process. Later guidance will address other, currently unspecified issues at the intersection of AI and intellectual property, which could include questions of patent eligibility. The Order also directs the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office, in consultation with the Director of the United States Copyright Office, to issue recommendations on potential executive actions relating to copyright and AI.
To assist developers of AI, the Order calls for the creation of a training, analysis, and evaluation program to mitigate AI-related intellectual property risks. This program will provide guidance and other resources to private sector actors. Importantly, it will facilitate information sharing between AI developers and law enforcement personnel to identify incidents, inform stakeholders of legal requirements, and evaluate AI systems for violations.
Antitrust Implications
Finally, the Order addresses competition issues raised by the recent surge in AI usage across industries. Importantly, the Order “encourag[es] the Federal Trade Commission to exercise its authorities” in the area of competition and AI. The FTC has recently emphasized its scrutiny of competition in AI development and deployment through multiple AI-focused panels, as well as by publishing the blog post “Generative AI Raises Competition Concerns.” The FTC has a variety of tools at its disposal to regulate competition in AI markets, including investigations of mergers and acquisitions that implicate competition for the development or deployment of AI tools, as well as investigations into whether one firm is foreclosing its rivals from accessing essential inputs for AI, such as data, computing power, hardware, and foundation models. Additionally, the Order details the administration’s plans to create the “National AI Research Resource,” which would provide AI researchers access to key “resources and data” with the goal of lowering barriers to entry and increasing competition in AI. The Order reflects the administration’s focus on regulating both the conduct of firms engaging in AI development and the building blocks that are important for that development.
What This Means for You
Although the standards and directives outlined in the Order are still in their infancy, their implications are far-reaching. Increased funding in the AI sector could lead to a new wave of opportunities for growth and productivity, but heightened regulation could lead to complex compliance challenges. In light of these circumstances, some practical tips include:
- Increasing employee training and conducting realistic phishing simulations to reduce the risk of AI-powered intrusions;
- Implementing and continually enhancing multilayer defense strategies;
- Evaluating the purposes of AI tools and weighing the risks and benefits of such tools in achieving those purposes;
- Ensuring that precautions are taken to protect trade secrets and confidential information when such information is input into, or could be derived from, various AI models; and
- Checking your cyber insurance policies to ensure compliance with any conditions of coverage.
V&E assists clients in identifying, managing, and mitigating risks associated with emerging technologies, from early planning and assessment to compliance, managing incident response, and resulting litigation.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.