Agreement on the EU’s AI Act Sets the Stage for Comprehensive Regulations
On December 8, 2023, representatives of the Council of the European Union and the European Parliament reached a provisional agreement on the EU’s Artificial Intelligence Act (“AI Act”). Although the final text of the AI Act remains subject to further revision by lawmakers and endorsement by member states’ representatives, the provisional agreement previews the expansive scope of the forthcoming regulations. The AI Act is not expected to take effect until 2026, but stakeholders doing business in the EU should review the terms of the provisional agreement and prepare well in advance.
Prohibited Uses of AI
The AI Act prohibits the use of artificial intelligence (“AI”) systems that pose the greatest potential threats to democracy and the rights of EU citizens. Among other things, the AI Act will ban AI applications that engage in untargeted scraping of facial images to create facial recognition databases, enable social scoring based on social behavior or personal characteristics, and perform biometric categorization using sensitive characteristics such as race, religion, and sexual orientation.
Negotiators agreed, however, on certain limited exceptions for law enforcement. Authorities may, for example, use AI-powered remote biometric identification systems in publicly accessible spaces for law enforcement purposes. But such use is limited to circumstances related to a defined list of crimes and requires prior judicial authorization.
Regulation of High-Risk AI
AI systems that pose a somewhat lesser, but still significant, potential for harm to health, safety, fundamental rights, the environment, democracy, and the rule of law are classified as “high risk” and are permitted by the AI Act, subject to heightened regulatory requirements. Examples of high-risk AI systems include those related to employment, elections, banking, insurance, and critical infrastructure management. In addition to transparency-focused requirements, developers of high-risk AI systems must prepare a “fundamental rights impact assessment” before the system is put into the EU market. Public entities that use high-risk AI systems will be required to register those systems with the EU. Additionally, the AI Act gives EU citizens the right to file complaints about AI systems and to receive explanations about decisions based on high-risk systems that affect their rights.
Requirements for General-Purpose AI Models
General-purpose AI (“GPAI”) models, which can perform a wide range of distinct tasks, are also regulated by the AI Act. They are subject to an array of transparency requirements, including producing technical documentation, demonstrating compliance with EU copyright law, and providing detailed summaries of the content used to train them.
Certain “high-impact” GPAI models — more advanced AI models capable of posing systemic risks — are subject to additional regulations. Developers of these models must conduct evaluations, assess and mitigate risks, conduct adversarial testing, ensure the security of the models from cyber threats, report serious incidents, and describe the models’ energy efficiency.
Enforcement and Scope
The AI Act includes some significant exceptions. It will not apply to AI systems used exclusively for military or defense purposes, or solely for research and innovation purposes. Nor will it apply to individuals using AI for non-professional reasons.
For covered systems, the AI Act will impose penalties that scale with the seriousness of the violation, levying the higher of a fixed fine or a percentage of the violator’s annual turnover. The fines amount to €35 million or 7% for using banned AI applications, €15 million or 3% for violating most other obligations, and €7.5 million or 1.5% for supplying incorrect, incomplete, or misleading information to regulatory bodies. But the provisional agreement provides for more proportionate fine caps for startups and other smaller enterprises.
The AI Act will also establish an AI Office within the European Commission. This body will oversee the most advanced AI models, set standards, and enforce rules. An AI Board established by the AI Act will coordinate the implementation of the regulations across member states, and it will be advised by a forum of various stakeholders, including industry representatives, academics, and others.
Looking Ahead
Over the coming weeks, lawmakers will finalize the AI Act’s language, including its technical requirements. We at V&E are closely monitoring these developments to help clients stay informed of the AI Act’s proposed requirements. Major technology companies are not the only entities that will be affected by these regulations. In an increasingly globalized world, businesses across sectors will need to ensure that AI systems they use comply with applicable regulations, and when doing business in the EU, that will mean complying with the AI Act.
This information is provided by Vinson & Elkins LLP for educational and informational purposes only and is not intended, nor should it be construed, as legal advice.