Feb. 2, 2024

EU Artificial Intelligence Act: What Is It and How Does It Compare to U.S. AI Laws?

New York Law Journal

The AI Act aims to govern the development and use of AI in EU member states and, under certain circumstances, in other countries

Generative artificial intelligence (GenAI) has increasingly captivated public interest, prompting various efforts to establish rules and regulations governing the development and use of the technology.

This article provides an overview of the procedural history and key provisions of the European Union’s (EU) Artificial Intelligence Act (AI Act), a landmark regulation aimed at governing the development and use of AI in EU member states and other countries under specific circumstances. It will also examine the potential impact and interaction of the AI Act with the General Data Protection Regulation (GDPR), among other laws, rules and regulations.

Procedural History of the AI Act

The European legislative process generally begins with the European Commission (Commission) proposing a regulation, which is reviewed by the Council of the European Union (Council) and the European Parliament (Parliament). The Council is composed of government ministers representing each EU member state, while members of Parliament are elected directly by EU citizens. The Council and Parliament meet to reconcile their respective versions of a proposal, with the Commission acting as mediator, in a process known as the trilogue.

Subsequently, the Council’s Committee of Permanent Representatives endorses the agreed text and submits it for formal adoption by the Council. Once adopted, the regulation takes effect following publication in the Official Journal of the European Union. Regulations generally apply throughout the European Economic Area, which includes the EU countries as well as Iceland, Liechtenstein and Norway (hereinafter, EU for ease of reference).

On April 21, 2021, the Commission proposed a regulation known as the AI Act, which was sent to the Council and Parliament. The Council reviewed and discussed the regulation in committee meetings, then adopted its version on Dec. 6, 2022. In parallel, Parliament reviewed and discussed the regulation in its own committee meetings, then adopted its version, with substantial amendments, on June 14, 2023.

In contrast to the Commission and the Council, Parliament proposed to regulate providers of foundation models, i.e., models trained on large volumes of publicly available data through self-supervision, such as the large language models that underpin tools generating text, music and art, among other content.

The trilogue began in June 2023, but negotiations stalled when France, Germany and Italy sought to eliminate Parliament’s proposal to regulate these providers, proposing instead that providers self-regulate through codes of conduct. Negotiations resumed and, following three days of marathon talks, the Council and Parliament reached a provisional political agreement on Dec. 9, 2023. Notably, foundation models were excluded from the most recent draft.

The Council aimed to finalize adoption on Feb. 2, 2024, but the AI Act is likely to undergo further revisions. Once published, the regulation will generally take effect after 24 months.

Key Provisions of the AI Act

The AI Act marks the first comprehensive attempt to regulate the development and use of AI systems. An AI system is defined by the AI Act as a “machine-based system designed to operate with varying levels of autonomy” (AI Act, Art. 2(5g)) that can generate outputs such as predictions, content, recommendations or decisions.

While an AI model is an integral component of an AI system, it does not constitute an AI system on its own. A general-purpose AI model is one that, when trained on a substantial amount of data through self-supervision, can execute a wide range of tasks and be integrated into other systems or applications. Generative AI is an example of a general-purpose AI model (Recital 60(c)).

The AI Act applies to:

  • Providers and deployers of AI systems, regardless of whether they are established in the EU or in a third country, where the system’s output is used within the EU. A provider, be it an individual or legal entity, public authority, agency or other body, is involved in the creation, provision or distribution of an AI system or a general-purpose AI model for commercial activity (Art. 2(5g)). A deployer, meanwhile, is one that uses an AI system.
  • Importers and distributors of AI systems; product manufacturers that place AI systems on the market under their own name or trademark; representatives of providers that are not established in the EU; and affected persons located in the EU.

The AI Act has extraterritorial reach, meaning that U.S.-based providers, provider representatives and deployers may be subject to the AI Act if, for example, the output of their AI systems is used within the EU.

AI systems used exclusively for military, defense or national security purposes are excluded from the AI Act’s scope (Art. 2(3)).

Prohibited AI Practices and High-Risk AI Systems

The AI Act defines prohibited AI practices (Art. 5(1)) and high-risk AI systems, with heightened compliance requirements for high-risk systems (Art. 6(1), 8-15).

  • Prohibited AI practices include, but are not limited to, AI manipulation or distortion of a person’s behavior (Art. 5(1)(a)).
  • High-risk AI systems include, but are not limited to, AI used for certain biometrics, employment decisions and creditworthiness evaluation (Art. 6, Annex II-III).

Providers of high-risk AI systems must comply with certain requirements, including implementing a risk management system and a data governance program (Art. 9-10) and providing deployers with information on the purpose of the system and its foreseeable risks, among others (Art. 13).

Fines for Non-Compliance and Implementing Offices

Non-compliance can result in fines of up to 35 million euros or 7% of total worldwide annual turnover, whichever is higher (Art. 71(3)). Certain other violations can result in administrative fines of up to 15 million euros or 3% of total worldwide annual turnover, whichever is higher (Art. 71(4)).
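To illustrate the higher-of-the-two formula, consider a hypothetical provider with total worldwide annual turnover of 1 billion euros (an assumed figure for illustration only): 7% of 1 billion euros is 70 million euros, which exceeds the 35 million euro fixed amount, so the maximum fine would be 70 million euros. The fixed amount controls only where turnover falls below 500 million euros, since 7% of 500 million euros equals 35 million euros.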

The AI Act calls for the establishment of a European AI Board (Board) and an AI Office.

The Board will be composed of member state representatives and will assist the Commission and member states in facilitating implementation of the AI Act (Art. 56, 58). The Board will also collaborate with the AI Office, which will sit within the Commission and contribute to the implementation and oversight of AI systems and AI governance (Recital 75(a)).

The AI Act and the GDPR

The AI Act expressly states that it does not seek to affect the application of existing EU law, recognizing the importance of safeguarding the fundamental right to protection of personal data, particularly as set forth in the GDPR and the EU Law Enforcement Directive (EU Laws) (Recital 5a). The AI Act neither obviates the obligations of providers or deployers of AI systems acting as data controllers or processors nor guarantees compliance with such EU Laws.

The AI Act References the GDPR

Although the GDPR does not explicitly mention AI, several provisions may be relevant to the processing of personal data in the context of AI, some of which are expressly acknowledged in the text of the AI Act.

Article 10(5) of the AI Act provides a condition of substantial public interest for the processing of special categories of data under the GDPR. Special categories of data are defined in Article 9(1) of the GDPR as personal data revealing racial or ethnic origin, political opinions and religious beliefs, among others. This condition under the AI Act is intended to enable the detection and correction of bias in high-risk AI systems.

The AI Act also specifies a condition for when data protection impact assessments (DPIAs) should be conducted by deployers pursuant to the GDPR. Specifically, Article 29(6) of the AI Act requires deployers of high-risk AI systems to carry out a DPIA as defined in Article 35(1) of the GDPR.

U.S. AI Rules, Laws and Regulations Compared With AI Act

Unlike the EU’s comprehensive AI regulation, U.S. AI regulation consists of a patchwork of rules, legislation and executive orders.

To date, over 25 states have introduced AI laws. Some states created government task forces to investigate uses of AI (e.g., Hawaii senate resolution urging Congress to discuss the benefits and risks of AI [SR 123/SCR 179]; Connecticut senate bill establishing an office of AI to catalogue AI use and develop an AI Bill of Rights [S 1103]).

Other laws aim to protect consumers and/or employees (e.g., New York City’s Local Law 144 requires employers to provide candidates with prior notice of the use of automated employment decision tools and to subject such tools to bias audits [LL-144]; Illinois’ Artificial Intelligence Video Interview Act requires prior written notice for AI video analysis [820 ILCS 42]).

President Biden also issued an executive order on AI safety and security (to protect Americans from risks while catalyzing innovation and competition) and a Blueprint for an AI Bill of Rights (principles to “guide the design, use and deployment of automated systems to protect the American public”).

Courts in Colorado, Illinois, New York, Oklahoma, Pennsylvania and Texas, among others, have issued standing orders or implemented local rules governing the use of AI. These range from outright prohibition of AI use to requirements that attorneys verify AI-generated information and/or disclose certain information regarding their use of AI.

Looking ahead, AI will increasingly be regulated, even as the technology evolves. Enactment of the AI Act may prove to be a watershed moment in the same way the passage of the GDPR was: the GDPR became a template for similar, comprehensive data protection laws that continue to proliferate. The AI Act may likewise prompt a reevaluation of existing policies and instigate U.S.-EU negotiations, given its extraterritorial reach.

Anna Mercado Clark is a partner and chief information security officer at Phillips Lytle, as well as co-leader of the firm’s Technology Industry team. She can be reached at aclark@phillipslytle.com or (212) 508-0466. Paula P. Plaza is an attorney at Phillips Lytle and member of the firm’s Data Privacy and Cybersecurity Industry team. She can be reached at pplaza@phillipslytle.com or (716) 847-8324.
