Blog | 10/15/2024

Understanding the Global Impact of the European Artificial Intelligence Act

Team Contact: John Rondini


The European Union’s Artificial Intelligence Act (AI Act) is significant legislation that will affect businesses using AI technologies. As in-house counsel, especially if you are unfamiliar with artificial intelligence, it’s crucial to understand how this regulation could impact your company’s legal responsibilities, business practices, and intellectual property.

What is the AI Act?

The AI Act regulates the use of AI in the European Union (EU). It applies to any company that offers AI products or services in the EU, even if that company is based elsewhere, including the United States. If your company develops, sells, or uses AI systems that could reach European customers, you should understand how this law may apply to your organization.

The AI Act aims to ensure that AI technologies are safe and respect fundamental rights, such as privacy and non-discrimination. It does this by classifying AI systems into different categories based on the risk they pose and applying specific rules depending on their classification.

Why is the AI Act Important to Understand?

For in-house legal teams at U.S. companies, the European Artificial Intelligence Act (AI Act) will have significant implications, much like the EU’s General Data Protection Regulation (GDPR). Despite being an EU regulation, its territorial scope extends beyond Europe, applying to any company—inside or outside the EU—that develops, deploys, or offers AI systems that impact the European market.

The AI Act sets out harmonized rules for placing AI on the market and governing its use. It requires companies to meet certain obligations depending on the level of risk posed by their AI applications. High-risk AI systems will face stricter requirements, including risk assessments, transparency measures, and continuous monitoring.

U.S. companies operating in the EU, or offering products and services there that utilize AI, will need to comply with these rules. Understanding the risk levels associated with various AI applications will become necessary. The Act also introduces significant financial penalties for non-compliance, underscoring the need for early attention to these requirements.

The AI Act is likely to become a global standard in AI regulation, so in-house legal teams should begin preparing for its implementation. Such preparation includes advising on risk assessments, updating internal policies, and ensuring transparency and accountability in AI deployment to mitigate the risk of penalties and align with the emerging regulatory landscape in Europe. Early preparation will be vital in staying ahead of this new wave of regulation.

What are the Key Provisions I Need to Know?

For U.S. in-house legal teams, understanding the AI Act is critical, as its reach extends well beyond the borders of the EU. Here’s a breakdown of the key components of the Act that will likely affect your organization:

The AI Act classifies AI systems into four categories based on the level of risk they pose. Compliance timing and penalties vary depending on the category into which an AI system falls.

  • Prohibited AI: These AI systems are considered too dangerous and are banned entirely. Examples include AI that manipulates people in harmful ways and social scoring systems that may cause unfavorable treatment of a person.
  • High-Risk AI: These systems are allowed but face additional regulations. They include AI used in areas like critical infrastructure (e.g., power grids), education, employment (e.g., resume screening), law enforcement, and healthcare. If your company develops or uses high-risk AI, you should ensure that these systems meet strict safety, transparency, and accountability standards.
  • Limited-Risk AI: These systems pose fewer risks and, therefore, have lighter regulations. For instance, you may only need to inform users when they are interacting with an AI system (e.g., chatbots).
  • Minimal-Risk AI: Older AI applications (e.g., spam filters or AI-enabled video games) are not regulated under the AI Act, as they are already subject to existing legislation. However, updating or revising a system to incorporate newer generative AI may bring it within the AI Act’s compliance requirements.

The AI Act has a wide scope, applying to multiple types of entities. Again, depending on the AI system being deployed, companies outside the EU may fall within one of the categories below and, therefore, be subject to the provisions of the AI Act.

  • Providers: Companies that develop AI systems.
  • Deployers: Companies that use AI in their operations (e.g., a bank using AI for credit scoring).
  • Manufacturers and Distributors: Entities that bring AI products to the EU market, whether by production or importation.

Regarding timing, certain portions of the AI Act require compliance beginning in February 2025, and full compliance is required by August 2027. For example, the ban on prohibited AI practices takes effect in February 2025, with obligations for high-risk AI systems phasing in afterward.

Failure to comply with the AI Act can result in severe financial penalties, including fines of up to €35 million or 7% of your company’s total global revenue, whichever is higher. The penalties associated with the AI Act highlight the importance of understanding and adhering to the regulations, even if your company is based outside the EU but provides AI products or services to EU customers.

In-house legal teams should prioritize understanding these regulations and begin shaping compliance strategies. Proactively addressing these requirements will mitigate legal risks and help ensure your company’s operations continue uninterrupted as the AI Act comes into full force.

What Does This Mean for Your Company?

As in-house counsel, there are several key areas to focus on:

  • Regulatory Compliance: Ensure your company’s AI products and services comply with the new law. Such a review could involve conducting risk assessments, documenting AI systems, and ensuring transparency. High-risk AI systems will need to undergo conformity assessments to verify that they meet the required standards before they can be placed on the market.
  • Data and Intellectual Property: The AI Act has important implications for how data is used in AI systems. AI systems often rely on vast datasets, including personal data or proprietary information. You’ll need to ensure that your company’s use of data complies with the AI Act’s requirements on transparency, fairness, and non-discrimination. Additionally, you may need to carefully manage intellectual property, particularly if the AI Act requires disclosure of how your AI systems function.
  • Contracts and Licensing: If your company licenses or purchases AI systems from other providers, the contracts should account for compliance with the AI Act. Ensure that partners and vendors meet the regulatory requirements, and review your company’s liability exposure for AI-related risks.

What Should I Do To Prepare for the AI Act?

In summary, the European AI Act brings significant changes for companies that develop or use AI technologies. It’s essential to begin preparing for compliance now by:

  • Conducting an audit of your AI systems to determine their risk classification.
  • Reviewing current internal procedures and either updating or creating policies to meet the new regulatory requirements.
  • Collaborating with your IT and AI development teams to ensure compliance without compromising innovation.

By understanding the AI Act and its potential impact on your company’s operations, you’ll be better equipped to protect your organization and ensure that it continues to innovate while complying with this important legislation.
