Blog | 10/25/2024

From State to Global: Comparing U.S. State Laws to the EU’s Comprehensive Approach

Team Contact: John Rondini, Muhammad Siwani


Over the past year, a wave of legislation has been enacted regarding Generative AI (GenAI) systems. In early 2024, Utah passed the AI Policy Act, a bill focused on transparency in business uses of GenAI that requires clear disclosures in interactions involving AI. Less than a month later, Colorado enacted its own comprehensive AI regulation (SB 205), which places broader obligations on developers and deployers of GenAI systems. Just a few months after that, California passed AB 2013, a law imposing new compliance requirements on developers of GenAI systems or services. Each of these state laws shares the common goal of promoting transparency, but their approaches, and the challenges they present, differ. Moreover, while these bills mark progress in addressing certain aspects of AI, they are still not as robust as the EU AI Act, which will be fully phased in over the next few years.

First, the Utah AI Policy Act addresses the use and disclosure of GenAI during consumer interactions. For individuals who provide services within a “regulated occupation,” the Act imposes transparency obligations when GenAI systems interact with customers. For those who violate these obligations, the Act establishes accountability standards, including fines for noncompliance, which may be enforced by the Utah Division of Consumer Protection and the Attorney General. While it does not address all the policy and legal issues GenAI systems may present, Utah was one of the first states to enact legislation governing the use of GenAI in the context of consumer protection.

In contrast, Colorado’s SB 205, set to take effect in February 2026, introduces comprehensive regulations on the use of “high-risk” AI systems. The bill defines “high-risk” systems as those that play a significant role in making a “consequential decision,” i.e., one that affects the availability, cost, or terms of services to a consumer in areas such as education, healthcare, or insurance. The law requires that when AI is used to make “consequential decisions,” consumers be notified and given explanations for adverse decisions, along with the right to correct their data and to appeal. Both “developers” and “deployers” of regulated AI systems must conduct periodic impact assessments, address any instances of “algorithmic discrimination,” and report findings to the Attorney General within 90 days.

In a different twist, California’s AB 2013 takes a broader approach than either of those bills. It requires developers to disclose not only the intellectual property used in training datasets but also whether any personal information, as defined by the California Consumer Privacy Act (CCPA), is used to train GenAI systems or services. The law therefore extends its regulatory reach to privacy concerns, requiring companies to maintain detailed documentation and publish summaries disclosing whether personal information was used to train a GenAI system or service. Additionally, the law requires developers of GenAI systems developed or modified after January 1, 2022, to retroactively compile and disclose the training datasets used, a task that could be burdensome for companies with incomplete records. While AB 2013 attempts to balance transparency with privacy and IP protection, it falls short in areas like AI ethics and broader governance frameworks.

While a significant start, these state laws fall short of the more comprehensive scope of the EU AI Act. The EU Act establishes a far more robust regulatory framework that begins with transparency and IP protection but, unlike the U.S. bills, also includes provisions addressing ethical GenAI deployment, risk management, and consumer protection. It categorizes GenAI systems by risk level, subjecting high-risk systems to stringent transparency, oversight, and human-intervention requirements. The EU Act also emphasizes accountability in GenAI development, striving to ensure that safety, privacy, and fundamental rights are at the core of system design.

As with privacy laws, individual states are paving the way on GenAI legislation. But the current state laws are differently focused, each addressing a specific AI use and application. Over the next few years, state laws will likely come to mirror the more comprehensive EU Act, much as states enacted privacy laws modeled on the EU General Data Protection Regulation (GDPR). As with privacy, companies should therefore begin modifying or enacting policies that address not only currently enacted GenAI legislation, but also future legislation that will more closely resemble the comprehensive EU Act.

 
