Artificial intelligence (AI) technologies, exemplified by tools like ChatGPT, have permeated everyday life, raising significant regulatory challenges. As AI continues to evolve rapidly, both the European Union (EU) and the United States (US) are spearheading efforts to create comprehensive legislative frameworks to address a spectrum of concerns, from privacy and job displacement to security risks.

In this article, we examine the major differences between US and EU AI legislation, covering the key timelines and developments you need to know.

Let’s start with the EU AI Act.

The European Union began drafting the AI Act in 2018 in response to the rapid development and growing integration of AI technologies across sectors. Recognizing the potential implications of these technologies, the EU moved quickly to establish a framework that could both keep pace with technological advancement and address emerging risks.

The first draft of the AI Act, released in April 2021, did not adequately cover the significant advances that followed in generative AI: tools capable of producing human-like text, images, and other media, such as Stable Diffusion and Midjourney.

By February 2024, the legislative process had reached a crucial milestone: the 27 EU member states unanimously approved the revised AI Act.

On 21 May 2024, the Council of Ministers endorsed the EU AI Act, following its approval by the European Parliament, the EU's other key legislative body. The Act is set to enter into force in August 2024, with its requirements implemented in stages thereafter. Concurrently, work on harmonised European Standards is underway to facilitate the practical application of the Act's mandates. General Purpose AI models are anticipated to need to comply with these standards within 12 months of the Act's entry into force.

What does risk-based classification mean?

The AI Act takes a risk-based approach: AI systems are categorized by their intended use and the level of risk they pose. Regulatory experience suggests that the higher the risk, the longer the compliance process and the more detailed the documentation required. The EU will focus on overall transparency and the legality of training data, with an emphasis on sustainability and safety. The Act divides AI systems into the following risk categories (a short illustrative sketch of the tiering follows the list):

  • Unacceptable Risk: This category includes AI applications that are considered a clear threat to safety, rights, or democratic values. Examples include AI systems designed to manipulate human behaviour to circumvent users’ free will (e.g., exploitative AI that targets vulnerable populations), or systems that allow ‘social scoring’ by governments.
  • High Risk: AI systems in this category are subject to stringent compliance requirements due to their potential impact on critical areas such as healthcare, policing, and legal decision-making. For instance, AI used in recruitment, critical infrastructure, healthcare, and educational admissions must adhere to strict transparency and data-handling standards to prevent bias and ensure fairness.
  • Limited Risk: AI applications with more benign implications, such as AI-enabled video games or spam filters, fall under this category. These systems face far less scrutiny, with minimal obligations, chiefly transparency requirements, designed to ensure they do not become intrusive or manipulative.
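To make this tiering more concrete, here is a minimal, purely illustrative Python sketch of how an organisation might run a first-pass triage of its own AI use cases against these categories. The RiskTier enum, keyword map, and classify_use_case helper are all hypothetical simplifications, not anything defined by the Act itself; real classification turns on the Act's detailed annexes and qualified legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified mirror of the AI Act's risk tiers (illustrative only)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent compliance: transparency, data handling, documentation"
    LIMITED = "minimal obligations, chiefly transparency"

# Hypothetical keyword map for a first-pass internal triage; the Act's
# actual scoping rules are far more detailed and context-dependent.
TIER_KEYWORDS = {
    RiskTier.UNACCEPTABLE: ["social scoring", "behavioural manipulation"],
    RiskTier.HIGH: ["recruitment", "critical infrastructure",
                    "healthcare", "educational admissions", "policing"],
    RiskTier.LIMITED: ["video game", "spam filter"],
}

def classify_use_case(description: str) -> RiskTier:
    """Return the highest-risk tier whose keywords match the description."""
    text = description.lower()
    for tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED):
        if any(keyword in text for keyword in TIER_KEYWORDS[tier]):
            return tier
    return RiskTier.LIMITED  # default to the lightest tier sketched here

print(classify_use_case("AI screening of recruitment applications"))
# -> RiskTier.HIGH
```

The point of the sketch is simply that obligations scale with the tier: a system landing in the high-risk bucket triggers the strict transparency and data-handling duties described above, while a limited-risk system does not.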

Now let’s move on to an overview of the US AI legislative proposals.

SAFE Innovation Framework

Introduced in June 2023 by Senate Majority Leader Chuck Schumer, with bipartisan support from Senators Martin Heinrich (D-NM), Todd Young (R-IN), and Mike Rounds (R-SD), the SAFE Innovation Framework aims to guide the development of AI technology while ensuring it adheres to key principles of security, accountability, foundations, explainability, and innovation. Rather than establishing strict regulations, the framework provides a set of guiding principles. It also prompted the initiation of AI Insight Forums, closed-door educational sessions for Senators conducted by AI experts, the first of which was held in September 2023.

Bipartisan Framework for US AI Act

In September 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), both members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, introduced the Bipartisan Framework for US AI Act. This framework proposes more definitive policy measures than the SAFE Innovation Framework, focusing heavily on transparency and consumer protection, including specific provisions for protecting children. Key components of this framework include:

  • Licensing for AI Systems: The framework proposes establishing an independent body to oversee the licensing of AI systems.
  • Monitoring and Reporting: The same body would also be responsible for ongoing monitoring of AI developments and their economic impacts.
  • Section 230 Immunity: One notable policy proposal within this framework is the removal of Section 230 immunity for AI companies; Section 230 currently shields online platforms from liability for content generated by their users.
  • National Security Protections: Additional measures to prevent foreign adversaries from accessing advanced AI technologies are included to safeguard national interests.

National AI Commission Act

The National AI Commission Act was introduced by a bipartisan group of House members in June 2023. The proposed commission would consist of 20 members appointed by the President and Congress, with balanced representation from both political parties. The commission’s mandate includes:

  • Developing a Risk-Based Regulatory Framework: The commission is tasked with creating a comprehensive framework for AI regulation in the US that balances innovation with risk management.
  • Inclusive Stakeholder Engagement: Members would include experts from AI technology sectors, industry leaders, and government and national security specialists.
  • Reporting and Recommendations: The commission would prepare three reports over an 18-month period, detailing recommendations for AI policy and regulatory approaches.

Comparison of Regulatory Approaches: EU vs. US Strategies

  • Risk Assessment: The EU’s approach is heavily risk-focused, categorizing AI systems to tailor regulatory measures accordingly. In contrast, the US proposals generally emphasize fostering innovation alongside managing risk, reflecting a more balanced approach.
  • Innovation vs. Regulation: The EU’s stringent categorization could stifle innovation by imposing heavy restrictions on high-risk AI applications. The US, while aware of the risks, appears more focused on maintaining its technological leadership and innovation capacity.

Enforcement and Compliance

  • EU Challenges: The EU faces challenges in enforcing its comprehensive regulations across all Member States, necessitating significant coordination and resources.
  • US Prospects: The proposed frameworks in the US suggest establishing new bodies or enhancing existing ones to oversee AI development, aiming for effective compliance without overly hampering technological advances.

Implications for AI Development and Industry

The diverse regulatory landscapes in the EU and US suggest future challenges and opportunities for the AI industry:

  • Global Impact: The EU’s regulations may set a benchmark for other regions, influencing global standards and practices.
  • Commercial Development: The US’s emphasis on innovation could drive faster AI advancements and potentially influence global market trends.

The Future of AI Regulation

As the EU’s AI Act enters into force in 2024 and the US refines its regulatory frameworks, the outcomes will significantly influence the direction of AI technology development and governance. The interplay between stringent EU regulation and more flexible US proposals will likely shape global AI practices and industry growth, underscoring the need for ongoing international dialogue and cooperation in AI policy-making.