On 2 August 2025, the European Union’s Artificial Intelligence Act (AI Act) stopped being a looming policy threat and became binding law, at least for one critical sector. Obligations for providers of general-purpose AI (GPAI) models officially came into force for any model placed on the market on or after this date.
The European Commission (EC) and the newly established EU AI Office had spent the preceding months in overdrive. They released their Guidelines for providers of general-purpose AI models and a final General-Purpose AI Code of Practice (the Code), which the Commission formally approved on 1 August. These documents serve as the first real interpretative framework for what compliance will look like in practice.
The Guidelines interpret the legal obligations. The Code offers a menu of specific, recommended measures that, while voluntary, are designed to help GPAI providers demonstrate that they meet the AI Act’s standards. Transparency, copyright safeguards, and safety/security measures are threaded throughout, though the Commission admits the Code itself cannot override the law.
Big names sign on early, except when they don’t
Twenty-six companies became the inaugural signatories to the GPAI Code of Practice. The list reads like a who’s-who of Big Tech and AI: Amazon, Google, Microsoft, IBM, OpenAI, France’s Mistral AI, and Germany’s Aleph Alpha all signed.
Not everyone was eager to align, though. Meta refused outright, declaring the Code a brake on innovation and warning that “Europe is heading down the wrong path on AI.” The company will still be bound by the AI Act’s obligations, but it will have to prove compliance without the Code’s ready-made framework.
xAI, the company behind X’s Grok chatbot, chose a halfway house, signing only the Safety and Security chapter. For transparency and copyright, it will have to chart its own compliance route. On a personal note, this is hardly a surprise, given the misogyny and disinformation that routinely trend on X.
Even among those who signed, enthusiasm was mixed. Google’s president of global affairs, Kent Walker, called the final Code “closer to supporting Europe’s innovation and economic goals,” but maintained that both the AI Act and the Code still risk slowing Europe’s AI development.
Under the Commission’s rules, providers with existing GPAI models on the market needed to sign before 1 August; new entrants can sign later. From 2 August, all 27 EU member states were also expected to have designated national oversight authorities to enforce the AI Act. Breaches could carry fines of up to €15 million or 3% of a company’s global turnover, whichever is higher.
Transparency meets copyright
In theory, the AI Act is a milestone: the world’s first comprehensive legal framework for AI, designed to keep systems “safe, transparent, traceable, non-discriminatory and environmentally friendly.” It classifies AI into four risk tiers: minimal, limited, high, and unacceptable, with outright bans on the last category (such as manipulative AI or social scoring systems).
Most generative AI, whether image, text, or music, sits in the minimal risk category. Even so, providers must now publish summaries of the copyrighted datasets used in training. This requirement is, on paper, a win for transparency.
But transparency without enforcement is just polite disclosure. For artists, composers, and writers, the Act still leaves a gaping hole: there is no clear, enforceable way to opt out of having one’s work scraped for AI training. Nor is there a mechanism for retroactive compensation for works already ingested by models.
As we’ve covered in previous articles, most notably in our breakdown of Disney and Universal v. Midjourney, this is the central flaw in Europe’s copyright and AI regime. Rights holders can “reserve their rights” under EU copyright law to block text and data mining, but there’s no functional system to make that reservation meaningful in the AI training context. As Marc du Moulin, secretary general of the European Composer and Songwriter Alliance (ECSA), put it:
“You don’t know how to opt out, but your work is already being used.”
That sentiment is echoed by GESAC, the European Grouping of Societies of Authors and Composers, whose general manager, Adriana Moscoso del Prado, says her members have tried the direct route: emails and letters to AI firms requesting licenses for their content. The result? Silence.
The GPAI Code of Practice gestures toward copyright protection, committing signatories to adopt copyright policies, put safeguards in place, and run a designated complaints process. But those are voluntary undertakings, and the Act itself offers no retroactive relief. Anything scraped before 2 August is, as du Moulin says, “a free lunch for generative AI providers who did not pay anything.”
All of this is fine on paper, but who protects me, as a creator? Will I have to stand up for myself, as I did against big names like Crunchyroll, and fight copyright overreach and corporate hypocrisy on my own? I managed my way out legally and loudly, but would an average user manage the same? Read more on that here.
The lawsuits circling the field
The copyright tension isn’t hypothetical; it has already been in court for a while. Germany’s Society for Musical Performing and Mechanical Reproduction Rights (GEMA) has filed lawsuits against OpenAI and Suno AI, targeting AI music generation.
While these cases aren’t strictly under the AI Act, their outcomes could shape how the Act’s copyright provisions are interpreted. If courts hold that training without licensing constitutes infringement, it could force the Commission to rethink its current hands-off stance on retroactive use.
The European Court of Justice has signalled that it will review the text and data mining exceptions introduced in 2019, the very loophole that allows companies to scrape copyrighted works unless explicitly blocked. But until that review yields results, the system is still tilted toward AI companies.
We’ve seen similar dynamics in the United States, where our previous coverage of Anthropic v. Authors Guild and the sprawl of generative AI copyright litigation shows the same core issue: copyright frameworks designed for a pre-AI world cannot keep pace with industrial-scale dataset harvesting.
A regulatory cart before the horse
Even supporters of the AI Act admit the regulation arrived late to the party. By the Commission’s own timeline, new AI companies have until 2026 to fully comply, with existing operators getting until 2027. In AI years, that’s an eternity, more than enough time for another generation of models to be trained on unlicensed work. Let me ask you: three years ago, AI-generated images had seven fingers; now Midjourney and ChatGPT produce polished posters with accurate text. What will we create, and destroy, even unknowingly, in those two years?
This “cart before the horse” problem is structural. The law assumes the market can be nudged toward good behaviour through voluntary codes and post-hoc transparency. But the datasets that built today’s leading models are already locked away in corporate silos, immune to future licensing requirements.
For creators, that means the AI Act is more about regulating the next training run than righting the wrongs of the last one. That might make sense for operational feasibility, but it leaves the cultural and economic damage to artists unaddressed.
Innovation vs. accountability: a tug-of-war over modern systems
Meta’s refusal to sign the GPAI Code is telling. Its claim that Europe is “heading down the wrong path” is part of a broader industry push to frame regulation as a brake on innovation. Google’s more diplomatic version, supportive language wrapped around clear reservations, illustrates the same tension.
The truth is that regulation is a brake, and deliberately so. The point of the AI Act is to slow down the riskiest behaviours and force providers to bake in safeguards. The point is NOT to stall technological progress, but to encourage the right ways to use it. The industry wants speed; the lawmakers want guardrails. The collision was inevitable.
As someone in the biotech industry, let me ask you to imagine medicine without regulation. You think animal testing would be the biggest cruelty? It took the thalidomide catastrophe to trigger the formation of strict FDA rules. In the late 1950s and early 1960s, thalidomide was prescribed in 46 countries to women who were pregnant or who subsequently became pregnant, resulting in the “biggest anthropogenic medical disaster ever”: more than 10,000 children born with a range of severe deformities, as well as thousands of miscarriages.
Do we really need a catastrophe to understand that regulation is a bureaucratic burden that exists for a reason?
What *probably* happens next
From here, a few things seem clear:
- Corporate compliance will become a staged performance. Signatories will tout their Code adherence; non-signatories will promise “equivalent measures” and lobby for looser interpretations.
- National oversight authorities, many still scaling up, will be tested on their ability to enforce a complex, cross-border law against some of the world’s most powerful companies.
- Creators’ groups will keep pressing for mandatory blanket licensing schemes, and for the Commission to close the opt-out loophole before the next wave of model training.
- Litigation will act as the real enforcement mechanism in the short term, as in the GEMA lawsuits, with court rulings potentially doing more to define copyright boundaries than the AI Act itself.
Why this matters for AI literacy
As we’ve argued in earlier pieces, transparency in AI isn’t just about “trust”; it’s about creating the conditions for informed public oversight. If you don’t know what went into a model, you can’t meaningfully debate whether its outputs are ethical, lawful or socially acceptable.
The AI Act’s transparency requirements are a start, but they still place the burden of action on individuals and rights holders. For everyday users, the law will do little to change the opacity of commercial AI systems in the near term.
For creators, the Act represents a partial victory: a recognition that copyright matters in AI training, but without the teeth to make that recognition retroactive or automatic.
For the industry, the law is both a compliance hurdle and a shield. Those who meet its baseline can claim to be “responsible AI” providers, regardless of whether their datasets were ethically sourced.
And for the public, the stakes are higher than most realise. Knowing not just how to use a system, but how it was built, is the foundation for meaningful engagement in the policy debates ahead. Without it, the conversation stays stuck in corporate talking points and abstract fears.
The unfinished work
The EU AI Act is the first of its kind, but it is not the finish line. It leaves retroactive harms unaddressed, punts on licensing, and relies on voluntary codes to shape corporate behaviour.
We’ll be returning to the copyright fight in depth in a forthcoming piece, unpacking the GEMA lawsuits, the potential overhaul of the text and data mining exception, and the cross-Atlantic parallels in AI litigation.
For now, the 2 August milestone is a reminder that regulation is not a single event but a long, ongoing process. The AI Act may be in force, but the story doesn’t end here. Beyond compliance deadlines and copyright loopholes lies another front entirely: the influence game. Lobbyists, industry associations and advocacy groups are already shaping how the AI Act and future legislation will be interpreted and enforced. That fight deserves its own spotlight, and we’ll turn to it in our next piece.