In our last article on Disney & Universal v. Midjourney, we warned that the case was just the opening act of a much bigger fight over generative AI and copyright. Since then, the courtroom has gone quiet, but the stakes have only grown louder. This month, another lawsuit highlighted just how unprepared the law is. In Authors Guild v. Anthropic, a US judge ruled that Anthropic’s use of millions of pirated books to train its Claude chatbot did not violate copyright. To me, however, the company’s actions tell another story: an almost grotesque literalization of AI’s appetite for creative work.

The law may have sided with Anthropic for now, but these battles reveal something deeper: a disconnect between what is legal and what is ethical, and the costs that artists, readers, developers, and the public alike will bear if the industry continues unchecked.

The Memory of Machines

At the heart of these lawsuits lies a technical and philosophical question: how much does a machine “remember”?

As the previous article on Disney & Universal v. Midjourney made clear, the two sides stand far apart. Generative AI companies argue their models merely learn statistical patterns and do not store or reproduce works verbatim. Plaintiffs counter that outputs often closely resemble copyrighted material (sometimes even word-for-word or pixel-for-pixel), because models can memorize rare or unique data.

The Midjourney case exemplifies this: plaintiffs allege the system generates derivative works of copyrighted Disney and Universal characters, undermining their creative control. Midjourney insists these are “inspired” works, akin to a human drawing Mickey Mouse from memory.

In the Anthropic case, Judge William Alsup leaned heavily on this analogy, describing the Claude chatbot as “a reader aspiring to be a writer” – absorbing books, learning styles, and creating new material, which he deemed fair use. But this analogy falters at scale: no human “reader” ingests seven million books in weeks, nor destroys the originals in the process. That behaviour is uniquely industrial, and it underscores the profound asymmetry between creators and the machines consuming their work.

The Law Is Lagging Behind

The courts remain inconsistent and ill-equipped for these questions. The Midjourney lawsuit, alongside those filed against Stability AI, OpenAI, Anthropic, and others, has yet to establish clear precedent. Early rulings — such as US Judge Alsup’s — have favoured AI firms by framing training as transformative and beneficial to the public, thus more likely to fall under fair use.

But this reasoning often ignores the scale, automation, and economic displacement involved. Models are not simply “learning” like humans but exploiting massive, unauthorized datasets to compete against the very creators who made them possible.

The risk is that inconsistent rulings in one domain (e.g., text) could ripple into others (images, video, music), solidifying a dangerous status quo that favours corporate efficiency over creative rights. A clear ruling could either force meaningful licensing regimes or cement permissive norms that leave creators unprotected.

If the AI Industry Loses

If courts ultimately side with rights holders, the AI industry faces a reckoning. Likely outcomes include mandatory licensing of training data, court-ordered damages, and content blacklists to prevent future misuse. Some governments could even mandate closed-source models to enforce compliance and transparency.

These measures, while protective of creators, would have profound consequences for innovation. Startups could be priced out of the market entirely, leaving only Big Tech able to afford access. Research and experimentation would slow, and the open-source ecosystem, which has realistically fuelled much of AI’s progress, could collapse under the weight of legal risk.

In short, protecting creative rights without crushing open innovation will require a level of nuance the courts have yet to demonstrate.

If the Media Industry Wins Too Much

But there’s danger in overcorrecting. If the courts or legislatures grant media companies too much power, they risk stifling creativity and reinforcing the very monopolies that generative AI threatens.

Overly aggressive IP enforcement could limit legitimate forms of parody, homage, and transformative art, raising the barrier to entry for independent creators and codifying a stranglehold on cultural production. As history shows, copyright law often expands to protect incumbent interests rather than balance public and private good.

Let’s take a familiar example: a world where only Disney can draw Disney, and only under Disney’s terms, is not necessarily a victory for human creativity. Courts must strike a balance that protects creators without eliminating the freedom to remix, reimagine, and innovate.

The Anthropic Book Bonfire

Nowhere is the tension between legality and ethics more vivid than in Anthropic’s case.

When sued by authors for training its Claude chatbot on pirated books, Anthropic admitted it had acquired the works without permission. To “fix” this, it purchased physical copies of the books and destroyed them. According to Ars Technica, the company shredded, scanned, and discarded seven million volumes, digitizing the text while leaving nothing behind. There are less destructive ways to digitize books, but they are slower.

Judge Alsup ruled this did not constitute copyright infringement, again likening Claude to a student learning from books and creating something new. But the reality is far more cynical.

I’d like to stress that this is not a student quietly learning in a library. This is a corporation vacuuming up the entire library overnight, grinding the books into pulp, and declaring victory. It highlights the ruthlessness of an industry built on the promise of speed and scale, an industry willing to literally destroy literature to feed its models and avoid accountability. Very dystopian, I’d say. What’s next, burning books?

Even if the courts deem such practices legal, they betray a disregard for the cultural value of what is consumed.

Not Just About the Law

The copyright wars over AI are not simply about legality; more importantly, they are about values.

AI companies may win in court yet lose public trust, alienating the very creators, readers, and users who make their products viable. If the industry continues to treat human creativity as raw material to be strip-mined, it risks undermining not only culture but also its own legitimacy.

Yet we can’t ignore the reality that technology moves faster than the courts. Generative AI is already embedded in how people work, create, and communicate. Users aren’t going to stop using tools that unlock productivity and possibility, even if the industry behind them remains ethically messy.

Developers, artists, and the public all deserve better than an arms race of lawsuits and corporate overreach. The future of generative AI depends on crafting practices — and policies — that balance innovation with respect for those who created the culture it builds upon.

The law has spoken for now, but the larger question remains: how much, and what, are we willing to sacrifice just to keep the machines running?