A San Francisco federal judge has delivered a mixed ruling in a copyright clash between Anthropic, an AI company backed by Amazon and Alphabet, and authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson.
Judge William Alsup ruled that using books to train Anthropic’s Claude AI model qualifies as “fair use” under U.S. copyright law, but storing millions of pirated books crosses a legal line.
Alsup likened Anthropic’s AI training to a “reader aspiring to be a writer” who studies books “not to race ahead and replicate or supplant them” but to “turn a hard corner and create something different.”
Anthropic leaned into this, arguing that U.S. copyright law “not only allows, but encourages” its AI training because it promotes human creativity.
The company explained that its system copied the books to “study plaintiffs’ writing, extract uncopyrightable information from it, and use what it learned to create revolutionary technology.”
An Anthropic spokesperson added, “We are pleased the court recognised [our] AI training was transformative and consistent with copyright’s purpose in enabling creativity and fostering scientific progress.”
However, the judge wasn’t so lenient about Anthropic’s storage of over 7 million pirated books in what he called a “central library of all the books in the world” not necessarily used for AI training.
This, Alsup ruled, violated the authors’ copyrights. He noted that Anthropic’s later purchase of millions of print books doesn’t erase the initial theft: “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft.”
A December trial will determine damages, which could reach $150,000 per work for willful infringement. Other AI giants like OpenAI and Meta face similar accusations of using pirated digital books to train their systems.
This case, part of a proposed class action filed last year, reflects a broader battle between AI companies and creators.
AI firms argue that their systems transform copyrighted material into innovative tools and that paying copyright holders could “hamstring the nascent industry.”
Meanwhile, authors and publishers in the U.S. and UK claim AI companies are “unlawfully copying their work to generate competing content that threatens their livelihoods.”
The ruling has no direct impact in the UK, where the equivalent defence is far narrower, according to Giles Parsons of Browne Jacobson. “The UK has a much narrower fair use defence which is very unlikely to apply in these circumstances,” he said.
UK copyright law allows limited use of protected works for research, but a proposed change to permit broader use—unless authors opt out—has sparked fierce opposition from the creative industries.
John Strand, a copyright lawyer at Wolf Greenfield, called the ruling “very significant,” noting its influence on dozens of similar U.S. cases. “The expectation is that at some point the primary question of whether training LLMs on copyrighted materials is fair use likely will be addressed by the U.S. Supreme Court,” he said.
As legal battles mount, the tension between AI innovation and creators’ rights remains a heated frontier.