AI on trial: copyright, paywalls, and publisher power

On June 23, U.S. District Judge William Alsup ruled in Bartz, et al. v. Anthropic (N.D. Cal.) that Anthropic's use of copyrighted works to train its AI models was "exceedingly transformative," finding in Anthropic's favor on three of the four fair use factors.

While Anthropic praised the mostly favorable ruling, it was left to contend with Judge Alsup's finding that the company's use of pirated material was not fair use. Drawing a firm line in the sand, he called the practice "inherently, irredeemably infringing even if the pirated copies are immediately used for the transformative use and immediately discarded." He also warned of the potential for "a trial on the pirated copies used to create Anthropic's central library and the resulting damages."

However, it wasn't that warning that set off what Professor Edward Lee called "a legal fight for [Anthropic's] very existence." Rather, it was a move from Judge Alsup that arrived nearly a month after the highly publicized Bartz decision. And this move has significant implications for media companies navigating the uncharted territory of generative AI.

The $1.5 trillion question

On July 17, Judge Alsup granted class certification to the authors whose pirated books Anthropic obtained in Bartz, et al. v. Anthropic (N.D. Cal.) (specifically, the LibGen and PiLiMi classes were certified, while the Books3 class was not). In his ruling, Judge Alsup found that members of the class can be identified from data maintained by Anthropic, and that the authors' claims and Anthropic's defenses raise common issues. He also found that the authors need class certification to face Anthropic's "formidable" resources.

In certifying this class, Judge Alsup exposed Anthropic to what the company argues are "ruinous statutory damages." With statutory damages that could reach $150,000 per copyrighted work, and a class that could encompass up to 7 million copyrighted works, Anthropic could soon face a $1.5 trillion payout to class members, a sum nearly 25 times its estimated market value.

What it means for the AI and media industries

On July 14, Anthropic filed a motion asking Judge Alsup either to allow an appeal to the 9th Circuit or to reconsider his judgment on the piracy question in light of the recent ruling in Kadrey v. Meta Platforms, Inc. by District Judge Vince Chhabria. In it, the company clearly laid out the industry's fear that, if Judge Alsup's judgment on piracy were widely adopted, "training by any company that downloaded works from third-party websites like LibGen or Books3 could constitute copyright infringement." This appears to confirm that training AI models on pirated content is an industry-wide practice. It also suggests the practice is common enough to raise concerns of significant legal liability across the industry.

This fear, combined with the class certification, offers a possible roadmap for future litigation over AI and copyright. Cases going forward, and even ones already underway, will surely include arguments that the way AI companies obtained their training data amounts to piracy. And facing that legal exposure, AI companies will have a stronger incentive to reach licensing agreements with publishers and creators.

Where do we go from here?

Media executives can capitalize on this newfound concern to secure new licensing agreements, and they can and should ensure that those agreements are as beneficial to publishers and creators as possible. Digital publishers in particular must seize this moment to strengthen the argument that bypassing their paywalls is tantamount to piracy, and to continue gathering and sharing data on the prevalence of bots that bypass paywalls to scrape content for AI training (a minimal measurement sketch follows below).
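To make that measurement point concrete, here is a minimal sketch of the kind of evidence-gathering a publisher might do: counting requests from self-identified AI-training crawlers in a standard web server access log. It is illustrative only; the log path and Combined Log Format assumption are placeholders, the crawler list is non-exhaustive, and bots that spoof or omit their user agents would not appear in such a tally.

    import re
    from collections import Counter
    from pathlib import Path

    # User-agent substrings of well-known, publicly documented AI crawlers
    # (a non-exhaustive, illustrative list).
    AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot", "Bytespider")

    # Combined Log Format lines end with: "referer" "user-agent"
    USER_AGENT_RE = re.compile(r'"([^"]*)"\s*$')

    def tally_ai_crawlers(log_path: Path) -> Counter:
        """Count requests per AI crawler named in an access log's user agents."""
        counts = Counter()
        with log_path.open(errors="replace") as log:
            for line in log:
                match = USER_AGENT_RE.search(line)
                if not match:
                    continue
                user_agent = match.group(1)
                for bot in AI_CRAWLERS:
                    if bot in user_agent:
                        counts[bot] += 1
                        break
        return counts

    if __name__ == "__main__":
        # "access.log" is a placeholder path for this sketch.
        for bot, hits in tally_ai_crawlers(Path("access.log")).most_common():
            print(f"{bot}: {hits} requests")

Even simple tallies like this, aggregated and shared across publishers, can document the scale of scraping on which licensing negotiations and piracy arguments rest.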

To formulate and secure effective licensing agreements, legal and policy work remains to be done. The Digital Millennium Copyright Act (DMCA) presents both opportunities and challenges in this respect. In her NYU Law Review essay "Yes, It's Illegal to Cheat a Paywall," Theresa M. Troupson argues that the 9th Circuit's interpretation applied in MDY Industries, LLC v. Blizzard Entertainment, Inc. (in which the Court found that Congress created an "anti-circumvention right" under the DMCA) "creates new liability for the act of circumvention alone, regardless of any connection to the traditional exclusive rights of the copyright holder."

This interpretation of the DMCA has not yet been widely adopted by the courts. However, Troupson's argument that "a user who circumvents a newspaper paywall to read an article incurs liability under the statute for the very act of circumvention" undeniably provides a path forward for publishers and media companies in what may be a new, piracy-focused chapter of the ongoing AI and copyright saga.

While at first glance the Bartz decision hands AI companies a significant victory that validates their training practices, it also gives publishers and media companies an opportunity to reframe their arguments for protecting copyrighted material. The media and publishing industry will have to contend with the outcome of the upcoming trial, as well as with the fair use precedent set by Judge Chhabria in Kadrey, which other courts are likely to follow. But the piracy argument introduced in Bartz could deliver long-sought victories for the industry.

The views expressed do not necessarily reflect the views of DGA Group or its clients.