11/06/2025 | News release | Distributed by Public on 11/06/2025 06:52
The Chamber of Progress has asked the Trump Administration to intervene to ensure that all AI training is considered "fair use." You read that right: They're asking the White House to declare that every use of copyrighted material to train artificial intelligence systems is lawful, no matter the circumstances.
It's a radical proposal that would reward the largest technology platforms at the expense, if not demise, of publishers and creators. In the pursuit of unbridled profit, the Chamber is seeking to overturn more than two centuries of copyright law that has served our country well. U.S. copyright protections have long struck a balance between creators' rights and technological progress, ensuring that those who invest in producing art, journalism, music, film, and literature can be fairly compensated while still allowing for reasonable uses that advance learning and innovation.
Declaring all AI training "fair use" would blow up that balance. It would amount to a government-granted blank check for Silicon Valley's biggest players to strip-mine the creative economy. And while some of the Chamber's backers may cheer that result, it's important to note that not all technology companies share that view. Microsoft, for example, reportedly told publishers just last month, "You deserve to be paid on the quality of your IP." (Note that Microsoft apparently chooses not to be associated with the Chamber of Progress.)
So, let's take a moment to refresh our collective memory about what fair use actually is, as well as what it is not.
The concept of fair use is baked into U.S. copyright law. It provides limited exceptions for certain uses of copyrighted works without permission for purposes like criticism, commentary, news reporting, teaching, or scholarship. But whether a use is "fair" depends on a careful, case-by-case balancing test.
The law identifies four factors: the purpose and character of the use, including whether it is commercial or transformative; the nature of the copyrighted work; the amount and substantiality of the portion used; and the effect of the use on the potential market for, or value of, the original work.
Each factor must be weighed. Fair use was designed to be flexible, not absolute, and it should be wielded like a surgical tool, not a sledgehammer.
Courts are still working through the major questions of how copyright law should apply. But, in the two most recent cases, judges ruled, for different reasons, that AI models were likely developed unlawfully. In Bartz v. Anthropic, Judge Alsup held that training AI systems using lawfully acquired books could be "spectacularly transformative," comparing it to "training schoolchildren to write." It's worth noting that this case concerned books and not content like news articles where the potential for substitution is much greater. But even in that decision, he drew a bright line against using pirated or illegally obtained material, saying that would not qualify as fair use.
In the same district in Kadrey v. Meta, Judge Chhabria took a very different view barely 24 hours later. While he ultimately ruled for Meta, it was only because the plaintiffs couldn't yet show actual market harm. Importantly, the court rejected Alsup's "schoolchildren" analogy, calling it "inapt," and acknowledged that generative AI poses a qualitatively different threat to human authorship, particularly because it can flood the market with AI-generated substitutes for real creative work. His decision suggested that proving tangible market harm is key to overcoming the fair use defense.
Together, these early cases show that the courts are highly skeptical of AI companies' legal claims and that fair use in the AI era is anything but a settled question.
The cases now moving through the courts could reshape the entire landscape. The New York Times v. OpenAI is poised to be the most consequential yet. The Times alleges that OpenAI violated its terms of use, copied and reproduced its journalism without permission, and even regurgitated near-verbatim passages from Times stories in its outputs. In March 2025, the judge largely denied OpenAI's partial motion to dismiss.
Similar suits from Disney, NBC Universal, Warner Bros. Discovery, and others allege that AI systems like Midjourney and Minimax have infringed on copyrighted characters and images, using them as raw material to generate new (and often derivative) outputs. These cases go beyond questions of data ingestion and look squarely at what the machines produce. When AI outputs contain or imitate protected creative expression, or produce outputs that can substitute for the original works, the argument that "training" is obviously a fair use becomes untenable.
That's what makes these lawsuits so strong: they don't rely on abstract theories about future market harm. They show the receipts by offering specific examples of copyrighted material appearing in AI-generated outputs or showing that outputs are otherwise substitutive, clear evidence that these tools are not merely "learning" but supplanting protected works.
Which brings us back to the Chamber of Progress and their remarkable plea for a government blank check. If the law were really on their side, they wouldn't need the President to intervene. The truth is, they're nervous. And they should be.
The Chamber represents the largest AI and tech firms in America, companies valued in the trillions of dollars, and those companies want to maintain their margins and multiples no matter the cost to other long-standing and highly valuable segments of our economy. If courts continue to recognize that AI training and outputs can infringe on copyrighted works, Big Tech will have to negotiate more licenses and continue paying creators. And, by the way, more licensing agreements could actually prove helpful to AI systems by ensuring their products have reliable access to accurate, fact-checked content. No matter how the Chamber tries to spin it, that's not "anti-innovation." That's accountability.
The Chamber's proposed outcome would obliterate that accountability, retroactively blessing a decade of mass data scraping and granting legal immunity to the industry for whatever it does next. It's an act of desperation masquerading as policy. It's impunity masquerading as progress.
For two centuries, copyright law has powered one of the most dynamic creative economies in the world. It protects authors, journalists, musicians, filmmakers, and artists while still allowing room for innovation. The Chamber of Progress's proposal would dismantle that legacy overnight, transforming fair use from a balanced doctrine into a blanket permission slip.
As these cases move forward, the courts are doing their job: weighing evidence, applying the law, and adapting old principles to new technology. That's how progress is supposed to work in a democracy governed by the rule of law.
The Chamber may sense the writing on the wall. The creative industries are organized, the evidence is mounting, and the courts are increasingly skeptical of AI's "just-learning" defense. That's why they're now seeking the Administration's help to tilt the playing field in their favor.
Throughout history, new technology has tested the limits of copyright, from photocopiers to radio, television, and the internet. But the courts have a long track record of determining how emerging tools fit within existing law. Innovation and creativity thrive together only when both are respected. Protecting the rights of those who produce original work ensures that progress benefits everyone. And that's more than fair enough.