Amy Klobuchar

08/20/2025 | Press release

ICYMI – Klobuchar Op-Ed in The New York Times: What I Didn’t Say About Sydney Sweeney

WASHINGTON - Senator Amy Klobuchar (D-MN) published an op-ed in The New York Times chronicling her recent experience being deepfaked, the risks of AI-generated videos, and the need to put rules of the road in place, including protections for people whose voices and likenesses are replicated through AI without their permission.

From the op-ed:

There's a centuries-old expression that "a lie can travel halfway around the world while the truth is still putting on its shoes." Today, a realistic deepfake - an A.I.-generated video that shows someone doing or saying something they never did - can circle the globe and land in the phones of millions while the truth is still stuck on a landline. That's why it is urgent for Congress to immediately pass new laws to protect Americans by preventing their likenesses from being used to do harm. I learned that lesson in a visceral way over the last month when a fake video of me - opining on, of all things, the actress Sydney Sweeney's jeans - went viral.

On July 30, Senator Marsha Blackburn and I led a Senate Judiciary subcommittee hearing on data privacy. We've both been leaders in the tech and privacy space and have the legislative scars to show for it. The hearing featured a wide-reaching discussion with five experts about the need for a strong federal data privacy law. It was cordial and even-keeled, with no partisan flare-ups. So I was surprised later that week when I noticed a clip of me from that hearing circulating widely on X, to the tune of more than a million views. I clicked to see what was getting so much attention.

That's when I heard my voice - but certainly not me - spewing a vulgar and absurd critique of an ad campaign for jeans featuring Sydney Sweeney. The A.I. deepfake featured me using the phrase "perfect titties" and lamenting that Democrats were "too fat to wear jeans or too ugly to go outside." Though I could immediately tell that someone had used footage from the hearing to make a deepfake, there was no getting around the fact that it looked and sounded very real.

As anyone would, I wanted the video taken down or at least labeled "digitally altered content." It was using my likeness to stoke controversy where it did not exist. It had me saying vile things. And while I would like to think that most people would be able to recognize it as fake, some clearly thought it was real. Studies have shown that people who see this type of content develop lasting negative views of the person in the video, even when they know it is fake.

X refused to take it down or label it, even though its own policy says users are prohibited from sharing "inauthentic content on X that may deceive people," including "manipulated, or out-of-context media that may result in widespread confusion on public issues." As the video spread to other platforms, TikTok took it down and Meta labeled it as A.I. However, X's response was that I should try to get a "Community Note" to say it was a fake, something the company would not help add.

For years I have been working to address a growing problem: Americans have extremely limited options to get unauthorized deepfakes taken down. But this experience of sinking hours of time and resources into limiting the spread of a single video made clear just how powerless we are right now. Why should tech companies' profits rule over our rights to our own images and voices? Why do their shareholders and C.E.O.s get to make more money from the spread of viral content at the expense of our privacy and reputations? And why are there no consequences for the people who actually make the unauthorized deepfakes and spread the lies?

This particular video does not in any way represent the gravest threat posed by deepfakes. In July, it was revealed that an impostor had used A.I. to pretend to be Secretary of State Marco Rubio and contacted at least three foreign ministers, a member of Congress and a governor. And this technology can turn the lives of just about anyone completely upside down. Last year, someone used A.I. to clone the voice of a high school principal in Maryland and create audio of him making racist and antisemitic comments. By the time the audio was proved to be fake, the principal had already been placed on administrative leave and families and students were left deeply hurt.

There is no way to quantify the chaos that could take place going forward without legal checks. Imagine a deepfake of a bank C.E.O. that triggers a bank run; a deepfake of an influencer telling children to use drugs; or a deepfake of a U.S. president starting a war that triggers attacks on our troops. The possibilities are endless. With A.I., the technology has gotten ahead of the law, and we can't let it go any further without rules of the road.

As complicated as this technology is, some solutions are within reach. Earlier this year, President Trump signed the TAKE IT DOWN Act, which Senator Ted Cruz and I championed to create legal protections for victims when intimate images, including deepfakes, are shared without their consent. This law addresses the rise in cases of predators using A.I. tools to create nude images of victims to humiliate or extort them. We know the consequences of this can be deadly - at least 20 children have died by suicide recently because of the threat of explicit images being shared without their consent.

That bill was only the first step. That is why I am again working across the aisle on a bill to give all Americans more control over how deepfakes of our voices and visual likenesses are used. The proposed bipartisan NO FAKES Act, cosponsored by Senators Chris Coons, Marsha Blackburn, Thom Tillis and me, would give people the right to demand that social media companies remove deepfakes of their voice and likeness, while making exceptions for speech protected by the First Amendment.

Read the full op-ed here.

###
