Kirsten E. Gillibrand

Gillibrand, Colleagues Demand Tech Giants Take Down Sexualized AI Images, Protect Minors

Jan 16, 2026

U.S. Senator Kirsten Gillibrand joined seven Democratic colleagues in calling on America's largest tech and social media companies to address the rise of non-consensual, sexualized, AI-generated images being created and circulated on their platforms.

In a letter to the companies, the senators expressed alarm over the spread of this content and asked for more information on how platforms plan to remove such images, prevent their distribution, and notify victims.

Since these images began appearing on social media platforms, reports have found that Grok was generating roughly one non-consensual, sexualized image per minute. The creation of these AI-generated images may violate U.S. laws against child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) of adults.

"Americans should be able to post images of themselves and their children online without fear of fake, sexualized images being produced by bullies and pedophiles," said Senator Gillibrand. "Platforms have a responsibility to stop their AI tools from being used to harass, exploit, and endanger people. We must work together to remove this content, shut down abusers, and protect Americans and minors from harm. I will continue fighting to protect New Yorkers online and stop these platforms from becoming safe havens for exploitation and criminal abuse."

In addition to Senator Gillibrand, the letter to social media companies was signed by Senators Lisa Blunt Rochester (D-DE), Tammy Baldwin (D-WI), Richard Blumenthal (D-CT), Mark Kelly (D-AZ), Ben Ray Luján (D-NM), Brian Schatz (D-HI), and Adam Schiff (D-CA).

"We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized 'bikini' or 'non-nude' images of individuals without their consent and distributing them on platforms including X and others," the senators wrote. "These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.

"Recent reporting has identified large volumes of AI-generated content depicting what appear to be underage girls in sexualized outfits or suggestive poses circulating on social platforms, sometimes attracting substantial engagement despite stated platform prohibitions," the senators continued. "This report also found that accounts linked to off-platform groups selling illegal material, suggesting monetization pathways for AI-facilitated sexual exploitation. These developments point to a broader crisis of image-based abuse, amplified by AI, that undermines user trust and platform integrity."

"As policymakers, we are working to address this issue for our constituents," the senators concluded. "Protecting the privacy, dignity, and safety of individuals, especially women and minors who are frequent targets, is a responsibility shared by platforms, policymakers, and the broader ecosystem."

The full letter can be found below.

To the Heads of Alphabet, Meta, X, TikTok, Snap, and Reddit:

We are alarmed by reports of users exploiting generative AI tools to produce sexualized "bikini" or "non-nude" images of individuals without their consent and distributing them on platforms including X and others. A recent WIRED report described users taking photos of fully clothed women and using AI chatbots to "undress" them into bikini-clad deepfakes, including by exchanging tips to bypass content filters. These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.

The WIRED report describes incidents that underscore the scope of this problem. In one instance, a Reddit user requested that a photo of a woman wearing a sari be altered to appear as though she were wearing a bikini, and another user promptly produced and shared the manipulated image. This occurred in a community dedicated to evading AI safety measures, which was later removed for policy violations.

In addition, during the last week of December, X was filled with requests for Grok, its AI platform, to create non-consensual bikini photos based on users' uploaded images. While platforms may remove content once alerted, the ease with which users can generate and distribute these images highlights how generative AI is being misused to target individuals, disproportionately women, on social media.

We are also troubled by reports that minors are being targeted. Recent reporting has identified large volumes of AI-generated content depicting what appear to be underage girls in sexualized outfits or suggestive poses circulating on social platforms, sometimes attracting substantial engagement despite stated platform prohibitions. This report also found that accounts linked to off-platform groups are selling this illegal material, suggesting monetization pathways for AI-facilitated sexual exploitation. These developments point to a broader crisis of image-based abuse, amplified by AI, that undermines user trust and platform integrity.

We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as the examples above show, users are finding ways around these guardrails, or the guardrails are simply failing. Even where outputs do not depict explicit nudity, they can still be non-consensual, sexualizing, and harmful. The public deserves transparency.

As policymakers, we are working to address this issue for our constituents. To better understand your current and planned efforts to curb the rise of non-nude sexualized deepfakes on your platforms, we request additional detail on the steps you are taking now and intend to take going forward. In addition, we want to confirm that robust protections and policies are in place. Please provide the following information and documents no later than January 29, 2026:

  1. Official policy definitions of "deepfake" content, "non-consensual intimate imagery," or similar terms, and whether these definitions explicitly cover sexually suggestive but non-nude content.
  2. A description of your policy and enforcement approach for non-consensual sexualized AI manipulations that are non-nude, including but not limited to altered clothing, body-shape edits, and "virtual undressing" that stops short of explicit nudity.
  3. Documents sufficient to describe your current content policies addressing manipulated media and sexually implied or explicit content, including but not limited to terms of service sections and internal guidance used by moderators.
  4. Documents sufficient to describe your current policies governing AI tools such as chatbots or image generators related to sexually suggestive or intimate content.
  5. A description of the preventive technical measures or guardrails you have implemented to prevent the creation or distribution of non-consensual deepfakes, such as prompt filtering, blocking of image-edit requests, or automated detection.
  6. A description of how you proactively identify deepfake content, including whether you use hashing or fingerprinting to prevent re-uploads of known abusive images and videos.
  7. A description of how you prevent users or accounts from profiting from non-consensual AI-generated sexual content on your platform (for example, ads, subscriptions, paid groups, affiliate links, or referral funnels), and a description of how you prevent your platform from inadvertently monetizing such content.
  8. A description of your terms of service related to temporarily or permanently removing users from your platform for posting or sharing violative deepfake content.
  9. A description of your practices related to notifying victims when you identify non-consensual sexual deepfakes targeting an individual.

Your prompt attention to these questions is appreciated. This letter also serves as a formal request to preserve all documents and information, including but not limited to emails, text messages, internal chat logs, meeting notes, product requirements, risk assessments, enforcement guidance, and policy drafts, relating to the creation, detection, moderation, monetization, or policies regarding non-consensual sexual AI-generated manipulated images on your platforms. This preservation request covers both past and current efforts, as well as any planned or in-development measures responsive to the issues raised in this letter.

Thank you for your attention to this matter. We look forward to your response and to identifying practical steps to protect users from non-consensual, sexually exploitative AI-manipulated imagery. Protecting the privacy, dignity, and safety of individuals, especially women and minors who are frequent targets, is a responsibility shared by platforms, policymakers, and the broader ecosystem.

###
