01/15/2026 | Press release
Washington, D.C. - U.S. Senator Lisa Blunt Rochester (D-Del.), a member of the Senate Commerce, Science, and Transportation Committee, led seven of her colleagues in sending a letter to some of America's largest tech and social media companies addressing the rise of non-consensual, sexualized, AI-generated images appearing on their platforms. The Senators pose critical questions about the platforms' policies on sexualized images, including those targeting minors, and seek clarity on their efforts to remove such images, prevent their distribution, and notify victims. In addition to Senator Blunt Rochester, the letter was signed by Senators Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).
Since these images began appearing on social media platforms, reports have found that Grok generated roughly one non-consensual, sexualized image per minute. The creation of these AI-generated images may violate U.S. laws against child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) of adults.
"We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized 'bikini' or 'non-nude' images of individuals without their consent and distributing them on platforms including X and others," the Senators wrote. "These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.
"Recent reporting has identified large volumes of AI-generated content depicting what appear to be underage girls in sexualized outfits or suggestive poses circulating on social platforms, sometimes attracting substantial engagement despite stated platform prohibitions," the Senators continued. "This report also found that accounts linked to off-platform groups are selling this illegal material, suggesting monetization pathways for AI-facilitated sexual exploitation. These developments point to a broader crisis of image-based abuse, amplified by AI, that undermines user trust and platform integrity."
"As policymakers, we are working to address this issue for our constituents," the Senators concluded. "Protecting the privacy, dignity, and safety of individuals, especially women and minors who are frequent targets, is a responsibility shared by platforms, policymakers, and the broader ecosystem."
The full letter can be found below.
To the Heads of Alphabet, Meta, X, TikTok, Snap, and Reddit:
We are alarmed by reports of users exploiting generative AI tools to produce sexualized "bikini" or "non-nude" images of individuals without their consent and distributing them on platforms including X and others. A recent WIRED report described users taking photos of fully clothed women and using AI chatbots to "undress" them into bikini-clad deepfakes, including by exchanging tips to bypass content filters. These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.
The WIRED report describes incidents that underscore the scope of this problem. In one instance, a Reddit user requested that a photo of a woman wearing a sari be altered to appear as though she were wearing a bikini, and another user promptly produced and shared the manipulated image. This occurred in a community dedicated to evading AI safety measures, which was later removed for policy violations.
In addition, during the last week of December, X was flooded with requests for Grok, its AI chatbot, to create non-consensual bikini photos based on users' uploaded images. While platforms may remove content once alerted, the ease with which users can generate and distribute these images highlights how generative AI is being misused to target individuals, disproportionately women, on social media.
We are also troubled by reports that minors are being targeted. Recent reporting has identified large volumes of AI-generated content depicting what appear to be underage girls in sexualized outfits or suggestive poses circulating on social platforms, sometimes attracting substantial engagement despite stated platform prohibitions. This report also found that accounts linked to off-platform groups are selling this illegal material, suggesting monetization pathways for AI-facilitated sexual exploitation. These developments point to a broader crisis of image-based abuse, amplified by AI, that undermines user trust and platform integrity.
We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as the examples above show, users are finding ways around these guardrails, or the guardrails are failing outright. Even where outputs do not depict explicit nudity, they can still be non-consensual, sexualizing, and harmful. The public deserves transparency.
As policymakers, we are working to address this issue for our constituents. To better understand your current and planned efforts to curb the rise of non-nude sexualized deepfakes on your platforms, we request additional detail on the steps you are taking now and intend to take going forward. In addition, we want to confirm that robust protections and policies are in place. Please provide the following information and documents no later than January 29, 2026:
Your prompt attention to these questions is appreciated. This letter also serves as a formal request to preserve all documents and information, including but not limited to emails, text messages, internal chat logs, meeting notes, product requirements, risk assessments, enforcement guidance, and policy drafts, relating to the creation, detection, moderation, monetization, or policies regarding non-consensual sexual AI-generated manipulated images on your platforms. This preservation request covers both past and current efforts, as well as any planned or in-development measures responsive to the issues raised in this letter.
Thank you for your attention to this matter. We look forward to your response and to identifying practical steps to protect users from non-consensual, sexually exploitative AI-manipulated imagery. Protecting the privacy, dignity, and safety of individuals, especially women and minors who are frequent targets, is a responsibility shared by platforms, policymakers, and the broader ecosystem.
###
Senator Lisa Blunt Rochester represents Delaware in the United States Senate where she serves on the Committees on Banking, Housing, and Urban Affairs; Commerce, Science, and Transportation; Environment and Public Works; and Health, Education, Labor, and Pensions.