Whispers followed her offline. Online, the abuse exploded, unchecked: comments, ridicule, shares, screenshots. She had never consented to any of it. That hadn't stopped anyone.
Within minutes, thousands had seen the content. Within hours, millions.
The nightmare had only begun.
Days passed before platforms responded. By then, the images had been seen, saved, and replicated. She was left asking: Who do I report this to? Will anyone believe me? Will the people who did this ever face consequences? Or will the blame land on me?
This is the reality for thousands of women and girls every single day. AI deepfakes are destroying real lives, and justice remains out of reach for most survivors.
Her story could be yours.
Deepfake abuse is the sharp edge of a much broader pattern of digital violence targeting women and girls. It's gendered and it's escalating. Right now, the systems designed to protect people are failing, while the tools to cause harm become cheaper, faster and easier to use every day.
Deepfakes are images, audio or videos manipulated by artificial intelligence (AI) that make it appear someone said or did something they never did.
The technology itself isn't new, but its weaponisation against women and girls is a newer phenomenon, and it's accelerating fast.
Underreporting is one of the biggest barriers to accountability. For survivors who do come forward, the justice system often becomes another source of trauma.
Despite the scale of harm, prosecutions are rare, platforms routinely fail to act and survivors are often re-traumatised when they try to seek help. Here's why:
The law hasn't caught up: less than half of countries have laws that address online abuse, and even fewer have legislation that specifically covers AI-generated deepfake content.
Enforcement is lagging: even when laws exist, investigators need digital forensics expertise, cross-border coordination and platform cooperation to build a case, and most justice systems lack adequate resources for any of these.
Tech platforms are failing survivors: they have long hidden behind "intermediary" status to avoid responsibility for user-generated content, maintain opaque and inconsistent reporting processes, issue automated rejections of takedown requests and offer little to no cooperation with law enforcement.
While there are a number of nations and regions taking action (see text box below), stopping deepfake abuse requires urgent, coordinated action from governments, institutions and tech platforms.
Here are five things that need to happen:
Governments must pass legislation with clear definitions of AI-generated abuse, a focus on consent, strict liability for perpetrators, fast-track removal obligations for platforms and cross-border enforcement protocols.
Law enforcement needs training, resources and dedicated capacity to collect and preserve digital evidence; digital forensics backlogs must be cleared; and international cooperation frameworks must become fast, functional and fit for purpose.
Tech companies must be legally required to proactively monitor for and remove abusive content within mandatory timelines, cooperate with law enforcement and face real financial consequences when they fail to act.
Survivors should have access to trained, trauma-informed law enforcement and legal professionals, as well as free legal aid.
Digital literacy, including consent education, online safety, and what to do when experiencing abuse, needs to start young and reach everyone, because prevention is as important as prosecution.
UN Women warns this is not a niche internet problem: "It is a global crisis."
Meanwhile, a handful of jurisdictions are starting to act: