05/08/2025 | Press release
It was a slow Friday afternoon in July when a seemingly isolated problem appeared on the radar of Phillip Misner, head of Microsoft's AI Incident Detection and Response team. Someone had stolen a customer's unique access code for an AI image generator and was circumventing safeguards to create sexualized images of celebrities.
Misner and his coworkers revoked the code but soon saw more stolen customer credentials, or API keys, pop up on an anonymous message board known for spreading hateful material. They escalated the issue into a company-wide security response in what has now become Microsoft's first legal case to stop people from creating harmful AI content.
"We take the misuse of AI very seriously and recognize the harm of abusive images to victims," Misner says.
Court documents detail how Microsoft is dismantling a global network alleged to have created thousands of abusive AI images of celebrities, women and people of color. Many of the images were sexually explicit, misogynistic, violent or hateful.
The company says the network, dubbed Storm-2139, includes six people who built tools to break into Azure OpenAI Service and other companies' AI platforms in a "hacking-as-a-service scheme." Four of those people - located in Iran, England, Hong Kong and Vietnam - are named as defendants in Microsoft's civil complaint filed in the U.S. District Court for the Eastern District of Virginia. The complaint alleges another 10 people used the tools to bypass AI safeguards and create images in violation of Microsoft's terms of use.
"This case sends a clear message that we do not tolerate the abuse of our AI technology," says Richard Boscovich, assistant general counsel for the company's Digital Crimes Unit (DCU). "We are taking down their operation and serving notice that if anyone abuses our tools, we will go after you."
Keeping people safe online
The lawsuit is part of the company's longtime work in fostering digital safety, from responding to cyberthreats and disrupting criminals to building safe and secure AI systems. The efforts include working with lawmakers, advocates and victims to protect people from explicit images shared without their consent, regardless of whether the images are real or were created or modified with AI.
"This kind of image abuse disproportionately targets women and girls, and the era of AI has fundamentally changed the scale at which it can happen," says Courtney Gregoire, vice president and chief digital safety officer at Microsoft. "Core to our approach in digital safety is listening to those who've been impacted negatively by technology and taking a multi-layered approach to mitigate harm."
Soon after DCU filed its initial complaint in December, it seized a website, blocked the activity and continued building its case. The lawsuit prompted network members to turn on each other, share the case lawyers' emails and send anonymous tips casting blame. That helped investigators name defendants in court as part of a public strategy to deter other AI abusers. An amended complaint in February led to more network chatter and evidence for the team's ongoing investigation.
"The pressure heated up on this group, and they started giving up information on each other," says Maurice Mason, a principal investigator with DCU.