January 10, 2025 | Press release
Microsoft's Digital Crimes Unit (DCU) is taking legal action to ensure the safety and integrity of our AI services. In a complaint unsealed in the Eastern District of Virginia, we are pursuing an action to disrupt cybercriminals who develop tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content. Microsoft continues to go to great lengths to enhance the resilience of our products and services against abuse; however, cybercriminals remain persistent and relentlessly innovate their tools and techniques to bypass even the most robust security measures. With this action, we are sending a clear message: the weaponization of our AI technology by online actors will not be tolerated.
Microsoft's AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.
This activity directly violates U.S. law and the Acceptable Use Policy and Code of Conduct for our services. Today's unsealed court filings are part of an ongoing investigation into the creators of these illicit tools and services. Specifically, the court order has enabled us to seize a website instrumental to the criminal operation that will allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find. At the same time, we have added additional safety mitigations targeting the activity we have observed and will continue to strengthen our guardrails based on the findings of our investigation.
Every day, individuals leverage generative AI tools to enhance their creative expression and productivity. Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes. Microsoft recognizes the role we play in protecting against the abuse and misuse of our tools as we and others across the sector introduce new capabilities. Last year, we committed to continuing to innovate on new ways to keep users safe and outlined a comprehensive approach to combat abusive AI-generated content and protect people and communities. This most recent legal action builds on that promise.
Beyond legal actions and the perpetual strengthening of our safety guardrails, Microsoft continues to pursue additional proactive measures and partnerships with others to tackle online harms while advocating for new laws that provide government authorities with the tools necessary to effectively combat the abuse of AI, particularly when it is used to harm others. Microsoft recently released an extensive report, "Protecting the Public from Abusive AI-Generated Content," which sets forth recommendations for industry and government to better protect the public, and specifically women and children, from actors with malign motives.
For nearly two decades, Microsoft's DCU has worked to disrupt and deter cybercriminals who seek to weaponize the everyday tools consumers and businesses have come to rely on. Today, the DCU builds on this approach and is applying key learnings from past cybersecurity actions to prevent the abuse of generative AI. Microsoft will continue to do its part by looking for creative ways to protect people online, transparently reporting on our findings, taking legal action against those who attempt to weaponize AI technology, and working with others across public and private sectors globally to help all AI platforms remain secure against harmful abuse.
Tags: AI, cybercrime, cybercriminals, cybersecurity, DCU, generative AI, Microsoft Digital Crimes Unit