
Disrupting malicious uses of AI

February 25, 2026

Security

Our latest report features case studies of how we're detecting and preventing malicious uses of AI.

In the two years since we began publishing these threat reports, we have gained important insights into the ways threat actors attempt to abuse AI models. In particular, the case studies in this report, as in our earlier reports, illustrate how threat actors typically use AI in combination with other, more traditional tools such as websites and social media accounts. Threat activity is seldom limited to one platform; as our report on a Chinese influence operator shows, it is not always limited to one AI model. Rather, threat actors may use different AI models at various points in their operational workflow. We share these insights in our threat reports so that our industry, and wider society, can be better placed to identify and avoid such threats.

Read the full report here.

Author

OpenAI