03/25/2026 | News release | Distributed by Public on 03/25/2026 11:27
Testing for safety and abuse issues across OpenAI
Today, OpenAI is launching a public Safety Bug Bounty program focused on identifying AI abuse and safety risks across our products. As AI technology rapidly evolves, so do the potential ways it can be misused. Our goal is to ensure our systems remain safe and secure against misuse or abuse that could lead to tangible harm.
This new program will complement OpenAI's Security Bug Bounty by accepting issues that pose meaningful abuse and safety risks, even if they don't meet the criteria for a security vulnerability. Through this program, we look forward to continuing to partner with safety and security researchers to help us identify and address issues that fall outside conventional security vulnerabilities but still pose real risks. Submissions will be triaged by OpenAI's Safety and Security Bug Bounty teams, and may be rerouted between the two programs depending on scope and ownership.
The new Safety Bug Bounty program focuses on the AI-specific safety scenarios listed below:
Agentic Risks including MCP
OpenAI Proprietary Information
Account and Platform Integrity
While jailbreaks are out of scope for this program, we periodically run private bug bounty campaigns focused on certain harm types, such as biorisk content issues in ChatGPT Agent and GPT-5. We invite interested researchers to apply to these programs when they arise.
Outside of the categories listed above, if researchers identify flaws that create a direct path to user harm and come with actionable, discrete remediation steps, these may be considered in scope for rewards on a case-by-case basis. General content-policy bypasses without demonstrable safety or abuse impact are out of scope for this program. For example, "jailbreaks" that result in the model using rude language or returning information that is easily findable via search engines are out of scope.
Researchers interested in participating can apply through our Safety Bug Bounty program. We look forward to working alongside researchers, ethical hackers, and the safety and security community in the pursuit of a secure AI ecosystem.