April 23, 2026
As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we're introducing a Bio Bug Bounty for GPT-5.5 and accepting applications. We're inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.
Submit a short application here (name, affiliation, experience) by June 22, 2026. Applicants and collaborators must have existing ChatGPT accounts, and accepted participants will sign an NDA. Apply now and help us make frontier AI safer.
If you're interested in supporting OpenAI's work to deliver safe and secure artificial intelligence beyond the Bio Bug Bounty program, you can learn about our Safety Bug Bounty and Security Bug Bounty programs.