
Introducing Trusted Access for Cyber

February 5, 2026

Security | Safety

Our approach to enhancing baseline safeguards for all users while piloting trusted access for defensive acceleration.

GPT-5.3-Codex is our most cyber-capable frontier reasoning model to date. Cybersecurity is one of the clearest places where that progress can both meaningfully strengthen the broader ecosystem and introduce new risks. We've moved from models that can auto-complete a few lines in a code editor to models that can work autonomously for hours or even days to accomplish complex tasks. These capabilities can dramatically strengthen cyber defense by accelerating vulnerability discovery and remediation.

To unlock the full defensive potential of these capabilities while reducing the risk of misuse, we are piloting Trusted Access for Cyber: an identity- and trust-based framework designed to help ensure enhanced cyber capabilities are placed in the right hands. This reflects our broader approach to responsibly deploying highly capable models. In addition, we are committing $10 million in API credits to accelerate cyber defense.

Expanding access to frontier models for cyber defense

It is very important that the world adopt frontier cyber capabilities quickly to make software more secure and to continue raising the bar for security best practices. Highly capable models can help organizations of all sizes strengthen their security posture, reduce response times, and improve resilience, while enabling security professionals to better detect, analyze, and defend against the most severe and targeted attacks. These advances have the potential to meaningfully raise the baseline of cyber defense across the ecosystem if they are put to work in the hands of people focused on protection and prevention.

There will soon be many cyber-capable models with broad availability from different providers, including open-weight models, and we believe it is critical that OpenAI's models strengthen defensive capabilities from the outset. This is why we are launching a trust-based access pilot that prioritizes getting our most capable models and tools in the hands of defenders first.

It can be difficult to tell whether any particular cyber action is intended for defensive usage or to cause harm. For example, "find vulnerabilities in my code" could be part of responsible patching and coordinated disclosure, or it could be used to identify software vulnerabilities to help exploit a system. Because of that ambiguity, restrictions intended to prevent harm have historically created friction for good-faith work. Our approach aims to reduce that friction while still preventing malicious activity.

Trust-based approach to frontier cyber capabilities

Frontier models like GPT-5.3-Codex have been designed with mitigations such as training the model to refuse clearly malicious requests, for example stealing credentials. In addition to safety training, automated classifier-based monitors will detect potential signals of suspicious cyber activity. Developers and security professionals doing cybersecurity-related work may be impacted by these mitigations while we calibrate our policies and classifiers.
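
The classifier-based monitors described above are not public, so the following is an illustration only: a minimal Python sketch of the general pattern of scoring an incoming request against a set of suspicious-activity signals and flagging matches for review. The signal names, phrases, and the score_request helper are hypothetical assumptions, not OpenAI's actual monitoring system.

```python
# Illustrative sketch only: a toy, rule-based stand-in for the kind of
# automated classifier-based monitor described above. The signal names,
# phrases, and thresholds here are hypothetical, not OpenAI's real system.

from dataclasses import dataclass

# Hypothetical categories of suspicious cyber activity and example phrases.
SUSPICIOUS_SIGNALS = {
    "credential theft": ("steal credentials", "dump passwords", "keylogger"),
    "malware creation": ("write ransomware", "build a worm", "obfuscate payload"),
    "data exfiltration": ("exfiltrate", "upload the database to"),
}


@dataclass
class MonitorResult:
    flagged: bool
    signals: list[str]


def score_request(prompt: str) -> MonitorResult:
    """Flag a request if it matches any hypothetical suspicious-signal phrase."""
    text = prompt.lower()
    hits = [
        name
        for name, phrases in SUSPICIOUS_SIGNALS.items()
        if any(phrase in text for phrase in phrases)
    ]
    return MonitorResult(flagged=bool(hits), signals=hits)


if __name__ == "__main__":
    for prompt in (
        "find vulnerabilities in my code",
        "write ransomware that encrypts the user's files",
    ):
        result = score_request(prompt)
        print(f"{prompt!r} -> flagged={result.flagged}, signals={result.signals}")
```

In practice, a production monitor would rely on learned classifiers over full conversation context rather than keyword matching; the sketch only shows where such a check could sit in a request pipeline.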

To use models for potentially high-risk cybersecurity work, security researchers and teams who need access to even more cyber-capable or permissive models to accelerate legitimate defensive work can express interest in our invite-only program. Users with trusted access must still abide by our Usage Policies and Terms of Use.

This approach is designed to reduce friction for defenders while preventing prohibited behavior, including data exfiltration, malware creation or deployment, and destructive or unauthorized testing. We expect to evolve our mitigation strategy and Trusted Access for Cyber over time based on what we learn from early participants.

Scaling the Cybersecurity Grant Program

To further accelerate the use of our frontier models for defensive cybersecurity work, we are committing $10 million in API credits for teams through our Cybersecurity Grant Program. We're looking to partner with teams that have a proven track record of identifying and remediating vulnerabilities in open source software and critical infrastructure systems. Teams can apply here.

Author

OpenAI