What GPT-5.5-Cyber is and who gets it
OpenAI’s GPT-5.5-Cyber is a modified version of GPT-5.5 designed exclusively for cybersecurity professionals. Unlike the standard model, it removes safety guardrails that would otherwise restrict vulnerability identification, malware analysis, and patch validation workflows. Access is limited to participants in OpenAI’s Trusted Access for Cyber (TAC) program, which includes thousands of verified defenders and hundreds of teams responsible for critical infrastructure. Approved users must enable advanced account security for ChatGPT by June 1, 2026.
The model was previewed for U.S. government agencies, including the White House and the Commerce Department’s Center for AI Standards and Innovation, before its public announcement on May 7, 2026. OpenAI internally codenamed it "Spud" and describes it as a "more permissible version of GPT-5.5."
How it compares to Anthropic’s Mythos
GPT-5.5-Cyber enters a competitive landscape dominated by Anthropic’s Claude Mythos Preview, released in April 2026. Mythos was so effective at discovering and exploiting vulnerabilities that Anthropic restricted access to 40 organizations, including Apple, Amazon, and Microsoft, via Project Glasswing. The UK’s AI Security Institute evaluated both models and found they perform similarly on expert-level cyber tasks:
- GPT-5.5-Cyber: 71.4% average pass rate
- Claude Mythos: 68.6% average pass rate
Both models completed a complex corporate network attack simulation in a fraction of the time a human expert would need, a task estimated to take an expert around 20 hours. The institute’s assessment suggests that cyber-offensive capabilities are emerging as a byproduct of general AI improvements in reasoning, coding, and long-horizon autonomy, rather than from specialized training.
What defenders can do with it
GPT-5.5-Cyber builds on the capabilities of its predecessor, GPT-5.4-Cyber, which introduced binary reverse engineering and lowered refusal boundaries for security work. Key use cases for the new model include:
- Vulnerability identification: Automated scanning of codebases and networks for exploitable flaws.
- Malware analysis: Reverse engineering and behavioral analysis of malicious software.
- Patch validation: Testing security patches for effectiveness and unintended side effects.
- Autonomous attack simulation: Running red-team exercises to identify weaknesses in defenses.
Because the model can carry out long, multi-step tasks with minimal human oversight, it is suited to large-scale or time-sensitive security operations.
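The patch-validation use case above can be sketched in plain Python, independent of any model API: run a candidate patch against both the existing regression suite and the known exploit inputs, and confirm it blocks the exploit without breaking legitimate behavior. Every function name and input here is a hypothetical illustration, not part of GPT-5.5-Cyber or any OpenAI interface.

```python
# Minimal patch-validation harness, a conceptual sketch only: it checks
# that a candidate patch blocks known exploit inputs while still passing
# the regression suite. All names and inputs are hypothetical.

def patched_lookup(table: dict, key: str):
    # Candidate patch: treats the key as plain data instead of
    # evaluating it (the hypothetical flaw being fixed).
    return table.get(key)

def validate_patch(patched, regression_cases, exploit_inputs):
    """Return (regressions, unblocked_exploits) for a candidate patch."""
    regressions = [(args, expected)
                   for args, expected in regression_cases
                   if patched(*args) != expected]
    unblocked = []
    for args in exploit_inputs:
        try:
            if patched(*args) is not None:
                unblocked.append(args)  # exploit input still returns data
        except Exception:
            pass  # input rejected outright: the patch holds
    return regressions, unblocked

table = {"user": 42}
regression_cases = [((table, "user"), 42), ((table, "missing"), None)]
exploit_inputs = [(table, "__class__.__mro__")]  # hypothetical crafted key

regressions, unblocked = validate_patch(
    patched_lookup, regression_cases, exploit_inputs)
print("regressions:", regressions)        # []
print("unblocked exploits:", unblocked)   # []
```

In practice the model would generate and evaluate far larger case sets, but the acceptance criterion is the same: an empty regression list and an empty unblocked-exploit list.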
Tradeoffs and risks
OpenAI’s controlled release of GPT-5.5-Cyber reflects a broader industry trend: as AI models become more capable, their potential for both defensive and offensive cyber applications grows. The AI Security Institute warned that further increases in cyber capability are likely, given the rapid pace of general AI development. Key tradeoffs include:
- Access control: The model is restricted to vetted defenders, but the risk of leaks or misuse remains.
- Safety guardrails: Removing restrictions for legitimate security work could enable unintended offensive use if access is compromised.
- Arms race dynamics: As frontier labs release more capable models, adversarial actors may distill or replicate them for malicious purposes.
Anthropic’s head of cyber policy, Rob Bair, emphasized this risk at the AI+Expo, noting that "other frontier labs will come out with similar capabilities, which adversarial countries will then distill into other models that could be used against us."
How to get access
Defenders interested in GPT-5.5-Cyber must apply through OpenAI’s Trusted Access for Cyber (TAC) program. Requirements include:
- Verification of professional credentials and organizational affiliation.
- Compliance with OpenAI’s usage policies, including restrictions on offensive applications.
- Mandatory advanced account security for ChatGPT by June 1, 2026.
OpenAI has not disclosed the exact number of approved users but stated that the program has scaled to "thousands of verified defenders and hundreds of teams."
Bottom line
GPT-5.5-Cyber represents a significant step in AI-driven cybersecurity, offering vetted defenders a powerful tool for autonomous vulnerability hunting and patch validation. Its release underscores the growing role of frontier AI models in both defensive and offensive cyber operations. While the model’s restricted access mitigates some risks, the broader trend of AI-powered cyber capabilities demands careful oversight to prevent misuse. Defenders should monitor updates from OpenAI and the AI Security Institute for further developments in this space.