Introducing Trusted Access for Cyber
Source: Mastodon
OpenAI unveiled a new “Trusted Access for Cyber” (TAC) framework on April 16, granting vetted cybersecurity teams access to its most powerful models, including GPT‑5.3‑Codex and the newly released GPT‑5.4‑Cyber. The company frames the move as a safety‑first response to the view that “our models are too dangerous to release as well,” opting for identity‑ and trust‑based vetting rather than an open public rollout.
The program extends OpenAI’s limited‑access approach, which also includes the life‑science‑focused GPT‑Rosalind announced on April 17, and mirrors the White House’s decision that same day to provide U.S. agencies with Anthropic’s Mythos model. By restricting frontier‑capability AI to verified defenders, OpenAI hopes to accelerate threat intelligence, automated incident response and vulnerability analysis while curbing the risk that the same tools could be weaponised by attackers.
Industry observers say the launch could reshape the cyber‑defence market. If the TAC model proves effective, enterprises may pressure rivals to adopt comparable trust layers, potentially standardising a new tier of “secure AI” services. At the same time, regulators are likely to scrutinise the vetting criteria, data‑handling obligations and liability frameworks that accompany such privileged access.
What to watch next: OpenAI’s rollout schedule and the specific eligibility thresholds for corporations, government bodies and managed‑security providers; any pushback from civil‑rights groups concerned about opaque trust decisions; and whether the U.S. government will extend its own AI‑access programmes beyond Anthropic to include OpenAI’s TAC suite. The next few weeks will reveal whether trusted‑access models become the de facto conduit for AI‑driven cyber‑defence or remain a niche offering for a select few.