OpenAI identifies security issue involving third-party tool, says user data was not accessed
Source: Mastodon
OpenAI disclosed on Friday that a security flaw in a third‑party developer tool called Axios had briefly compromised the process the company uses to certify its macOS applications as legitimate. OpenAI said the issue was discovered during an internal audit of its code‑signing pipeline and that no user data – including chat histories, API keys or personal identifiers – was accessed or exfiltrated. The company has already pushed an updated code‑signing certificate and is urging macOS users to download the latest versions of its ChatGPT, Whisper and DALL‑E apps.
The incident matters because it highlights the growing vulnerability of AI firms to supply‑chain attacks. Axios, a widely adopted build‑automation utility, was implicated in a broader industry breach earlier this month that saw malicious actors inject code into software distribution channels. While OpenAI’s audit found no evidence of data theft, the compromised signing process could have allowed a maliciously altered binary to reach users, potentially opening a backdoor for future exploits. The episode adds to a string of security concerns that have surrounded the company in recent weeks, from physical attacks on its CEO’s residence to internal reports of leadership turmoil.
OpenAI says it has isolated the affected component, revoked the compromised certificate and is working with Apple to ensure the updated apps pass the App Store’s verification checks. Observers will watch for a formal security advisory from Apple, any follow‑up disclosures from the Axios maintainers, and whether other AI startups that rely on the same tool will issue similar patches. The broader AI community is also likely to intensify scrutiny of third‑party dependencies, prompting tighter supply‑chain audits and possibly new industry standards for code‑signing integrity.