NSA Uses Anthropic’s Mythos Despite Blacklist
Source: HN
The National Security Agency has begun deploying Anthropic’s “Mythos Preview” model even though the Department of Defense formally labeled the technology a supply‑chain risk and placed it on a blacklist last month. According to multiple reports, the NSA is using the AI primarily to scan its own networks for exploitable vulnerabilities, a use case that mirrors how other cleared entities are leveraging the model for internal security audits.
Anthropic launched Mythos as a specialized cybersecurity assistant, touting its ability to parse code, identify misconfigurations and suggest remediation steps at a speed far beyond human analysts. The Pentagon’s designation, however, stems from concerns that the model’s training data and underlying architecture could be compromised by hostile actors, a risk amplified by the NSA’s reliance on third‑party cloud services. By sidestepping the blacklist, the NSA signals a willingness to prioritize operational advantage over the emerging supply‑chain safeguards the DoD is trying to enforce.
The move matters for several reasons. First, it underscores a growing tension between rapid AI adoption in intelligence work and the nascent regulatory framework meant to curb potential backdoors. Second, it raises questions about inter‑agency coordination: if the NSA can ignore a DoD directive, other departments may follow suit, eroding the blacklist’s authority. Finally, the decision adds weight to earlier warnings from finance ministers and top bankers, who have flagged Mythos as a systemic risk, and from security scholars such as Bruce Schneier, who cautioned that unchecked AI tools could themselves become a new attack surface.
Watch for a formal response from the Office of the Secretary of Defense, which may tighten enforcement or issue new guidance on AI procurement. Congressional committees are likely to summon both the NSA and Anthropic for testimony, and any legal challenge to the blacklist could set a precedent for how AI models are governed across the federal landscape. The episode also puts pressure on Anthropic to resolve its ongoing legal battles and clarify the provenance of Mythos’s training data, a factor that could determine whether the model remains a contested asset or is finally pulled from government use.