US Treasury seeks access to Anthropic's Mythos to spot flaws
Source: HN
The U.S. Treasury Department’s technology team has asked Anthropic PBC for direct access to its Mythos large‑language model so analysts can probe the system for software vulnerabilities, according to a source cited by Bloomberg. The request, confirmed by multiple outlets, comes as the Treasury’s Office of Cybersecurity and Infrastructure Security (OCIS) expands its mandate to audit high‑risk AI tools that could be weaponized or used to undermine financial stability.
Anthropic, which unveiled Mythos in early 2024 as a “cyber‑ready” model capable of code generation, threat‑intel synthesis and red‑team style reasoning, has already attracted scrutiny. As we reported on 14 April 2026, an independent evaluation highlighted the model’s ability to devise sophisticated attack vectors, raising concerns about accidental or intentional misuse. The Treasury’s move signals that regulators are now treating advanced foundation models as critical infrastructure rather than mere software products.
The request is also notable for its timing. Anthropic announced last week that Silvio Napoli, former chief executive of the Schindler Group, will become its permanent CEO, suggesting a strategic shift toward stronger corporate governance and possibly greater openness to government collaboration. If the Treasury secures access, it could set a precedent for other agencies, such as the Cybersecurity and Infrastructure Security Agency (CISA) or the Department of Justice, to demand similar audits, potentially leading to a formal framework for AI security certifications.
What to watch next: Anthropic’s response, including any conditions it places on access or data handling; whether the Treasury issues a formal subpoena or a voluntary partnership agreement; and any legislative proposals that would codify government AI oversight. Parallel developments at OpenAI, which recently rolled out its own cybersecurity model, will likely be compared to the Mythos audit, shaping the broader policy debate on safeguarding powerful AI systems.