Claude Code Source Code Leaked: AI Pet and AlwaysOnAgent
agents anthropic claude
Source: Mastodon | Original article
Anthropic’s “Claude Code” repository was exposed again, this time through a misconfigured npm package that published the entire TypeScript codebase to the public registry. Anyone running a plain `npm install` now pulls more than 1,900 original source files straight into their `node_modules` folder, repeating the February 2025 breach that forced the company to pull the package and issue an emergency fix.
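For context, npm decides what goes into a published tarball via the `files` allow-list in `package.json` (or, failing that, `.npmignore`/`.gitignore`); if neither restricts the package, nearly everything in the project directory ships. The snippet below is a hypothetical illustration of the safeguard, not Anthropic's actual configuration:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist/"
  ]
}
```

Without a `files` entry like this, raw `src/` TypeScript can be swept into the tarball alongside the compiled output. Running `npm pack --dry-run` before publishing lists exactly which files would be included, which is how this class of mistake is usually caught.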
The freshly uncovered files go beyond routine utilities. Embedded in the client library is a “tamagotchi‑style” AI pet that tries to keep users engaged by reacting to their prompts, alongside an “AlwaysOnAgent” component that can maintain persistent background sessions without explicit user activation. Neither feature was ever announced, and both were hidden behind internal feature flags, suggesting Anthropic was experimenting with long‑term, context‑aware assistants and gamified interaction models.
The leak matters on three fronts. First, it reveals proprietary design choices that competitors can now copy or weaponise, eroding Anthropic’s technical edge. Second, the AlwaysOnAgent raises privacy questions: a continuously running agent could collect data across sessions, and its undisclosed presence may conflict with enterprise compliance policies. Third, the recurrence of a packaging error signals systemic lapses in Anthropic’s release engineering, potentially shaking confidence among developers who rely on Claude Code for production workloads.
What to watch next: Anthropic has pledged an “immediate audit” and a patched npm release within days, but the speed and transparency of that response will be scrutinised. Legal teams may assess liability for the repeated exposure of confidential code. Meanwhile, the open‑source community is already forking the leaked repository, sparking debates about responsible disclosure and whether the AI pet or AlwaysOnAgent will surface in third‑party tools. Follow‑up coverage will track Anthropic’s remediation steps, any regulatory fallout, and how the newly visible features shape the next generation of AI assistants.