Claude Code bug can silently 10-20x API costs
Source: HN
A hidden bug in Anthropic’s Claude Code has been confirmed to inflate API usage by ten to twenty times, turning what should be a modest monthly bill into a costly surprise for developers. The flaw, discovered by a team of independent auditors after a client’s usage spiked from the expected $20–$100 range to over $2,000 in a single week, stems from the model’s automatic context expansion. When Claude Code is prompted to “load the entire codebase,” it silently pulls in every file, pushing token counts from the usual 50–100K to 500K or more per request. Because Anthropic bills in proportion to tokens consumed (priced per 1K tokens), the inflated payload translates directly into a steep price hike that can go unnoticed until the next billing cycle.
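The cost arithmetic above can be sketched in a few lines. The per-1K-token price below is an illustrative assumption, not Anthropic's actual rate; the point is only that cost scales linearly with tokens, so a jump from a 50K-token request to a 500K-token request is a tenfold cost jump regardless of the rate:

```python
# Minimal sketch of per-token billing. PRICE_PER_1K_INPUT_TOKENS is a
# hypothetical placeholder rate, not Anthropic's published pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.003  # assumed $/1K input tokens

def request_cost(tokens: int, price_per_1k: float = PRICE_PER_1K_INPUT_TOKENS) -> float:
    """Cost of one request, billed proportionally per 1K tokens."""
    return tokens / 1000 * price_per_1k

normal = request_cost(50_000)     # typical request at the low end of 50-100K
inflated = request_cost(500_000)  # full-codebase context pulled in silently

print(f"normal:     ${normal:.2f}")
print(f"inflated:   ${inflated:.2f}")
print(f"multiplier: {inflated / normal:.1f}x")
```

Whatever the actual rate, the multiplier depends only on the token ratio, which is how a silent 10x context expansion becomes a silent 10x bill.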
The issue matters because Claude Code has become a cornerstone of AI‑assisted development in the Nordics, especially among startups that rely on its VS Code plug‑in for on‑the‑fly code suggestions. The bug not only threatens budgets but also erodes trust in the platform’s cost predictability, a key selling point after Anthropic’s recent “Universal Claude” token‑efficiency tool cut AI expenses by 63% earlier this month. Developers on the Pro tier at $20 per month may find themselves unintentionally pushed onto the Max 20× plan at $200 per month without realizing what triggered the jump.
Anthropic has issued a patch that disables automatic full‑project loading unless explicitly authorized, and it promises a retroactive credit for affected accounts. The company also announced a new usage‑monitoring dashboard that flags sessions exceeding 200 K tokens. Watch for the rollout of this dashboard over the next two weeks, and for any follow‑up guidance from the European Union’s AI regulatory body, which is expected to scrutinise opaque pricing mechanisms in AI‑as‑a‑service offerings. As we reported on March 31, tools that improve token efficiency are only valuable if the underlying models behave transparently; this episode underscores the need for tighter safeguards.
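Until the dashboard ships, the same 200K-token threshold can be approximated client-side. The sketch below is purely illustrative (the class name, structure, and threshold-handling are assumptions, not part of any Anthropic tooling); it simply accumulates per-request token counts and flags a session once the total crosses the reported limit:

```python
# Hypothetical client-side guard mirroring the threshold the new
# dashboard reportedly uses: flag sessions exceeding 200K tokens.
# All names here are illustrative, not an Anthropic API.
SESSION_TOKEN_THRESHOLD = 200_000

class SessionTokenGuard:
    """Accumulate token usage for one session and flag runaway totals."""

    def __init__(self, threshold: int = SESSION_TOKEN_THRESHOLD):
        self.threshold = threshold
        self.total_tokens = 0

    def record(self, tokens: int) -> bool:
        """Add one request's token count; return True if now over threshold."""
        self.total_tokens += tokens
        return self.total_tokens > self.threshold

guard = SessionTokenGuard()
print(guard.record(80_000))   # normal request, under the limit
print(guard.record(150_000))  # cumulative 230K tokens, session flagged
```

A guard like this catches the failure mode in the article: no single request looks alarming, but the cumulative total reveals the silent context expansion.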