Anthropic shut down a company's 60+ Claude accounts
anthropic claude
| Source: HN | Original article
Anthropic abruptly terminated access to more than 60 Claude accounts belonging to Argentine fintech Belo, sparking a public outcry from the company’s chief technology officer, Patricio “Pato” Molina. In a post on X, Molina shared a screenshot of an email from Anthropic stating that “our automated systems detected a high volume of signals associated with your account which violate our Usage Policy,” but the email offered no details on the alleged breach and provided only a generic Google Form for appeals.
The shutdown crippled Belo’s internal workflows, which rely on Claude for everything from customer‑service automation to risk‑analysis modeling. The fintech’s engineering team reported that the suspension took effect without prior warning, leaving developers unable to access critical AI‑driven tools across the organization. Molina warned other software firms to “never put all your eggs in one basket,” highlighting the vulnerability of heavy reliance on a single LLM provider.
The incident matters because it underscores the opaque nature of AI service providers’ enforcement mechanisms. Anthropic’s usage‑policy enforcement has previously drawn scrutiny after reports of a “spyware bridge” installed on user machines, and the company’s rapid, automated account closures raise questions about due process and the adequacy of recourse for enterprise customers. For fintechs handling sensitive financial data, sudden loss of AI capabilities can translate into operational risk, compliance headaches, and potential revenue loss.
What to watch next: Anthropic’s legal team is expected to respond, possibly clarifying the policy triggers that led to the mass suspension. Industry observers will monitor whether regulators intervene, particularly under the EU’s incoming AI Act framework. Meanwhile, fintechs and other enterprises are likely to accelerate diversification strategies, integrating alternative LLMs such as OpenAI’s GPT‑4o or local European models to mitigate single‑vendor risk. The episode may also prompt broader discussions on transparent AI governance and standardized appeal processes across the sector.