Auto mode for Claude Code | Claude
ai-safety anthropic claude
| Source: Mastodon | Original article
Anthropic has rolled out “Auto Mode” for Claude Code, its AI‑driven development assistant, turning a long‑standing permission prompt into a self‑service safety layer. The new mode deploys an on‑device classifier that evaluates each command—such as file writes, package installations or system calls—and automatically approves those deemed low‑risk while still surfacing higher‑impact actions for human review. Developers can toggle the feature in the Claude Code settings, and the system logs every auto‑approved operation for auditability.
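The flow described above—classify each command, auto-approve low-risk ones, escalate the rest, and log everything—can be sketched roughly as follows. This is a hypothetical illustration only: Anthropic's actual on-device classifier is a learned model, and the `PermissionGate` class, risk markers, and return values here are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Literal

Decision = Literal["auto-approved", "needs-review"]

# Hypothetical keyword markers standing in for a real learned classifier.
HIGH_RISK_MARKERS = ("rm -rf", "sudo", "curl | sh", "chmod 777")

@dataclass
class PermissionGate:
    """Toy permission layer: approve routine commands, escalate risky ones."""
    audit_log: list = field(default_factory=list)

    def classify(self, command: str) -> Decision:
        # Surface commands containing destructive or privileged patterns
        # for human review; everything else is treated as low-risk.
        if any(marker in command for marker in HIGH_RISK_MARKERS):
            return "needs-review"
        return "auto-approved"

    def handle(self, command: str) -> Decision:
        decision = self.classify(command)
        # Every decision is recorded, mirroring the audit log the
        # article says Auto Mode keeps for auto-approved operations.
        self.audit_log.append((command, decision))
        return decision

gate = PermissionGate()
print(gate.handle("pip install requests"))    # → auto-approved
print(gate.handle("sudo rm -rf /tmp/build"))  # → needs-review
```

The key design point the article highlights is the middle path: the gate neither prompts on every command nor waves everything through—it keeps humans in the loop only where the classifier judges the blast radius to be high.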
The launch marks a shift from the manual “yes/no” dialogs that many users complained slowed down workflows. By handling routine permissions in the background, Auto Mode promises to cut the friction that has hampered large‑scale adoption of AI‑assisted coding tools, especially in fast‑moving teams that need to iterate quickly. At the same time, Anthropic positions the classifier as a safeguard against the “AI coding disasters” that have sparked headlines when LLMs execute destructive commands or expose sensitive data. The company frames the feature as a middle ground between the default prompt‑heavy configuration and the risky practice of disabling permissions altogether.
As we reported on March 25, 2026, Claude Code already had the ability to take over a developer’s workstation; today the functionality is wrapped in a safety‑first interface that could set a new industry benchmark. The move also dovetails with Anthropic’s broader suite of updates, including Claude Code Review, a multi‑agent bug‑screening tool, and Dispatch for Cowork, which lets users hand off tasks from mobile devices.
What to watch next: early adoption metrics and feedback from enterprise pilots will reveal whether the classifier strikes the right balance between speed and security. Competitors such as OpenAI and Google are expected to announce comparable permission‑automation features, potentially sparking a race to embed safety into the core of AI‑coding workflows. Regulators may also scrutinise how these classifiers are trained and validated, especially if they become the default gatekeeper for code that touches production systems.