
In the fast-moving world of AI-powered development tools, Anthropic just dropped a feature that developers have been quietly begging for.
Dubbed simply 'auto mode' for Claude Code, the new permissions mode lets Claude itself decide when to approve file edits, shell commands, and other actions during coding sessions, while a built-in classifier runs safeguards in the background to keep things from going off the rails.
For anyone who has ever tried to tackle a big refactor or long-running build only to get interrupted every few minutes by yet another "approve this?" prompt, auto mode feels like a genuine breakthrough.
The problem it solves is painfully familiar.
Claude Code has always been deliberately conservative with permissions: every file write or bash command triggers a manual approval to prevent accidental disasters. Anthropic's original approach is great for safety, but kills momentum when users are deep in a complex task and just want to let the AI run. Some devs resorted to the nuclear option, the infamous --dangerously-skip-permissions flag, but that removed every guardrail and left projects vulnerable to destructive commands or unintended network calls.
Auto mode splits the difference beautifully.
New in Claude Code: auto mode.
Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf.
Safeguards check each action before it runs.
— Claude (@claudeai), March 24, 2026
When turned on, Claude can finally evaluate each proposed action on the fly using a separate classifier model (powered by the latest Sonnet 4.6 or Opus 4.6), automatically green-lighting safe, in-scope operations while blocking or rerouting anything risky like mass deletions, data exfiltration, or production deploys.
If Claude runs into too many blocked actions, it falls back to asking the user directly instead of guessing wrong.
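Anthropic hasn't published the classifier's internals, and the real thing is a full Sonnet/Opus model reading the whole conversation rather than matching patterns. Still, the allow/block/ask decision shape, including the fall-back-to-the-user behavior, can be sketched as a toy; every rule name, pattern, and threshold below is invented for illustration:

```python
# Toy illustration of an auto-mode-style permission gate.
# NOT Anthropic's classifier: the real one is a separate model
# reviewing full conversation context. All patterns/thresholds
# here are hypothetical.
import re

SAFE_PATTERNS = [
    r"^npm (ci|install)$",            # install from a lockfile
    r"^git push origin feature/.*$",  # push to a dedicated branch
]
RISKY_PATTERNS = [
    r"rm -rf",                        # mass deletion
    r"curl .* \| (ba)?sh",            # piping remote code into a shell
    r"deploy --prod",                 # production deploy
]

def classify(command: str) -> str:
    """Return 'allow', 'block', or 'ask' for a proposed shell command."""
    for pat in RISKY_PATTERNS:
        if re.search(pat, command):
            return "block"
    for pat in SAFE_PATTERNS:
        if re.fullmatch(pat, command):
            return "allow"
    return "ask"  # ambiguous: defer to the user

class AutoModeGate:
    """After too many blocks, stop guessing and hand control back."""
    def __init__(self, max_blocks: int = 3):
        self.blocks = 0
        self.max_blocks = max_blocks

    def decide(self, command: str) -> str:
        verdict = classify(command)
        if verdict == "block":
            self.blocks += 1
            if self.blocks >= self.max_blocks:
                return "ask"  # fall back to asking the user
        return verdict
```

The point of the sketch is the three-way verdict plus the escalation counter, not the rules themselves, which in the real feature come from a model judging intent and context.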
But what makes this feel truly thoughtful is how the classifier actually works under the hood: auto mode doesn't just scan for keywords. Instead, it reviews the full conversation context, the user's stated intent, and even any custom rules in the project's CLAUDE.md file. Safe stuff like editing files in the working directory, installing dependencies from a lockfile, or pushing to a dedicated branch sails through without a peep.
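Since project-level CLAUDE.md rules feed into those decisions, a hypothetical snippet of such guidance might look like the following (the exact conventions a team writes here are up to them; these entries are invented for illustration):

```markdown
# CLAUDE.md — project conventions the agent should follow

## Permissions guidance
- Edits inside `src/` and `tests/` are routine day-to-day work.
- Never touch `.env` files or anything under `infra/production/`.
- Dependency installs must come from the committed lockfile.
- Push only to `feature/*` branches, never directly to `main`.
```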
Riskier moves get stopped cold, and Claude intelligently tries an alternative path.
The result is fewer interruptions for the mundane work while still protecting against the scary stuff.
However, Anthropic is upfront that auto mode is not foolproof. Occasional false positives and missed ambiguities can still slip through, which is why the company strongly recommends running it in isolated environments like containers or VMs.
There will be a tiny uptick in token usage and latency from the extra checks, but for most long sessions it's a clear win in productivity.
To get started, users launch the Claude Code command-line interface (CLI) with the --enable-auto-mode flag, then hit Shift+Tab to cycle into auto mode during a session.
For now, auto mode is available as a research preview for Team plan users, with Enterprise and API support rolling out soon.
Desktop and VS Code users toggle it on in settings first, then pick it from the permission dropdown. It pairs perfectly with the recent wave of Claude Code updates (bigger context windows, better sub-agents, and checkpointing), which should make the whole tool feel far more like a true coding partner than a cautious assistant.
Before each tool call, a classifier reviews it for potentially destructive actions. Safe actions proceed automatically. Risky ones get blocked, and Claude takes a different approach.
This reduces risk but doesn't eliminate it. We recommend using it in isolated environments.
— Claude (@claudeai), March 24, 2026
Following the release of Claude's own 'computer use' feature, this auto mode is very much Anthropic's direct response to real developer frustration and the exploding demand for agentic tools; the timing lines up with the broader industry shift toward AI that can actually act without constant babysitting.
Previously, the industry was shaken up by the arrival of OpenClaw, the viral open-source personal agent that made waves for running locally on users' machines, connecting to models like Claude or GPT via users' own keys, and granting full agentic freedom (shell commands, file ops, browser automation, even chatting through WhatsApp or Discord), all without any corporate middleman.
It's incredibly flexible and community-driven, but that same openness means users are responsible for all the safety boundaries themselves.
Claude Code's auto mode, by contrast, bakes those boundaries in at the model level, with Anthropic's classifier doing the heavy lifting, trading a bit of that raw openness for structured guardrails tailored specifically to coding workflows. Both push the envelope on what "AI that does things" can mean, but Anthropic is pitching the safer, enterprise-ready evolution of the same idea: supervised autonomy rather than a full local wildcard.
Overall, auto mode isn't flashy, but it's one of those quiet upgrades that could change how a lot of people work. It keeps the safety-first ethos that defines Claude while finally giving developers the breathing room to trust the AI on bigger tasks.
Available now as a research preview on the Team plan. Enterprise and API access rolling out in the coming days.
Enable with claude --enable-auto-mode, then cycle to it with Shift+Tab.
Learn more: https://t.co/eOurVAnoKc
— Claude (@claudeai), March 24, 2026