Anthropic’s Claude Code Security aims to find hidden software flaws faster than ever
San Francisco, USA, 23 February 2026 – As cyber threats grow more advanced, software security is becoming a race against time. In this environment, Anthropic has introduced a new tool designed to help developers stay ahead. Called Claude Code Security, the service uses artificial intelligence to scan codebases for security weaknesses and suggest how they can be fixed.
The tool has launched in a limited research preview and is currently available to Enterprise and Team customers, with free early access also offered to open-source maintainers.
What Claude Code Security does
Claude Code Security is built directly into Claude Code on the web. Its main purpose is simple: help teams find and fix security bugs faster.
Instead of relying only on traditional rule-based scanning, the AI reviews code more like a human security researcher would. It follows how data moves through an application, understands how different components interact, and looks for complex issues that automated tools often miss.
When a potential problem is found, the system does not stop there. Each finding is checked through multiple verification steps. The AI reviews its own work, tries to confirm or disprove the issue, and filters out false alarms before showing results to users.
Clear results for human teams
All detected issues are displayed in a dashboard, where they are ranked by severity and confidence. This helps development teams focus first on the most serious risks. Importantly, the tool does not apply fixes automatically. Instead, it suggests targeted patches for human review, keeping people in control of final decisions.
This approach is designed to support security teams, not replace them.
Research behind the tool
Claude Code Security is the result of more than a year of research into AI-driven cybersecurity. Anthropic tested Claude’s abilities in security competitions and research environments, including work with Pacific Northwest National Laboratory.
Using Claude Opus 4.6, researchers uncovered more than 500 previously undetected bugs in open-source projects. These findings helped shape the defensive features now included in Claude Code Security.
Why this matters now and who can access it
Anthropic believes AI will soon play a major role in how the world secures software. As attackers use AI to search for weaknesses faster, defenders need equally powerful tools to respond.
By scanning large codebases quickly and catching long-hidden issues, AI tools like Claude Code Security could reduce the window of opportunity for cyberattacks and lower overall risk.
The research preview is open to Enterprise and Team customers, with Anthropic encouraging early feedback and collaboration. Open-source maintainers can also apply for free, expedited access, a move intended to improve security across widely used projects.
The bigger picture
Claude Code Security reflects a broader shift in cybersecurity. AI is moving from an experimental aid to a practical defense tool used in real development workflows. While human oversight remains essential, AI-driven scanning could soon become a standard part of how software is built and protected.