Anthropic Expands Claude Into Code Security With AI Reasoning
The new feature scans entire codebases, flags complex vulnerabilities and suggests patches for human review, moving beyond rule-based detection toward AI reasoning.
Anthropic has expanded its Claude platform into software security, introducing a capability that scans entire codebases for vulnerabilities and proposes targeted fixes for human review.
The feature, dubbed Claude Code Security, was announced on Friday, 20 February, and is available in a limited research preview.
Anthropic said the feature is built into Claude Code on the web and is designed to help teams find and fix security issues that traditional approaches can miss.
Indian IT shares extended declines on Monday as investors continued to weigh the potential impact of artificial intelligence on traditional IT services. The Nifty IT index fell 1.4%, while Infosys and Wipro both dropped about 1.9%.
In the US, cybersecurity stocks fell sharply on Friday after Anthropic unveiled the tool. Investors and analysts suggested the slide reflected concern about how AI-driven code analysis could affect traditional vulnerability-scanning tools, Bloomberg reported.
Anthropic framed the product as a response to a capacity gap in security teams.
“Security teams face a common challenge: too many software vulnerabilities and not enough people to address them,” the company said, arguing that conventional tools typically look for known patterns while subtle, context-dependent vulnerabilities often require skilled human researchers.
The company said Claude Code Security is intended to move beyond pattern matching.
“Rather than scanning for known patterns, Claude Code Security reads and reasons about your code the way a human security researcher would,” it said, describing analysis that traces component interactions and how data moves through an application.
Anthropic also said the system re-checks findings to cut false positives and assigns severity ratings, while keeping humans in control.
“Nothing is applied without human approval,” the company said.
Anthropic warned that stronger AI capabilities cut both ways. The same techniques that help defenders find and fix vulnerabilities could also help attackers discover weaknesses faster, it said, adding that AI is likely to scan a significant share of the world’s code in the future.
Claude Code Security remains in limited preview for Enterprise and Team customers, with expedited access for open-source developers.



