Anthropic Launches AI Code Review That Uses Parallel Agents
Anthropic's new Code Review feature sends multiple AI agents to inspect your pull requests, catching bugs human reviewers miss.
Anthropic has launched Code Review for Claude Code, a new feature that dispatches multiple AI agents to inspect pull requests on GitHub. Rather than running a single pass over your code, it sends specialised agents to examine different angles of a change in parallel, then validates and ranks their findings before posting comments.
How It Works
When a pull request is opened, the system kicks off several Claude agents simultaneously. Each looks at the code from a different perspective — logic errors, security gaps, parameter handling, cross-file regressions. A separate “critic” component then checks the agents’ findings, filters out noise, and ranks the remaining issues by severity before posting them as GitHub comments.
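To make the fan-out-then-critic shape concrete, here is a minimal illustrative sketch in Python. It is not Anthropic's implementation: the agent functions, severity scores, and thresholds are all invented stand-ins (a real system would call an LLM for each perspective), but the flow matches the description above — parallel specialised checks, then a critic that filters noise and ranks by severity.

```python
# Illustrative fan-out/critic review pipeline (NOT Anthropic's actual code).
# Each "agent" inspects the same diff from one angle; a critic then filters
# and ranks the combined findings before they would be posted as comments.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical perspective checks; real agents would be LLM calls.
def check_logic(diff):
    return [{"issue": "possible off-by-one in loop", "severity": 3}] if "range(" in diff else []

def check_security(diff):
    return [{"issue": "unsanitised input reaches a query", "severity": 5}] if "query(" in diff else []

def check_params(diff):
    return [{"issue": "function signature changed", "severity": 2}] if "def " in diff else []

AGENTS = [check_logic, check_security, check_params]

def critic(findings, min_severity=2):
    """Drop low-severity noise, then rank highest severity first."""
    kept = [f for f in findings if f["severity"] >= min_severity]
    return sorted(kept, key=lambda f: f["severity"], reverse=True)

def review(diff):
    # Fan out: every agent examines the diff concurrently.
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        per_agent = pool.map(lambda agent: agent(diff), AGENTS)
    combined = [f for findings in per_agent for f in findings]
    return critic(combined)

if __name__ == "__main__":
    diff = "def handler(req):\n    db.query(req.args)\n    for i in range(n): ..."
    for finding in review(diff):
        print(f"[sev {finding['severity']}] {finding['issue']}")
```

The design point is the separation of concerns: agents only surface candidate issues, and a single critic owns the decision of what is worth a reviewer's attention.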
The key difference from existing linting tools is what Anthropic calls “repository-aware reasoning.” Rather than just flagging style violations, Code Review tries to understand your codebase’s architecture and catch the kind of subtle bugs that slip through — like a function change in one file breaking something three files away.
The Numbers
Anthropic tested Code Review internally before launch. The results: 54% of pull requests now receive substantive review comments, up from just 16% with their previous approach. Engineers marked fewer than 1% of the AI’s findings as incorrect — meaning nearly every flagged issue was worth looking at.
Each review costs between $15 and $25 and takes around 20 minutes to complete. Human developers still make all merge decisions.
Getting Started
Code Review is available now as a research preview for Claude Team and Claude Enterprise customers. Setup involves installing the Anthropic GitHub app, connecting your repositories, and optionally adding a REVIEW.md file to guide what the reviewer focuses on.
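Anthropic's docs aren't quoted here, so the exact REVIEW.md format is an assumption — but judging by similar Claude Code configuration files (like CLAUDE.md), it is plausibly free-form prose instructions. A hypothetical example might look like:

```markdown
# REVIEW.md — hypothetical example of review guidance

## Focus areas
- Prioritise security issues in anything under `api/` that touches user input.
- Flag cross-file regressions: our service layer calls into `core/` heavily.

## Skip
- Style and formatting (CI already enforces these).
- Test files, unless an assertion looks logically wrong.
```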
What This Means for You
If you’re writing code — whether professionally or as a learner — AI-powered code review is becoming a practical reality. This isn’t a replacement for human reviewers, but a second pair of eyes that never gets tired and can spot patterns across thousands of lines of code.
For teams already using Claude Code for writing code, having the same AI review that code creates a tighter feedback loop. And as AI-generated code becomes more common, having AI check AI’s work is quickly becoming a necessity rather than a luxury.