NEW YORK, Feb. 04, 2026 (GLOBE NEWSWIRE) -- Qodo today announced the second generation of its AI code review platform, built to turn high-velocity code generation into high-quality software while establishing AI code review as trust and governance infrastructure. With the release of Qodo 2.0, the company introduces significant advancements in context engineering and multi-agent architecture to help close critical gaps in managing and enforcing AI code quality at scale. This major evolution is validated by a new industry benchmark showing Qodo 2.0 delivers the highest precision and recall for finding critical issues and rule violations, outperforming other code review tools by 11%.
AI-assisted development has moved from experiment to the new reality for enterprise software development. Gartner projects that by 2028, 90% of enterprise software engineers will use AI code assistants, up from less than 14% in 2024. Yet trust remains limited. In Qodo’s own State of AI Code Quality report, 46% of developers say they actively distrust the accuracy of AI-generated code, and 60% of those who use AI for writing, testing, or reviewing code report that these tools often miss critical context. First-generation AI review tools also struggle to distinguish critical issues from trivial suggestions, creating developer fatigue and inconsistent enforcement of standards.

To win developers' trust, Qodo 2.0 introduces a multi-agent system for AI code review. Instead of relying on a single generalist agent, Qodo breaks reviews into focused tasks handled by a mixture of expert agents. Each agent uses advanced context engineering to pull relevant information across codebases, past pull requests, and prior review decisions, delivering more accurate findings and actionable guidance.
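As a rough illustration of the pattern described above (and not Qodo’s actual implementation), a multi-agent review pipeline routes each change to specialist agents and merges their findings, surfacing the most severe first. The agent names, heuristics, and data structures in the following Python sketch are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical sketch of a multi-agent code review pipeline. Specialist
# agents each inspect a change with their own focus; their findings are
# merged and sorted by severity. Agent names, toy heuristics, and data
# shapes are illustrative assumptions, not Qodo's architecture.

@dataclass
class Finding:
    agent: str      # which specialist produced the finding
    severity: str   # e.g. "critical" or "minor"
    message: str    # actionable guidance for the author

class SecurityAgent:
    name = "security"
    def review(self, diff: str, context: dict) -> list[Finding]:
        findings = []
        if "eval(" in diff:  # toy heuristic standing in for a model call
            findings.append(Finding(self.name, "critical",
                                    "Avoid eval() on untrusted input."))
        return findings

class StyleAgent:
    name = "style"
    def review(self, diff: str, context: dict) -> list[Finding]:
        findings = []
        if "TODO" in diff:
            findings.append(Finding(self.name, "minor",
                                    "Resolve TODO before merging."))
        return findings

def review_pull_request(diff: str, context: dict) -> list[Finding]:
    """Run every specialist agent and list critical findings first."""
    agents = [SecurityAgent(), StyleAgent()]
    findings = [f for agent in agents for f in agent.review(diff, context)]
    return sorted(findings, key=lambda f: f.severity != "critical")

if __name__ == "__main__":
    pr_diff = "+ result = eval(user_input)  # TODO: sanitize"
    repo_context = {"prior_reviews": [], "coding_standards": ["no eval"]}
    for finding in review_pull_request(pr_diff, repo_context):
        print(f"[{finding.severity}] {finding.agent}: {finding.message}")
```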
"AI speed doesn't matter if you can't trust what you're shipping,” said Itamar Friedman, CEO and co-founder of Qodo. “Enterprises need AI code review that verifies for quality and catches actual problems, not generalist models that flag everything and don't have enough context to make findings relevant and actionable. Qodo 2.0 bridges this gap, setting a new standard for how enterprises build with AI."
To prove these gains are real, Qodo developed a new industry benchmark that evaluates how well AI code review tools find critical issues and rule violations. The benchmark tests the tools on pull requests from active open-source repositories that have been injected with real-world bugs. Results show that Qodo 2.0 delivers the highest precision and recall of the tools evaluated, outperforming alternatives by 11%.
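For readers unfamiliar with the cited metrics, precision and recall on a benchmark of this kind are computed from how many of a tool’s flagged issues correspond to the deliberately injected bugs, and how many of the injected bugs the tool manages to flag. The Python snippet below shows the standard calculation with invented example values; it does not use Qodo’s benchmark data.

```python
# Illustrative precision/recall computation for a code-review benchmark.
# "injected" is the set of known, deliberately introduced bugs in a PR;
# "flagged" is the set of issues a review tool reported. The example
# values are invented for illustration, not Qodo's benchmark results.

def precision_recall(injected: set[str], flagged: set[str]) -> tuple[float, float]:
    true_positives = len(injected & flagged)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / len(injected) if injected else 0.0
    return precision, recall

injected_bugs = {"null-deref in parser", "race in cache", "sql injection"}
tool_findings = {"null-deref in parser", "sql injection", "unused import"}

p, r = precision_recall(injected_bugs, tool_findings)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```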
Organizations including Monday.com and Box are already using Qodo 2.0 to manage high-velocity AI-assisted development at scale. Qodo 2.0 is available today. Additional information on the benchmark methodology, evaluated tools, and results is available here.
About Qodo
Qodo is an AI code review platform built to turn today’s high-velocity code generation into high-quality software, serving as trust and governance infrastructure for enterprise engineering teams. With Qodo 2.0, Qodo introduces advanced context engineering and a multi-agent review system that draws on full-repository signals (including codebase history and prior PR decisions) to deliver more accurate, explainable, and actionable feedback while reducing noise and enforcing organization-specific standards. Founded in 2018, Qodo has raised $50 million, backed by TLV Partners, Vine Ventures, Susa Ventures, Square Peg, and angel investors including executives from OpenAI, Shopify, and Snyk.
For more information, visit www.qodo.ai.
Media Contact:
Janabeth Ward
Scratch Marketing + Media for Qodo
qodo@scratchmm.com
A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/081560fc-4e43-4f89-849b-ddade6e83832
