Visdom
Code Review

A multi-layered review process for enterprise teams shipping AI-generated code. Part of the Visdom AI-Native SDLC.


L0 Triage
L1 Static
L2 Semantic
L3 Deep Analysis
R Reporter

The problem in numbers

Your CI says the code is fine. Your tests pass. But the AI wrote the tests too.

84% of CI test failures are flaky, not real regressions (Google)
45% of AI-generated code contains vulnerabilities from the OWASP Top 10 (Veracode 2025)
12.5% of agent output that passes CI is still functionally wrong (Spotify)
A different approach

A review process you deploy into your platform

VCR is not a SaaS product and not a vendor service. It's a review process with an open-source reference implementation — opinionated patterns, runnable review layers, and a pipeline you run inside your own CI/CD, against your own LLM provider, behind your own network boundary.

VirtusLab deploys VCR as part of Visdom engagements: we embed with your platform engineers, configure the pipeline for your stack, tune the lenses to your conventions, and hand it over. Capability transfer from day one. Your team owns and operates the process. No ongoing SaaS dependency.

Evaluation methodology →

Deployment model

|  | SaaS review tools | VCR (deployed) |
|---|---|---|
| Where it runs | Vendor cloud | Your CI/CD pipeline |
| LLM provider | Vendor-chosen | Your choice (Claude, GPT, self-hosted) |
| Code leaves your network | Yes | No — runs on your infra |
| Custom review rules | Limited config | Full lens customization (compliance, domain) |
| Cost model | Per-seat: $12–40/dev/mo | Your LLM costs only ($0–0.44/PR) |
| Ownership | Vendor dependency | Your team owns and operates |
How it works

Layered review

Every PR passes through layers of increasing depth. Fast and cheap for trivial changes, thorough for risky ones. A LOW-risk PR gets feedback in under 2 minutes at ~$0.05.

Full architecture reference →
01 L0 Triage

Risk scoring, routing. Instant.

02 L1 Static

Linters, SAST, pattern checks. <30s.

03 L2 Semantic

LLM review with full context. <2min.

04 L3 Deep

Multi-pass analysis, security, arch. <5min.

Risk-based routing

Risk classification

Each PR gets a risk level based on path classification, diff size, coverage delta, and module stability. Only MEDIUM+ risk triggers deep analysis.

See before/after scenarios →
LOW

Config, docs, deps. Auto-approved or light scan.

MEDIUM

Business logic. Standard LLM review with context.

HIGH

Security-sensitive, cross-service. Multi-pass analysis.

CRITICAL

Auth, payments, data migration. Full depth + human gate.
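The routing above can be sketched as a small scoring function. This is an illustrative sketch only — the signal names, path patterns, and thresholds are assumptions, not VCR's actual configuration:

```typescript
// Hypothetical sketch of risk-based routing — names and thresholds are
// illustrative, not VCR's real configuration.
type Risk = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";

interface PrSignals {
  touchedPaths: string[];   // files changed in the diff
  diffLines: number;        // total added + removed lines
  coverageDelta: number;    // change in test coverage, in percentage points
  moduleStability: number;  // 0..1, derived from commit-heatmap churn
}

const CRITICAL_PATHS = [/auth\//, /payments\//, /migrations\//];
const LOW_RISK_PATHS = [/\.md$/, /^docs\//, /^config\//, /package-lock\.json$/];

function classifyRisk(pr: PrSignals): Risk {
  // Path classification dominates: auth, payments, and data migrations
  // always get full depth plus a human gate.
  if (pr.touchedPaths.some(p => CRITICAL_PATHS.some(re => re.test(p)))) {
    return "CRITICAL";
  }
  // Docs/config/deps-only changes are auto-approved or lightly scanned.
  if (pr.touchedPaths.every(p => LOW_RISK_PATHS.some(re => re.test(p)))) {
    return "LOW";
  }
  // Everything else: weigh diff size, coverage movement, and module churn.
  let score = 0;
  if (pr.diffLines > 400) score += 2;
  if (pr.coverageDelta < 0) score += 2;     // coverage dropped
  if (pr.moduleStability < 0.5) score += 1; // historically churn-heavy module
  return score >= 3 ? "HIGH" : "MEDIUM";
}

// Only MEDIUM+ proceeds past the static layer into LLM review.
function layersFor(risk: Risk): string[] {
  const base = ["L0 Triage", "L1 Static"];
  if (risk === "LOW") return base;
  if (risk === "MEDIUM") return [...base, "L2 Semantic"];
  return [...base, "L2 Semantic", "L3 Deep"]; // HIGH and CRITICAL
}
```

The key design point the sketch illustrates: path classification short-circuits everything else, so a one-line change to a payments module still gets full depth, while a large docs-only PR stays cheap.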

Repository context

Context sources

Each review is fed pre-indexed knowledge about the codebase: ownership, dependencies, commit history, conventions, and test reliability data.

Explore ViDIA context engine →

ViDIA · Git Blame · Coverage · CODEOWNERS · PR History · Commit Heatmap · + Your Source
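One way to picture the payload each review receives is a single context record assembled from those sources. The shape below is a hypothetical TypeScript sketch — the field names are illustrative, not ViDIA's actual schema:

```typescript
// Hypothetical shape of the pre-indexed context fed to each review.
// Field names are illustrative, not ViDIA's actual schema.
interface ReviewContext {
  owners: string[];       // from CODEOWNERS for the touched paths
  dependents: string[];   // modules that import the changed code
  recentCommits: number;  // churn signal from the commit heatmap
  coverage: number;       // current line coverage of the touched files, in %
  flakyTests: string[];   // tests with an unreliable pass/fail history
  conventions: string[];  // team patterns the LLM reviewer is told to enforce
}

// A sample record for a PR touching a billing-adjacent module:
const example: ReviewContext = {
  owners: ["@platform-team"],
  dependents: ["billing", "notifications"],
  recentCommits: 14,
  coverage: 87.5,
  flakyTests: ["checkout.e2e.spec.ts"],
  conventions: ["errors are wrapped in Result<T>", "no default exports"],
};
```

Feeding test-reliability data (`flakyTests` above) alongside coverage is what lets the reviewer discount a "passing" suite that history says cannot be trusted.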
What it catches

AI-code patterns

Patterns specific to AI-generated code that conventional CI and human reviewers typically miss. Each is a dedicated Review Lens in Layer 3.

See real examples →
01
Circular Tests

Tests that mirror implementation instead of verifying behavior.

02
Hallucinated APIs

Calls to methods or endpoints that don't exist in your codebase.

03
Convention Drift

AI-generated code that ignores your team's established patterns.

04
Over-engineering

Unnecessary Factory patterns, abstractions, and complexity.
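A circular test (pattern 01) in miniature — an invented example, not taken from a real PR. The circular test recomputes the implementation's formula instead of asserting an independently known value, so it passes even when the formula is wrong:

```typescript
// Invented example of a "circular test" — not from a real PR.
// Suppose the AI wrote a discount function with a bug (spec says 15% off):
function loyaltyDiscount(total: number, years: number): number {
  return years >= 3 ? total * 0.9 : total; // BUG: 10% instead of 15%
}

// Circular: the expected value mirrors the implementation, so it always passes.
function circularTest(): boolean {
  const total = 200;
  const expected = total * 0.9; // copied from the code under test
  return loyaltyDiscount(total, 3) === expected; // true even with the bug
}

// Behavioral: the expected value comes from the spec, so the bug is caught.
function behavioralTest(): boolean {
  return loyaltyDiscount(200, 3) === 170; // 15% off 200 per the spec
}
```

Here `circularTest()` returns true despite the bug, while `behavioralTest()` returns false and exposes it — exactly the distinction the circular-tests lens looks for.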

SEE IT IN ACTION

Interactive demo — real PRs, real findings

VCR reviews its own codebase on every pull request. Trace the triage flow, see what each layer catches, and follow findings back to the GitHub PR.

vcr-grafana.fly.dev · Quality Pulse
Grafana Quality Pulse dashboard showing VCR metrics
Real output — no toys

What lands on your PR

Findings grouped by layer, linked to the exact line. Every comment names the rule, the risk level, and a concrete fix — not a vague hint.

  • Layer 1 — deterministic gate: secrets, forbidden patterns, hard rules
  • Layer 2 — AI quick scan: logic, reliability, security smell
  • Layer 3 — AI deep review: architecture, cache safety, edge cases
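As a sketch of what "names the rule, the risk level, and a concrete fix" might mean as structured data before rendering into a PR comment — a hypothetical shape, not VCR's actual output schema:

```typescript
// Hypothetical finding record — illustrative, not VCR's real output format.
interface Finding {
  layer: "L1" | "L2" | "L3";                    // which layer raised it
  rule: string;                                  // every comment names the rule
  risk: "LOW" | "MEDIUM" | "HIGH" | "CRITICAL"; // ...and the risk level
  file: string;
  line: number;                                  // linked to the exact line
  message: string;
  suggestedFix: string;                          // a concrete fix, not a vague hint
}

const finding: Finding = {
  layer: "L3",
  rule: "cache-safety/stale-read",
  risk: "HIGH",
  file: "src/cache/session.ts",
  line: 42,
  message: "Session cache is read after invalidation without rechecking TTL.",
  suggestedFix: "Check entry.expiresAt before returning the cached session.",
};
```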
See the real PR this came from →
github.com · Pull Request · VCR commented
VCR GitHub PR comment showing findings grouped by layer with inline code annotations

Part of Visdom

VCR is one of four components in Visdom, VirtusLab's AI-Native SDLC.

Read the thinking behind it: The AI-Native SDLC series

Go deeper

Reference material, architecture docs, and real-world scenarios.

Common questions

See all 10 questions →
Run it yourself

Self-serve walkthrough on your machine

Clone the repo and run the pipeline locally against a deliberately flawed PR (auth service with 12 passing tests and 94% coverage). The full 4-layer review executes end-to-end. No API key needed — cached responses included.

Setup & commands →

Quick start

git clone https://github.com/VirtusLab/visdom-code-review
cd visdom-code-review/demo
npm install

# Narrated walkthrough (auto-paced)
npm run demo:narrate

# Interactive (press Enter to advance)
npm run demo:interactive

# Fast run (no narration)
npm run demo:local
Traditional review: 0 findings · 24–48h turnaround · ~1h of senior-engineer time
VCR review: 14 findings · 2 min · $0.44

Read the full reference

Architecture, configuration, metrics framework, and reference implementations.