Full analysis. Triggered only for MEDIUM+ risk PRs. Expensive but high-value. Each review category runs as an independent Review Lens: a separate prompt with its own focus area and output schema. Lenses run in parallel.
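The fan-out can be sketched as follows. This is a minimal sketch: `runLens`, `runFullAnalysis`, and the result shape are illustrative assumptions, not the system's actual API.

```typescript
interface LensResult {
  lens: string;
  findings: unknown[];
}

// Hypothetical stand-in for one lens run: in the real system this would
// send the lens prompt plus PR context to the model and parse its JSON reply.
async function runLens(lens: string, context: string): Promise<LensResult> {
  return { lens, findings: [] }; // stand-in result
}

// All active lenses fan out concurrently; Promise.all preserves input order,
// so results line up with the lens list.
async function runFullAnalysis(
  lenses: string[],
  context: string
): Promise<LensResult[]> {
  return Promise.all(lenses.map((l) => runLens(l, context)));
}
```

Running lenses concurrently rather than sequentially keeps the wall-clock cost of a MEDIUM+ review close to that of the slowest single lens.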
Input
Everything from previous layers, plus:
- Full files (not just diff): AI sees the entire module
- Related files: files imported by changed files (from dependency graph)
- Conventions doc: extracted from repo conventions, architecture docs, ADRs
- Historical context: how this module evolved (from repository knowledge layer)
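Put together, the context handed to each lens might look like this. The field names below are illustrative assumptions, not the actual schema.

```typescript
// Hypothetical shape of the context assembled for each Review Lens.
// Field names are assumptions for illustration, not the real schema.
interface LensInput {
  diff: string;                          // the PR diff from earlier layers
  fullFiles: Record<string, string>;     // entire changed modules, path -> content
  relatedFiles: Record<string, string>;  // files imported by changed files
  conventionsDoc: string;                // extracted conventions / ADR summary
  history: string[];                     // how the module evolved over time
  riskLevel: "LOW" | "MEDIUM" | "HIGH";  // risk classification from earlier layers
}

const example: LensInput = {
  diff: "--- a/src/api/auth.ts\n+++ b/src/api/auth.ts",
  fullFiles: { "src/api/auth.ts": "export function login() { /* ... */ }" },
  relatedFiles: { "src/db/client.ts": "export const db = /* ... */ null;" },
  conventionsDoc: "Use parameterized queries; errors via Result types.",
  history: ["auth.ts was extracted from a larger server module"],
  riskLevel: "MEDIUM",
};
```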
Review Lenses (Visdom Standard)
| Lens | Focus | When active |
|---|---|---|
| Security | Injection, auth bypass, data exposure, OWASP Top 10 | Always on MEDIUM+ |
| Performance | N+1 queries, unnecessary allocations, blocking calls, O(n^2) | MEDIUM+ with DB/API paths |
| Architecture | Consistency with repo patterns, separation of concerns, coupling | HIGH+ |
| Test Quality | New paths have tests, assertion quality, edge cases, circular test detection | Always on MEDIUM+ |
| AI-Code Safety | Over-engineering, hallucinated APIs, unnecessary abstractions, generic patterns | When AI-generated flag is set |
| Conventions | Naming, file structure, import patterns, error handling patterns | Always on MEDIUM+ |
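The activation column above can be read as a small decision function. This is a sketch: `touchesDbOrApi` and the other signal names are assumptions standing in for whatever the earlier layers actually expose.

```typescript
type Risk = "LOW" | "MEDIUM" | "HIGH";

interface PrSignals {
  risk: Risk;
  touchesDbOrApi: boolean; // assumption: derived from changed paths / dependency graph
  aiGenerated: boolean;    // the AI-generated flag set by earlier layers
}

const atLeast = (risk: Risk, min: Risk): boolean => {
  const order: Risk[] = ["LOW", "MEDIUM", "HIGH"];
  return order.indexOf(risk) >= order.indexOf(min);
};

// Mirrors the "When active" column of the table above.
function activeLenses(s: PrSignals): string[] {
  const lenses: string[] = [];
  if (atLeast(s.risk, "MEDIUM")) {
    lenses.push("Security", "Test Quality", "Conventions");
    if (s.touchesDbOrApi) lenses.push("Performance");
  }
  if (atLeast(s.risk, "HIGH")) lenses.push("Architecture");
  if (s.aiGenerated) lenses.push("AI-Code Safety");
  return lenses;
}
```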
Circular Test Detection
The Test Quality lens includes explicit detection of circular tests (tests derived from the implementation rather than from a specification):
```
Examine new/modified tests. For each test, determine:
1. Does this test verify behavior described in a spec/issue/PR description?
   OR does it mirror the implementation logic?
2. Does this test cover negative paths, edge cases, invalid inputs?
   OR only the happy path that the implementation handles?
3. Would this test FAIL if the implementation had a subtle bug
   (off-by-one, missing null check, wrong status code)?
   OR would it pass because it tests the same logic?

Flag as CIRCULAR if the test would pass regardless of correctness.
```

✅ Why This Matters
Circular tests are the core failure mode of AI-generated test suites. They verify what code does, not what it should do. A circular test suite gives 100% coverage and 0% confidence.
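A minimal illustration of the distinction. The `pageCount` function and both tests are invented for this example; they are not part of the system.

```typescript
// Implementation under review (illustrative).
function pageCount(items: number, perPage: number): number {
  return Math.ceil(items / perPage);
}

// CIRCULAR: the expected value is re-derived with the same logic as the
// implementation, so this passes even if Math.ceil is the wrong rule.
function circularTest(): boolean {
  return pageCount(10, 3) === Math.ceil(10 / 3);
}

// SPEC-DERIVED: expected values come from the requirement itself
// ("10 items at 3 per page need 4 pages; 0 items need 0 pages"),
// including an edge case the happy path alone would miss.
function specTest(): boolean {
  return pageCount(10, 3) === 4 && pageCount(0, 3) === 0;
}
```

Both tests pass against this implementation, but only the spec-derived test would catch a bug such as using `Math.floor` instead of `Math.ceil`.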
Custom Lenses
Clients add their own Review Lenses via configuration. Each custom lens specifies a prompt file and activation conditions:
```yaml
custom_lenses:
  - name: "Banking Compliance"
    prompt_file: "lenses/banking-compliance.md"
    active_when: "paths match src/transactions/**"
  - name: "GDPR Data Handling"
    prompt_file: "lenses/gdpr.md"
    active_when: "paths match src/user/** OR src/analytics/**"
```

Output per Lens
Each lens produces structured findings in a consistent JSON schema:
```json
{
  "lens": "security",
  "findings": [
    {
      "severity": "HIGH",
      "file": "src/api/auth.ts",
      "line": 42,
      "category": "SQL Injection",
      "description": "User input interpolated directly into query",
      "suggestion": "Use parameterized query: db.query('SELECT ...', [userId])",
      "confidence": 0.95
    }
  ]
}
```

Findings from all lenses are aggregated by the Reporter into a single structured PR comment with inline annotations.
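A sketch of the aggregation step. The Reporter's real merging rules are not specified here; the confidence threshold, the severity values, and the sort order below are assumptions for illustration.

```typescript
interface Finding {
  severity: "LOW" | "MEDIUM" | "HIGH"; // assumed severity scale
  file: string;
  line: number;
  category: string;
  description: string;
  suggestion: string;
  confidence: number;
}

interface LensOutput {
  lens: string;
  findings: Finding[];
}

// Hypothetical Reporter step: flatten all lens outputs into one list,
// drop low-confidence findings, and order highest severity first.
// The 0.5 threshold is an assumption, not a documented value.
function aggregate(
  outputs: LensOutput[],
  minConfidence = 0.5
): Array<Finding & { lens: string }> {
  const rank = { HIGH: 0, MEDIUM: 1, LOW: 2 };
  return outputs
    .flatMap((o) => o.findings.map((f) => ({ ...f, lens: o.lens })))
    .filter((f) => f.confidence >= minConfidence)
    .sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```

Each entry keeps its originating lens name, so the Reporter can group inline annotations by lens in the final PR comment.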