You've seen bots on PRs before. They post 40 comments, half are wrong, and you ignore all of them by the second week. VCR is built to be the opposite of that. Here's what actually changes in your day-to-day.
What you'll see on your PRs
VCR posts one summary comment per PR. Not fifteen. Not one per file. One comment, at the top, with everything you need.
Here's what it looks like:
```
Visdom Code Review | Risk: HIGH

Blocking (must fix):
1. CRITICAL | Security | auth.ts:42 | SQL injection via unsanitized input
2. HIGH | Testing | No tests for new /api/reset endpoint

Recommendations (should fix):
3. MEDIUM | Performance | users.ts:88 | N+1 query in loop
4. MEDIUM | Circular Test | auth.test.ts | Tests mirror implementation

Suggestions (nice to have):
Extract validation logic to shared utility (auth.ts:30-45)

Reviewer Guidance: Focus on auth.ts (security) and test coverage gap.
Suggested reviewer: @senior-krakow (top expertise for this module)
```
The limits are hard-coded so VCR can't flood your PR:
- Max 5 quick findings at the summary level (Layer 2)
- Max 15 inline comments on specific lines (Layer 3, deep review)
- If nothing important is found, you see this and nothing else:
✅ Silence is the default
Most PRs are fine. VCR is designed to stay quiet when there's nothing worth saying. If you're not hearing from it, that's working as intended.
How the risk levels work
Every PR gets a risk level. This determines how deep VCR looks and how much of your (and your reviewer's) time it asks for.
| Risk | What triggers it | What happens |
|---|---|---|
| LOW | Small change, safe paths, tests pass | Fast scan only, done in ~2 min. No deep review. |
| MEDIUM | Sensitive path or coverage drop | Deep review kicks in. More thorough analysis. |
| HIGH | Critical path (auth, payments, infra) | Full multi-lens analysis. ~10 min. |
| CRITICAL | HIGH triggers plus AI-generated code on a critical path | Full analysis + your senior gets pinged directly. |
The mental model is simple: touch auth and expect HIGH (CRITICAL if the code is AI-generated). Touch docs and expect LOW. That's it.
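The table above reads as a decision ladder, most severe trigger first. Here's a sketch of that mental model; the function and field names (`classifyRisk`, `ChangeInfo`, the trigger flags) are invented for illustration and are not VCR's actual configuration or API:

```typescript
// Illustrative sketch of the risk ladder — names are hypothetical,
// not VCR's real internals.
type RiskLevel = "LOW" | "MEDIUM" | "HIGH" | "CRITICAL";

interface ChangeInfo {
  touchesCriticalPath: boolean;  // auth, payments, infra
  touchesSensitivePath: boolean; // sensitive but not critical
  coverageDropped: boolean;      // test coverage fell on this PR
  aiGenerated: boolean;          // AI-generated code detected
}

function classifyRisk(change: ChangeInfo): RiskLevel {
  // Most specific condition first: critical path + AI-generated code.
  if (change.touchesCriticalPath && change.aiGenerated) return "CRITICAL";
  if (change.touchesCriticalPath) return "HIGH";
  if (change.touchesSensitivePath || change.coverageDropped) return "MEDIUM";
  return "LOW"; // small change, safe paths, tests pass
}
```

So a Copilot-assisted change to auth lands on CRITICAL, while a docs-only change stays LOW.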
What VCR catches that you might miss
These aren't theoretical. These are the patterns that slip through every day in teams that ship AI-assisted code.
Circular tests
Your test passes, but it's testing that the code does what it does, not what it should do. Copilot generated both the code and the test, so the test is just a mirror of the implementation. VCR flags this pattern explicitly: "This test verifies implementation, not specification."
Hallucinated APIs
Copilot used crypto.subtle.timingSafeEqual, but you're on Node 18, where that API doesn't exist in that form. The code looks right. The types might even check out. But it'll crash at runtime. VCR's AI-Code Safety lens catches these because it checks what actually exists in your runtime environment.
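The real API lives on the top-level `crypto` module, not on `crypto.subtle`. A working version of that comparison on Node 18, using the built-in `node:crypto` (the `tokensMatch` wrapper is our own illustrative helper):

```typescript
import { timingSafeEqual } from "node:crypto";

// ❌ Hallucinated — on Node 18, crypto.subtle has no timingSafeEqual,
// so this line would crash at runtime:
// const ok = crypto.subtle.timingSafeEqual(a, b);

// ✅ Real API: the top-level crypto.timingSafeEqual compares two
// equal-length buffers in constant time.
function tokensMatch(expected: string, received: string): boolean {
  const a = Buffer.from(expected);
  const b = Buffer.from(received);
  // timingSafeEqual throws if the lengths differ, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The length guard matters: `timingSafeEqual` refuses buffers of different sizes rather than silently returning false.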
Convention drift
You used a Factory pattern, but this module uses direct instantiation everywhere else. It's not wrong, but it'll confuse the next person (and the next agent) who works here. VCR flags it so you can decide. Maybe the Factory is better and the module should migrate, or maybe you should match the existing style.
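In miniature, with invented names (`EmailSender` is hypothetical), the drift looks like this:

```typescript
// Hypothetical module — every other file here instantiates directly.
class EmailSender {
  host: string;
  constructor(host: string) {
    this.host = host;
  }
  describe(): string {
    return `smtp via ${this.host}`;
  }
}

// ❌ Drift: one PR introduces a Factory layer the module doesn't use
// anywhere else. Not wrong — just inconsistent.
class EmailSenderFactory {
  create(host: string): EmailSender {
    return new EmailSender(host);
  }
}

// ✅ Existing convention: direct instantiation.
const sender = new EmailSender("smtp.example.com");
```

VCR's flag is a prompt to pick one style deliberately, not a verdict on which is better.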
Over-engineering
Three wrapper classes for one function? AI code generators love building abstractions nobody asked for. VCR spots unnecessary complexity and tells you: "This could be a single function call."
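A toy version of the three-wrappers smell, with names invented for this example:

```typescript
// ❌ Hypothetical over-engineered version: three classes to trim a name.
class NameFormatter {
  format(name: string): string {
    return name.trim();
  }
}
class FormatterProvider {
  get(): NameFormatter {
    return new NameFormatter();
  }
}
class FormattingService {
  run(name: string): string {
    return new FormatterProvider().get().format(name);
  }
}

// ✅ "This could be a single function call."
const formatName = (name: string): string => name.trim();
```

Both do exactly the same work; the second one is the whole abstraction the task needed.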
💡 It's not a linter
VCR doesn't care about your semicolons, import order, or variable names. That's what ESLint, Prettier, and your IDE are for. VCR looks at the stuff that actually causes incidents.
How to give feedback
VCR learns from your reactions. Every finding has reaction buttons on the PR comment. Use them. It takes one click and directly shapes what VCR comments on next time.
| Reaction | Meaning | What happens |
|---|---|---|
| 👍 | Helpful, you fixed it | VCR gains confidence in this category for your codebase |
| 👎 | False positive, not relevant | VCR reduces weight for this pattern in your context |
| 🤔 | Not sure, needs discussion | Flagged for team review, helps calibrate edge cases |
✅ Your feedback matters
Thumbs-down a bad finding and VCR learns. The more your team reacts, the fewer false positives you'll see over time. This is how VCR avoids becoming another bot you ignore.
What VCR does NOT do
Let's be clear about the boundaries so you don't have wrong expectations.
- Won't auto-fix your code. v1 reports findings. You decide what to do. (Auto-fix is planned for v2.)
- Won't replace human review. Your senior still approves the PR. VCR tells them where to look, not what to decide.
- Won't block your PR unless your team explicitly configures it to. By default, VCR advises.
- Won't comment on formatting, naming, or import order. That's what your linter is for. VCR focuses on things that cause actual problems.
For senior developers
If you're the person who approves PRs and sets standards for your team, here's what changes for you specifically.
You'll review fewer PRs, but with better context. VCR pre-annotates every PR with risk level, specific findings, and exactly which files need your attention. Instead of reading every line of a 400-line diff, you focus on the 3 files that matter.
VCR tells you WHERE to look, not what to decide. It's a guide, not a replacement. "Focus your review on auth.ts (security findings) and test coverage gap." That's the kind of guidance you get.
Customization you'll care about
- Custom Review Lenses: add domain-specific review categories for your modules. If your team has specific patterns for database migrations or event sourcing, you can teach VCR to check for them.
- Conventions file: add your team's standards to .vcr/conventions.md and VCR reads them. "We use direct instantiation, not Factory pattern." That kind of thing. VCR will flag deviations.
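An illustrative conventions file might look like this. The wording and section headings are made up for this example; check the Configuration Reference for the format VCR actually expects:

```markdown
# Team conventions

## Instantiation
We use direct instantiation, not the Factory pattern, in service modules.

## Tests
Tests assert specified behavior, not implementation details.
Every new API endpoint ships with at least one test.
```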
Full details on lenses and configuration: Configuration Reference
What developers are saying
> "VCR taught me about circular tests. I didn't know my tests were just mirrors of my implementation. They passed, so I thought they were fine. Now I actually write tests that check behavior, not just that the code runs."

> "I review 40% fewer PRs but catch more real issues. The AI-Code Safety lens caught a hallucinated API last week that I would have missed. It looked completely correct at first glance. VCR told me exactly which line to look at and why."