VCR defines required capabilities, not required tools. The following reference implementations are provided by VirtusLab for pilot deployments. Clients may substitute equivalent tooling that satisfies the same capability contract.
Component Reference
| Component | Required Capability | Reference Implementation | Alternatives |
|---|---|---|---|
| Repository knowledge layer | Pre-indexed ownership, dependencies, history, expertise | ViDIA (VirtusLab, MIT) | Sourcegraph code intelligence, custom DuckDB/SQLite over git log, GitHub CODEOWNERS + scripts |
| CI infrastructure | Sub-2-minute feedback loops for agent iteration | Visdom Machine-Speed CI | Bazel + EngFlow, Nx, Turborepo, Gradle remote cache |
| SAST | Static security analysis | Semgrep (open source) | CodeQL, SonarQube, Snyk Code |
| Secret scanning | Detect leaked credentials | gitleaks (open source) | truffleHog, GitHub secret scanning |
| AI provider | LLM API access | Anthropic (Claude Haiku/Sonnet/Opus) | OpenAI GPT-4o, Azure OpenAI, Google Gemini |
| CI/CD platform | Workflow execution | GitHub Actions | GitLab CI, Azure Pipelines, Jenkins |
| Code hosting | PR integration | GitHub | GitLab, Bitbucket, Azure DevOps |
| Governance / audit trail | Change audit, auto-evaluation, compliance | TraceVault (VirtusLab) | Custom audit logging, SBOM tooling |
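The "custom DuckDB/SQLite over git log" alternative in the first row can be sketched in a few lines. This is an illustrative sketch only: the sample log, the table schema, and the churn-based ownership heuristic are assumptions for demonstration, not part of any reference implementation.

```python
import sqlite3

# Sample output in the shape of `git log --format='%H|%ae' --numstat`
# (hypothetical repository, two commits).
LOG = """\
a1b2c3|alice@example.com
10\t2\tsrc/core.py
3\t0\tsrc/util.py
d4e5f6|bob@example.com
1\t1\tsrc/core.py
"""

def load_ownership(log_text):
    """Parse numstat-style output into an in-memory SQLite table of (author, path, churn)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE changes (author TEXT, path TEXT, churn INTEGER)")
    author = None
    for line in log_text.splitlines():
        if "|" in line and "\t" not in line:
            author = line.split("|", 1)[1]          # commit header line
        elif "\t" in line:
            added, deleted, path = line.split("\t")  # numstat line
            db.execute("INSERT INTO changes VALUES (?, ?, ?)",
                       (author, path, int(added) + int(deleted)))
    return db

def top_owner(db, path):
    """Author with the most churn on a path: a crude ownership signal."""
    row = db.execute(
        "SELECT author FROM changes WHERE path = ? "
        "GROUP BY author ORDER BY SUM(churn) DESC LIMIT 1", (path,)).fetchone()
    return row[0] if row else None

db = load_ownership(LOG)
print(top_owner(db, "src/core.py"))  # alice (12 churn) outranks bob (2)
```

A real deployment would feed the full `git log --numstat` stream and add tables for CODEOWNERS and PR metadata; the capability contract only requires that the queries above be answerable, not this particular schema.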
VirtusLab Reference Implementations
📦 ViDIA: Repository Knowledge Layer
ViDIA (VirtusLab, MIT license) provides DuckDB analytics over git history, dependency graphs, and PR discussions, served via MCP tools or a CLI. Artifacts are pinned by SHA256 and reusable across sessions. This is the primary data source for Layer 0 context collection.
📦 Visdom Machine-Speed CI
Visdom provides remote caching, incremental builds, and test impact analysis designed for sub-2-minute feedback loops. It enables agent-driven development where each iteration gets a fast CI signal, keeping ITS low and CPI manageable.
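The core of test impact analysis can be sketched as a lookup in a reverse dependency map: changed files select only the tests that exercise them. The map and file names below are illustrative, not Visdom's actual data model or API.

```python
# Reverse dependency map: source file -> test targets that exercise it.
# In practice this is derived from build-graph or coverage data.
REVERSE_DEPS = {
    "src/auth.py": {"tests/test_auth.py", "tests/test_session.py"},
    "src/billing.py": {"tests/test_billing.py"},
    "README.md": set(),  # docs-only changes trigger no tests
}

def impacted_tests(changed_files):
    """Union of test targets reachable from the changed files."""
    selected = set()
    for path in changed_files:
        selected |= REVERSE_DEPS.get(path, set())
    return sorted(selected)

print(impacted_tests(["src/auth.py", "README.md"]))
# ['tests/test_auth.py', 'tests/test_session.py']
```

Running only the impacted subset, combined with remote caching for everything unchanged, is what makes the sub-2-minute target plausible on large repositories.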
📦 TraceVault: Governance and Audit Trail
TraceVault provides the governance layer for AI-assisted development: auto-evaluation, audit trail, and EU AI Act 2026 compliance. VCR findings feed into TraceVault as part of the change audit record. Together they answer: "should this code be in production?" and "can we prove it?"
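To make "VCR findings feed into TraceVault" concrete, here is a hypothetical shape for a change-audit record. Every field name here is an assumption for illustration; the actual TraceVault schema is not specified in this document.

```python
import json
import datetime
from dataclasses import dataclass, field, asdict

# Hypothetical audit record a VCR run could emit toward a governance layer.
# Field names are illustrative only, not the TraceVault schema.
@dataclass
class AuditRecord:
    pr_number: int
    commit_sha: str
    findings: list   # e.g. [{"layer": 1, "rule": "...", "severity": "..."}]
    model: str       # which LLM produced the review, for reproducibility
    recorded_at: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat())

record = AuditRecord(
    pr_number=1234,
    commit_sha="deadbeef",
    findings=[{"layer": 1, "rule": "sql-injection", "severity": "high"}],
    model="claude-sonnet",
)
print(json.dumps(asdict(record), indent=2))
```

The point of the structure is the two questions above: the findings answer "should this code be in production?", and the timestamped, model-attributed record answers "can we prove it?".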
📦 VCR GitHub Action
virtuslab/vcr-action@v1 is the reference CI integration. It handles checkout, context collection, layer execution, and reporter output in a single GitHub Actions step. See Configuration for the full workflow YAML.
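A minimal workflow sketch follows. Only the action name `virtuslab/vcr-action@v1` comes from this document; the trigger, job layout, and input name are assumptions, so treat the Configuration section as authoritative.

```yaml
# Sketch only: the `anthropic-api-key` input name is assumed, not documented.
name: vcr-review
on: pull_request
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: virtuslab/vcr-action@v1
        with:
          anthropic-api-key: ${{ secrets.ANTHROPIC_API_KEY }}
```

No explicit checkout step is shown because, per the description above, the action performs checkout itself.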
Out of Scope (v1)
The following capabilities are intentionally excluded from v1 to maintain focus and deliverability:
- Auto-fix: VCR v1 reports findings; it does not auto-fix code. Planned for v2: an agent applies suggested fixes and VCR re-reviews.
- Fine-tuning: v1 uses off-the-shelf models with prompt engineering. Fine-tuning on client data is a v2 consideration.
- Multi-repo: v1 targets a single-repo setup. Monorepo and multi-repo orchestration are v2.
- GitLab/Azure DevOps: the v1 reference implementation is GitHub-only. The process is portable; adapters for other platforms follow.
- Self-hosted LLM: v1 uses cloud API providers. Self-hosted/air-gapped deployment is an enterprise v2 feature.
Open Questions
⚠️ Active Discussion
These questions are under active discussion for the v1 release.
- AI-generated flag source: How to reliably detect AI-generated code? PR labels? Git metadata? Heuristic detection? Need to define the primary signal.
- Feedback loop automation: At what maturity level should developer reactions automatically adjust prompts, rather than relying on manual tuning by VirtusLab?
- TORS bootstrap: New clients have no test reliability history. What's the cold-start strategy? Run 30 days of data collection before enabling TORS filtering?
- Reviewer assignment: Should VCR assign reviewers based on expertise data, or only suggest? Organizational politics may make auto-assignment problematic.
- PR blocking policy: Should VCR ever block a PR merge (via a GitHub Check), or only advise? Who decides the policy: VCR config or an org-level setting?
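The advise-versus-block question can be framed as a pure function over findings and a policy flag. The severity threshold and the `blocking` flag below are assumptions; where that flag comes from (VCR config or an org-level setting) is exactly the open question above. The GitHub Checks behavior referenced in the comments is that a "neutral" conclusion does not block merging while "failure" does.

```python
def check_conclusion(findings, blocking):
    """Map VCR findings to a GitHub Check conclusion.

    Advisory mode reports high-severity findings as "neutral" (visible but
    non-blocking); blocking mode reports them as "failure" (blocks merge
    when the check is required).
    """
    has_high = any(f["severity"] == "high" for f in findings)
    if not has_high:
        return "success"
    return "failure" if blocking else "neutral"

print(check_conclusion([{"severity": "high"}], blocking=False))  # neutral
```

Keeping the decision in one small function like this would let the policy source change (config file, org setting, per-repo override) without touching the reporting path.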