
Show HN: adamsreview – better multi-agent PR reviews for Claude Code

I built adamsreview, a Claude Code plugin that runs deeper, multi-stage PR reviews using parallel sub-agents, validation passes, persistent JSON state, and optional ensemble review via Codex CLI and PR bot comments. On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives. adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully. You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts for keeping it updated. The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings, or items needing human review, one by one.

Anthropic’s Claude Code now supports adamsreview, a third-party plugin that extends the built-in /review command into a six-stage pipeline for deeper, multi-agent pull-request (PR) reviews. The tool uses parallel sub-agents, persistent JSON state, and automated fix loops to catch more bugs while reducing false positives compared to Claude’s native /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review tools. It runs on a standard Claude Code Max subscription and, unlike /ultrareview, does not consume Extra Usage tokens.

Overview

adamsreview is a Claude Code plugin that extends the built-in /review command into a structured workflow with six slash commands:

  1. /adamsreview:review – Multi-lens code review of a branch or PR. Up to seven parallel sub-agents (correctness, security, UX, etc.) feed into a dedup pass, validation gates, and an optional cross-cutting Opus pass. High-confidence fixes are pre-computed for batch acceptance. The --ensemble flag adds a Codex CLI pass and PR bot-comment scrape.
  2. /adamsreview:codex-review – A Codex CLI-driven peer review with tunable effort (low|medium|high|xhigh). Outputs the same artifact format as :review, making it drop-in compatible with downstream commands.
  3. /adamsreview:add – Injects external findings (e.g., from a cloud /ultrareview, Opus session, or teammate notes) into the current review artifact. Findings are deduped, validated, and republished to the PR comment.
  4. /adamsreview:walkthrough – Interactive session for findings that :fix would skip. Uses Claude’s AskUserQuestion UI to step through uncertain or human-judgment items, with batch acceptance for pre-computed fixes.
  5. /adamsreview:fix – Automated fix loop. Dispatches sub-agents in parallel, re-reviews the changes with Opus, reverts regressions, and commits the survivors (one combined commit by default; --granular-commits for per-group commits).
  6. /adamsreview:promote – Overrides the auto-fix eligibility filter for a single finding, bypassing score thresholds.
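The eligibility filter that /adamsreview:promote bypasses can be pictured as a simple score threshold with a per-finding override. The sketch below is a hypothetical illustration: the 0.8 threshold and the `score`/`promoted` field names are assumptions, not the plugin's real values.

```python
# Hypothetical auto-fix eligibility filter. A finding passes if its
# confidence score clears the threshold, or if it was explicitly
# promoted (the override /adamsreview:promote applies).
def auto_fix_eligible(finding: dict, threshold: float = 0.8) -> bool:
    return finding.get("promoted", False) or finding.get("score", 0.0) >= threshold

def promote(finding: dict) -> dict:
    # Return a copy with the override flag set, leaving the score intact.
    return {**finding, "promoted": True}
```

A promoted finding keeps its original score; only the filter decision changes, so downstream reporting can still show that it was below the threshold.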

Review state persists in JSON artifacts under ~/.adams-reviews/<repo-slug>/<branch>/, allowing steps to be run days or weeks apart. The plugin auto-adds its bin/ directory to $PATH, eliminating the need for symlinks or manual installs.

How it works

Parallel sub-agents and validation

The :review command dispatches up to seven specialized sub-agents (e.g., correctness, security, UX) in parallel. Findings are deduped, then validated through the pipeline's validation gates before the optional cross-cutting Opus pass; high-confidence fixes are pre-computed so they can be accepted as a batch.
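The fan-out and dedup shape described above can be sketched in a few lines. In this hypothetical illustration, the lens names come from the article, but the `run_lens` stub and the (file, line) dedup key are placeholders standing in for real sub-agent calls and the plugin's actual dedup logic.

```python
# Minimal sketch of parallel review lenses feeding a dedup pass.
from concurrent.futures import ThreadPoolExecutor

LENSES = ["correctness", "security", "ux"]

def run_lens(lens: str, diff: str) -> list[dict]:
    # Placeholder: a real sub-agent would review `diff` through this lens.
    return [{"lens": lens, "file": "app.py", "line": 10, "msg": f"{lens} issue"}]

def review(diff: str) -> list[dict]:
    # Fan out one worker per lens; pool.map preserves lens order.
    with ThreadPoolExecutor(max_workers=len(LENSES)) as pool:
        raw = [f for fs in pool.map(lambda l: run_lens(l, diff), LENSES) for f in fs]
    # Dedup: keep the first finding reported for each (file, line),
    # so overlapping lenses don't produce duplicate reports.
    seen, unique = set(), []
    for f in raw:
        key = (f["file"], f["line"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique
```

In this stub all three lenses flag the same location, so the dedup pass collapses them to a single finding; a validation gate would then score each survivor before it reaches the Opus pass.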
