Review Capacity Report
Last updated: March 30, 2026
Overview
The Review Capacity Report helps engineering leaders understand how pull request (PR) review work is distributed across teams and individuals.
As AI tools accelerate code generation, many organizations are experiencing a new bottleneck in the code review process. AI can produce more—and often larger—PRs at a much faster pace. While automation and review bots help, human reviewers still carry the responsibility of validating code quality and correctness.
This report focuses on the human side of the review process: how review work is actually distributed across your organization and whether that distribution is sustainable. By surfacing imbalances, cross-team dependencies, and concentration risk, the report helps you proactively address review bottlenecks before they slow down delivery.
Where to find this report: Insights → Quality → Review Capacity
Key Questions This Report Answers
Is review work spread evenly across my team, or concentrated among a few reviewers?
How much of my team’s review capacity is spent reviewing other teams’ PRs?
Which team pairings create the most cross-team review demand?
If a key reviewer becomes unavailable, how much would review throughput be impacted?
Are review workloads scaling sustainably as development velocity increases?
Filters and Controls
At the top of the report, you can configure the scope of analysis.
| Control | Description |
| --- | --- |
| Date Range | Select any custom time window to analyze review activity. |
| Team / Group Filter | Scope the report to a specific team, group, or the entire organization. |
| Group teams by | Choose the hierarchy level to aggregate teams (for example Squads, Level 1, or Level 2). Defaults to Squads. |
Metric Definitions
Review Weight (per team member): A Span synthetic metric that estimates the amount of effort a team member spent reviewing a PR. It accounts for the depth of that person's involvement: every review pass they made on the PR, their comments weighted by actionability (nitpick vs. substantive), and the PR's complexity.
Total Review Weight: The sum of PR Review Weights for a given org, team, or engineer over the filtered timeframe.
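The exact weighting formula is not published in this report. The following is a minimal illustrative sketch of how a per-PR review weight and a total could be aggregated; the factor names and coefficients are hypothetical assumptions, not Span's actual calculation.

```python
# Illustrative sketch only: the factors and weights below are hypothetical
# assumptions, not the product's actual Review Weight formula.
from dataclasses import dataclass


@dataclass
class Review:
    reviewer: str
    pr_complexity: float          # normalized PR size/complexity score (assumed)
    substantive_comments: int     # actionable comments
    nitpick_comments: int         # low-actionability comments
    review_passes: int            # how many times the reviewer re-reviewed the PR


def review_weight(r: Review) -> float:
    """Hypothetical per-PR, per-reviewer effort estimate."""
    comment_effort = r.substantive_comments * 1.0 + r.nitpick_comments * 0.25
    return r.pr_complexity * r.review_passes + comment_effort


def total_review_weight(reviews: list[Review]) -> float:
    """Sum of review weights over the filtered timeframe (org, team, or engineer)."""
    return sum(review_weight(r) for r in reviews)
```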
Load Balance: Reflects how evenly review work is distributed across reviewers within a team. A score of 100 means review work is perfectly evenly distributed across team members.
Color code:
0-25: Red
25-50: Yellow
50+: Green
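The report does not publish the exact Load Balance formula. The sketch below shows one common way to score distribution evenness on a 0 to 100 scale (a Gini-style measure); treat it as an illustrative assumption rather than the product's actual calculation.

```python
def load_balance(weights_per_reviewer: list[float]) -> float:
    """Hypothetical 0-100 evenness score: 100 = perfectly even distribution.

    Uses a Gini-coefficient-style measure as an illustration; the actual
    Load Balance calculation may differ.
    """
    n = len(weights_per_reviewer)
    total = sum(weights_per_reviewer)
    if n <= 1 or total == 0:
        return 100.0
    # Mean absolute difference of review weight between every pair of reviewers.
    mad = sum(abs(a - b) for a in weights_per_reviewer for b in weights_per_reviewer) / (n * n)
    gini = mad / (2 * total / n)  # 0 = perfectly even, approaches 1 = fully concentrated
    return round((1 - gini) * 100, 1)


print(load_balance([40.0, 5.0, 5.0]))    # concentrated on one reviewer -> lower score
print(load_balance([17.0, 16.0, 17.0]))  # evenly spread -> close to 100
```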
Intra Team %: The percentage of a team's Total Review Weight that comes from reviewing PRs authored by its own members.
Cross Team %: The percentage of a team's Total Review Weight that comes from reviewing PRs authored by other teams.
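For clarity, here is a small sketch of how the two percentages relate, assuming each counted review weight has already been classified as intra-team or cross-team:

```python
def intra_cross_split(intra_weight: float, cross_weight: float) -> tuple[float, float]:
    """Return (Intra Team %, Cross Team %) of a team's Total Review Weight."""
    total = intra_weight + cross_weight
    if total == 0:
        return 0.0, 0.0
    return 100 * intra_weight / total, 100 * cross_weight / total


# Example: a team spends 30 weight on its own PRs and 10 on other teams' PRs.
print(intra_cross_split(30.0, 10.0))  # -> (75.0, 25.0)
```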
Note on Members Who Are in Multiple Teams
Multi-group membership affects review totals and ratios. Two examples:
Deflation: Reviewer X in A+B reviews author Y in A+C. Only the A→A pair is classified as intra-team; the B→C pair is dropped — missing from team B's totals and X's totals within team B.
Inflation: Reviewer X in A+B reviews author Y in C+D. Four pairs are generated (A→C, A→D, B→C, B→D), each counting the full review weight. So the same review is counted 4× across teams instead of once.
Net effect: a review successfully classified as intra has a lower impact on all numbers since it produces fewer counted pairs, while cross-team reviews between multi-group members get amplified.
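The deflation and inflation behavior can be illustrated with a small sketch of the pair-generation step. The team names and the keep-intra/drop-cross rule are taken from the two examples above; the rest is a simplifying assumption, not the product's actual implementation.

```python
def classify_pairs(reviewer_teams: set[str], author_teams: set[str]) -> list[tuple[str, str, str]]:
    """Generate (reviewer_team, author_team, kind) pairs for one review.

    Mirrors the behavior described above (as an assumption): if the reviewer
    and author share a team, only the shared (intra) pairs are kept; otherwise
    every cross-team combination is generated, each carrying the full weight.
    """
    shared = reviewer_teams & author_teams
    if shared:
        # Deflation: only the intra pairs survive; cross pairs are dropped.
        return [(t, t, "intra") for t in sorted(shared)]
    # Inflation: every reviewer-team x author-team combination is counted.
    return [(rt, at, "cross") for rt in sorted(reviewer_teams) for at in sorted(author_teams)]


# Deflation example: X in A+B reviews Y in A+C -> only the A->A pair remains.
print(classify_pairs({"A", "B"}, {"A", "C"}))
# Inflation example: X in A+B reviews Y in C+D -> four cross pairs, each at full weight.
print(classify_pairs({"A", "B"}, {"C", "D"}))
```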
Common Use Cases
Identifying Review Bottlenecks
Low Load Balance Scores indicate that review work is concentrated on a small number of reviewers. Use the Team Detail Drawer to see which engineers carry the majority of the load.
Tracking Improvements Over Time
Use the Load Balance trend chart to monitor whether changes—such as rotating reviewers or expanding CODEOWNERS coverage—are improving distribution.