Unique PRs Reviewed / Week

Last updated: January 7, 2026

Overview

The Unique PRs Reviewed / Week report measures the number of unique pull requests reviewed per week per active contributor. This metric helps teams understand code review participation, workload distribution, and engagement across the team.

What Does This Metric Measure?

Definition

Unique PRs Reviewed / Week counts the distinct pull requests that each person reviews during their active working weeks.

Key Characteristics:

  • Unique count: Each PR is counted only once per reviewer, even if reviewed multiple times

  • Per active contributor: Normalized per person for fair comparison

  • Adjusted for time off: Accounts for out-of-office days to provide accurate averages

  • Weekly basis: Results expressed as PRs per week

What It Tells You

This metric indicates:

  •  Review participation breadth — How many different PRs someone engages with

  •  Workload distribution — Whether review load is balanced across the team

  •  Code review culture — Level of peer review engagement

  •  Knowledge sharing — Breadth of codebase exposure through reviews

What It Doesn't Tell You

This metric does NOT measure:

  •  Quality of reviews — Use "Review Depth" or "Comments Authored" for quality

  •  Speed of reviews — Use "Review Response Time" for speed

  •  Approval/rejection rate — Use "PR Approval Rate" for outcomes

  •  Time spent reviewing — Duration not captured

How It's Calculated

Formula

Unique PRs Reviewed / Week = (Total Unique PRs Reviewed / Total Active Coding Days) × 7

Components Breakdown

1. Numerator: Total Unique PRs Reviewed

  • Counts DISTINCT pull requests reviewed by the contributor

  • Each PR counted once, regardless of how many times reviewed

  • Includes all review types (approval, changes requested, comments)

2. Denominator: Total Active Coding Days

  • A person is "active" on a day if:

    • They had VCS activity in the last 30 days

    • They were NOT marked as out of office

  • Only these days count in the calculation

3. Weekly Scaling (× 7)

  • Converts daily average to weekly metric

Example Calculation

Scenario:

  • Person reviewed 12 unique PRs over 2 weeks

  • They worked 8 days (took 2 days OOO)

Calculation:

(12 unique PRs / 8 active days) × 7 = 10.5 PRs reviewed per week
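
For reference, the formula can be expressed directly in code. The snippet below is a minimal Python sketch that reproduces the worked example; the function name and inputs are illustrative, not Span's actual implementation.

```python
def unique_prs_reviewed_per_week(reviewed_pr_ids: set, active_days: int) -> float:
    """(Total unique PRs reviewed / total active coding days) x 7."""
    if active_days == 0:
        return 0.0  # no active days in the window, so nothing to normalize
    return (len(reviewed_pr_ids) / active_days) * 7

# Worked example: 12 unique PRs reviewed over 8 active days
print(unique_prs_reviewed_per_week({f"PR-{n}" for n in range(12)}, active_days=8))  # 10.5
```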

Data Filtering

Included:

  • All reviews on active repositories

  • Reviews from active contributors (committed in last 30 days)

  • All review types (approval, comments, changes requested)

Excluded:

  • Self-reviews (reviewing your own PR)

  • Reviews from inactive contributors

  • Repositories marked as ignored in settings

  • Branches marked as ignored

  • Days marked as OOO (these rules are sketched in code below)
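
Taken together, these rules act as a filter over individual review events. The sketch below is a hypothetical Python illustration; the field and parameter names are assumptions for this example and do not reflect Span's internal data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReviewEvent:
    # Hypothetical fields used only for this illustration
    pr_id: str
    pr_author: str
    reviewer: str
    repo: str
    branch: str
    day: date

def counts_toward_metric(event: ReviewEvent,
                         active_contributors: set,
                         ignored_repos: set,
                         ignored_branches: set,
                         ooo_days: set) -> bool:
    """Apply the inclusion/exclusion rules listed above to a single review event."""
    if event.reviewer == event.pr_author:          # self-reviews are excluded
        return False
    if event.reviewer not in active_contributors:  # inactive contributors are excluded
        return False
    if event.repo in ignored_repos:                # ignored repositories are excluded
        return False
    if event.branch in ignored_branches:           # ignored branches are excluded
        return False
    if event.day in ooo_days:                      # OOO days are excluded
        return False
    return True                                    # everything else counts
```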

Accessing the Report

Navigation Path

Primary Route:

  1. Log in to Span

  2. Under "Insights", click "Productivity" in the main navigation

  3. Select the "Code Review" tab

  4. Find the "Unique PRs Reviewed / Week" metric

Alternative Access:

  • Team Pages: View each team member's review activity

  • Individual Pages: See person-specific review metrics

  • Metric Explorer: Search for "unique PRs reviewed"

Related Metrics Displayed

Often shown alongside:

  • Total PR Reviews / Week: Total review count (includes repeat reviews)

  • Review Response Time: How quickly reviews are started

  • Comments Authored / Week: Overall review engagement

  • Review Depth: Average comments per review

Available Filters

Dimension Filters

  • Person: View a specific individual's review activity

  • Team/Group: Aggregate by organizational unit

  • Repository: Filter to specific codebases

  • Integration Platform: Separate GitHub, GitLab, and Azure DevOps

  • IC Level: Compare by career level

  • Job Family: Compare across functions (Engineering, QA, etc.)

  • Location: Geographic analysis

Time Filters

  • Last 7 days

  • Last 2 weeks (default)

  • Last 4 weeks

  • Last 3 months

  • Last 6 months

  • Custom date range

Combined Filtering

Stack filters for targeted analysis:

  • Example: "Show Backend team's review activity for the last month, excluding OOO days"

Interpreting the Data

Benchmark Ranges

  • 4-8 PRs/week: Strong. Active reviewer, good engagement.

  • 2-4 PRs/week: Healthy. Typical for most teams.

  • 1-2 PRs/week: Moderate. May indicate a specialist role or a focus on implementation.

  • < 1 PR/week: Low. Consider follow-up; potential bottleneck or disengagement.

  • > 10 PRs/week: Very High. Senior/platform roles or a small team; check for overload.

Percentile Interpretation

  • Top 10% (90th percentile and above): Exceptional reviewing breadth; likely a senior reviewer or a reviewer on a small team.

  • 50-75%: Above-average participation.

  • 40-60%: Average/typical for the team.

  • 20-40%: Below average; may reflect an early-career or specialized role.

  • Bottom 20% (below the 20th percentile): Limited review participation; check whether this is appropriate for the role.

Context Matters

Role-Based Expectations:

Senior Engineers / Tech Leads

  • Expected: 5-10+ PRs/week

  • Reason: Architectural oversight, mentoring responsibilities

Mid-Level Engineers

  • Expected: 3-6 PRs/week

  • Reason: Balance between implementation and review

Junior Engineers

  • Expected: 1-4 PRs/week

  • Reason: Learning phase; gradually increasing review load

Platform/Infrastructure Teams

  • Expected: 8-15+ PRs/week

  • Reason: Broader review scope across multiple projects

Specialist Roles

  • Expected: 1-3 PRs/week

  • Reason: Narrow domain focus

Week-to-Week Variability

Normal Fluctuations:

  • Sprint cycles (higher at sprint start, lower at end)

  • Release periods (often lower during code freeze)

  • Focus time (developers may batch reviews or implementation work)

  • On-call rotations (may reduce review capacity)

Consider 4-week rolling averages for trend analysis rather than single-week snapshots.
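
A trailing 4-week average is one simple way to do this. The sketch below is illustrative Python, assuming you have the weekly values as a plain list:

```python
def rolling_average(weekly_values: list, window: int = 4) -> list:
    """Trailing rolling average of weekly 'Unique PRs Reviewed' values."""
    return [
        sum(weekly_values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(weekly_values))
    ]

# A one-week dip smooths out in the trend line
weeks = [4.0, 5.0, 1.0, 4.5, 5.5, 4.0]
print(rolling_average(weeks))  # [3.625, 4.0, 3.75]
```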

Use Cases

1. Code Review Load Balancing

Scenario: Some reviewers handling 10+ PRs/week, others < 2 PRs/week

Action:

  • Redistribute review assignments

  • Set up round-robin review assignment

  • Identify bottlenecks in review process

  • Ensure junior engineers are included in reviews

2. Onboarding Tracking

Scenario: Track new hire's gradual increase in review participation

Timeline:

  • Week 1-2: 0-1 PRs/week (learning phase)

  • Week 3-4: 1-2 PRs/week (starting to review)

  • Week 5-8: 2-4 PRs/week (ramping up)

  • Week 9+: 3-6 PRs/week (fully engaged)

Action: If progression is slower than expected, provide mentoring support

3. Team Capacity Planning

Scenario: Team creating 30 PRs/week but only 5 active reviewers

Analysis:

  • 30 PRs / 5 reviewers = 6 PRs per person needed

  • If current average is 3 PRs/week → capacity shortfall

  • Action: Hire more reviewers, reduce the PR creation rate, or improve review efficiency (a quick sketch of this check follows below)
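
The capacity check is simple arithmetic. A minimal sketch, assuming each PR needs at least one review (adjust if your policy requires multiple approvals):

```python
def review_capacity_gap(prs_per_week: float,
                        reviewer_count: int,
                        avg_reviews_per_reviewer: float) -> float:
    """Positive result = shortfall in reviews per reviewer per week."""
    required_per_reviewer = prs_per_week / reviewer_count
    return required_per_reviewer - avg_reviews_per_reviewer

# Scenario above: 30 PRs/week, 5 reviewers, current average of 3 reviews/week
print(review_capacity_gap(30, 5, 3))  # 3.0 -> each reviewer needs ~3 more reviews/week
```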

4. Knowledge Distribution

Scenario: Only 2 people reviewing infrastructure PRs

Risk: Knowledge silos, single point of failure

Action:

  • Rotate review assignments across more team members

  • Pair reviewers (experienced + learning)

  • Document tribal knowledge

5. Workload Assessment

Scenario: Senior engineer showing 15+ PRs/week consistently

Analysis: Potential reviewer overload

Questions:

  • Are they blocking PR merges?

  • What's their review response time?

  • Are they also implementing features?

  • Should review load be redistributed?

6. Cultural Health Check

Scenario: Team-wide average dropping from 4 to 2 PRs/week over 3 months

Possible Causes:

  • Decreased PR creation (fewer features)

  • Process changes (less peer review emphasis)

  • Team burnout or disengagement

  • Shift to pair programming

Action: Investigate root cause and address

Important Considerations & Limitations

Metric Characteristics

1. "Unique" ≠ "Thorough"

  • High count doesn't indicate deep, quality reviews

  • Recommendation: Combine with "Review Depth" or "Comments Authored / Week"

2. OOO Normalization

  • Automatically adjusts for vacation/time off

  • Only active working days count in denominator

  • Impact: Fair comparison across team members with different schedules

3. Active Contributor Definition

  • Must have commit activity within last 30 days

  • Impact: Managers or non-coding roles may not appear

  • Note: Intentional design for engineering metrics

4. Self-Reviews Excluded

  • Reviews of your own PRs don't count

  • Impact: Focuses on peer review activity only

5. Partial Week Handling

  • If someone returns mid-week from OOO, only active days count

  • Impact: Weekly values may represent fewer than 7 actual days

  • Result: More accurate per-working-day representation (see the sketch below)
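
To make the OOO adjustment concrete, here is a hypothetical sketch of counting active days in a window. It simplifies by treating Monday-Friday as working days and omits the separate 30-day activity requirement that the real metric also applies.

```python
from datetime import date, timedelta

def active_days_in_window(start: date, end: date, ooo_days: set) -> int:
    """Count Mon-Fri days in [start, end] that are not marked out of office."""
    day, count = start, 0
    while day <= end:
        if day.weekday() < 5 and day not in ooo_days:
            count += 1
        day += timedelta(days=1)
    return count

# Two-week window with 2 OOO weekdays -> 8 active days, as in the earlier example
print(active_days_in_window(date(2026, 1, 5), date(2026, 1, 16),
                            {date(2026, 1, 8), date(2026, 1, 9)}))  # 8
```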

Interpretation Cautions

Don't use in isolation for performance evaluation

  • Single metric doesn't capture full contribution picture

  • Combine with quality, speed, and outcome metrics

Context is critical

  • Role, team size, and project phase affect expectations

  • Compare like-to-like (same role/level)

Trend matters more than absolute values

  • Look for consistent patterns over time

  • One-week spikes/dips are often noise

Related Metrics for Complete Picture

View these alongside "Unique PRs Reviewed / Week":

  • Total PR Reviews / Week: Total review count, including repeat reviews. A high total with a low unique count suggests re-reviewing the same PRs.

  • Review Response Time: How quickly reviews start. A high unique count plus fast responses indicates an efficient reviewer.

  • Comments Authored / Week: Depth of review engagement. A high unique count with few comments may indicate surface-level reviews.

  • Review Depth: Average comments per review. A high unique count with high depth indicates a thorough, engaged reviewer.

  • PR Cycle Time: Overall flow speed. High review counts with low cycle time mean reviews aren't blocking delivery.

  • % PRs Without Approval: Quality gate health. A low review count combined with a high no-approval rate is risky.

Troubleshooting

If Metric Is Unexpectedly Low

Check:

  • ✓ Contributor is not marked OOO

  • ✓ Person has commits in last 30 days (required for "active" status)

  • ✓ VCS integration is properly connected

  • ✓ Reviews aren't on ignored repositories or branches

If Metric Shows Zero

Possible Causes:

  • No reviews submitted in the period

  • Doesn't meet "active contributor" criteria (no commits in 30 days)

  • All reviews were self-reviews (excluded by design)

  • Repository filtering excludes all reviewed repos

If Values Don't Match Expectations

Verify:

  • Remember that self-reviews are excluded by design and will not appear in the count

  • OOO time is correctly recorded in calendar/system

  • Date range selection is correct

  • Active days calculation includes only working days

Best Practices

For Individual Contributors

Aim for:

  • Consistent weekly participation (not feast-or-famine)

  • Balance between reviewing and implementing

  • Gradual increase as you gain experience

Tips:

  • Set aside dedicated review time daily

  • Review smaller PRs first to increase velocity

  • Provide thorough feedback to reduce re-reviews

For Team Leads

Monitor for:

  • Balanced distribution across team

  • Appropriate ramp-up for new hires

  • Overloaded reviewers (> 10 PRs/week consistently)

  • Underutilized reviewers (< 2 PRs/week)

Actions:

  • Implement round-robin assignment

  • Set review expectations by role/level

  • Recognize top reviewers

  • Provide review training

For Engineering Managers

Use for:

  • Capacity planning discussions

  • Workload balancing decisions

  • Onboarding progress tracking

  • Team health assessment

Avoid:

  • Using as sole performance indicator

  • Comparing across vastly different roles

  • Setting arbitrary quotas without context

Quick Reference Checklist

When analyzing Unique PRs Reviewed / Week:

  •  What's the team average? Individual range?

  •  Are values appropriate for each person's role/level?

  •  Is review load balanced across the team?

  •  Are new hires progressing in review participation?

  •  Are any reviewers overloaded (potential bottleneck)?

  •  Are any reviewers under-engaged (coaching opportunity)?

  •  How does this correlate with review quality metrics?

  •  What's the trend over time (improving or declining)?

  •  Does capacity match PR creation rate?

Summary

The Unique PRs Reviewed / Week report provides visibility into code review participation and workload distribution across your team. Use it to ensure balanced review loads, track onboarding progress, identify bottlenecks, and maintain a healthy code review culture. Remember: this metric measures breadth of participation, not quality or thoroughness—always view it alongside related metrics for the complete picture.

Healthy Target: 2-6 PRs/week for most mid-level engineers, adjusted for role and team context.