Total PRs Merged

Last updated: February 3, 2026

Overview

The Total PRs Merged report measures the volume of code changes successfully integrated into your codebase. This fundamental delivery metric helps you track development throughput, understand team productivity patterns, and establish baseline capacity for planning future work.

What This Metric Measures

Total PRs Merged tracks:

  • Delivery volume: The number of pull requests successfully merged into your codebase

  • Development throughput: How much code change activity your team completes

  • Contribution patterns: Who is merging code and how frequently

  • Team capacity: Baseline understanding of what your team can deliver

This metric provides a straightforward count of completed code changes, serving as a foundation for understanding development velocity and establishing capacity baselines.

What is a "Merged" PR?

A merged PR is a pull request where:

  • ✓ Code changes have been successfully integrated into the target branch

  • ✓ The PR has reached the "merged" status in your version control system

  • ✓ The author is classified as a development contributor in Span

PR Status Definitions

Span tracks pull requests in four distinct states:

| PR Status | Definition | Counted in Total PRs Merged? |
|-----------|------------|------------------------------|
| Merged | Successfully integrated into the base branch | ✓ Yes |
| Open | Under review, not yet integrated | No |
| Closed | Closed without being integrated | No |
| Reverted | Merged, then subsequently undone | Tracked separately |

Important: When a merged PR is later reverted, it changes to "reverted" status. Span tracks these separately in the "Total Reverted PRs" metric to help you monitor code quality issues.

How It's Calculated

Formula:

Total PRs Merged = COUNT(distinct pull requests) WHERE:
  - PR status = "merged"
  - PR author = development contributor
  - Merged date within selected time period

This is a straightforward count with:

  • No weighting by PR size or complexity

  • No quality adjustments

  • No time-based normalization (see variants below for normalized versions)

Example Calculation

Over a 2-week sprint, your team:

  • Merged 45 PRs

  • Closed 3 PRs without merging

  • Has 12 PRs still open

Total PRs Merged: 45

(The closed and open PRs are excluded from the count)
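
As a minimal sketch of this calculation, the snippet below applies the same three filters to a list of PR records in Python. The field names (`status`, `author_is_contributor`, `merged_at`) are illustrative stand-ins, not Span's actual schema:

```python
from datetime import date

def total_prs_merged(prs, period_start, period_end):
    """Count distinct merged PRs authored by development contributors
    whose merge date falls within the selected period."""
    return len({
        pr["id"]
        for pr in prs
        if pr["status"] == "merged"                        # status filter
        and pr["author_is_contributor"]                    # contributor filter
        and period_start <= pr["merged_at"] <= period_end  # date filter
    })

# The sprint above: 45 merged PRs plus 3 closed-without-merge PRs.
prs = [{"id": i, "status": "merged", "author_is_contributor": True,
        "merged_at": date(2026, 1, 20)} for i in range(45)]
prs += [{"id": 100 + i, "status": "closed", "author_is_contributor": True,
         "merged_at": None} for i in range(3)]

print(total_prs_merged(prs, date(2026, 1, 12), date(2026, 1, 25)))  # 45
```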

What This Metric Does NOT Measure

It's important to understand the limitations:

  • ✗ Quality of merged changes

  • ✗ Size or complexity of changes

  • ✗ Time taken to merge (that's PR Cycle Time)

  • ✗ Review thoroughness (see % PRs Merged Without Review)

  • ✗ Test coverage (see % PRs Merged With Tests)

  • ✗ Code stability (see PR Revert Rate)

Pro Tip: Use Total PRs Merged alongside quality and efficiency metrics for a complete picture of your team's delivery performance.

Where to Find This Report

Access the Total PRs Merged report from:

  • Productivity → Velocity section → Delivery tab

  • Team dashboards → Velocity metrics

  • Individual contributor views → Personal metrics

  • Onboarding dashboards → New hire activity tracking

  • AI Impact reports → AI-assisted code delivery

Search for "PRs merged" or "Total PRs merged" in the metrics navigation.

Metric Variants

Span provides several related metrics to support different analysis needs:

1. Total PRs Merged (Base Metric)

The raw count of merged PRs in the selected time period.

Use for: Sprint totals, absolute volume tracking, stakeholder reporting

2. PRs Merged Per Week (Rate-Normalized)

Normalizes merged PRs by active work weeks and contributors, accounting for time off and team size.

Formula: Total PRs Merged ÷ Active Work Weeks per Contributor

Use for: Fair cross-period comparison, trend analysis, benchmarking different teams
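
A minimal sketch of this normalization in Python, assuming you already know each contributor's active work weeks for the period (the names and numbers are invented for illustration):

```python
def prs_merged_per_week(total_merged, active_weeks_by_contributor):
    """Divide merged PRs by the team's total active (in-office) work weeks."""
    total_active_weeks = sum(active_weeks_by_contributor.values())
    return total_merged / total_active_weeks

# A 5-person team over a 2-week period, with one contributor out for a week:
# 4 people x 2 weeks + 1 person x 1 week = 9 active work weeks.
rate = prs_merged_per_week(45, {"ana": 2, "ben": 2, "chen": 2, "dara": 2, "eli": 1})
print(round(rate, 1))  # 5.0 merged PRs per active contributor-week
```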

3. Total PRs Merged on Active Day

Counts only PRs merged during a contributor's active work days, excluding out-of-office periods.

Use for: Understanding productivity during actual working time

Pro Tip: Use the PRs Merged Per Week variant for comparing teams of different sizes or time periods with varying vacation patterns. Use Total PRs Merged for reporting absolute delivery numbers to stakeholders.

Available Breakdowns & Filters

Analyze Total PRs Merged across multiple dimensions:

Team & People Dimensions

  • Individual contributors (PR authors)

  • Teams and organizational groups

  • Department or job family

  • IC level or seniority

  • Job title or role

  • Location/geography

Project Dimensions

  • Repositories

  • Repository groups

  • Project assignments

  • Code integration platform (GitHub, GitLab, Azure DevOps)

Time Periods

  • Daily: Day-by-day merge activity

  • Weekly: Weekly merge patterns

  • Monthly: Month-by-month trends

  • Quarterly: Longer-term delivery patterns

  • Custom ranges: Any date range you specify

Standard Time Ranges

  • Last 7 days

  • Last 2 weeks

  • Last 4 weeks

  • Last 3 months

  • Last 6 months

  • Last 12 months
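
If you export merged-PR data, the same breakdowns can be reproduced outside Span. A hedged pandas sketch, with invented column names and data, grouping merges by team and week:

```python
import pandas as pd

# Illustrative merged-PR export; the columns are assumptions, not Span's schema.
merged = pd.DataFrame({
    "team": ["payments", "payments", "platform", "platform", "platform"],
    "merged_at": pd.to_datetime(
        ["2026-01-05", "2026-01-14", "2026-01-06", "2026-01-20", "2026-01-21"]),
})

# Team x Weekly breakdown, mirroring the dimensions listed above.
weekly = (
    merged.groupby(["team", pd.Grouper(key="merged_at", freq="W")])
          .size()
          .rename("prs_merged")
)
print(weekly)
```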

Key Use Cases

1. Establish Delivery Baseline

Calculate your team's typical PR merge volume to set realistic expectations and commitments.

Example: "Our team averages 40 merged PRs per 2-week sprint. We can reliably commit to 35-45 PRs depending on complexity."

2. Track Productivity Trends

Monitor whether delivery volume is increasing, decreasing, or stable over time.

Example: "Q2 showed 15% more merged PRs than Q1, indicating improved team efficiency."

3. Compare Team Performance

Benchmark delivery volume across different teams (combine with normalized metrics for fair comparison).

Example: "Team A merged 120 PRs with 10 people (12 per person) while Team B merged 60 PRs with 4 people (15 per person)."

4. Monitor Onboarding Progress

Track new hire integration by measuring when they start merging code regularly.

Example: "New hires typically merge their first PR within 2 weeks and reach team average merge rates by week 8."

5. Assess AI Tool Impact

Correlate AI coding assistant usage with changes in PR merge volume.

Example: "Teams using GitHub Copilot show 20% higher PR merge volume than non-AI teams."

6. Capacity Planning

Use historical merge data to forecast realistic delivery timelines for projects.

Example: "At 40 PRs per sprint and 200 PRs remaining, we need approximately 5 sprints to complete the feature."

7. Identify Bottlenecks

Low merge volume may indicate blockers, process issues, or capacity constraints.

Example: "PR merges dropped 30% this sprint—investigation revealed CI/CD infrastructure issues causing delays."

How It Relates to Other Metrics

Total PRs Merged serves as both a standalone metric and a key input for many composite metrics:

Metrics Using Total PRs Merged as a Denominator

These metrics divide by Total PRs Merged to calculate rates and percentages:

| Metric | Formula | What It Measures |
|--------|---------|------------------|
| % PRs Merged Without Review | PRs without approval ÷ Total PRs Merged | Code review compliance |
| % PRs Merged With Tests | PRs with test code ÷ Total PRs Merged | Test coverage in changes |
| PR Link Rate | PRs linked to issues ÷ Total PRs Merged | Work traceability |
| Comments Received Per PR | Total comments ÷ Total PRs Merged | Review engagement depth |
| PR Revert Rate | Reverted PRs ÷ Total PRs Merged | Merge quality/stability |
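
To make the denominator relationship concrete, here is a small Python sketch computing three of these rates from one sprint's counts (the counts are invented for illustration):

```python
def rate(numerator, total_merged):
    """Divide by Total PRs Merged, guarding against empty periods."""
    return numerator / total_merged if total_merged else 0.0

total_merged = 45      # Total PRs Merged this sprint
reverted = 2           # merged, then reverted
without_review = 4     # merged with no approving review
with_tests = 38        # merged with accompanying test code

print(f"PR Revert Rate:              {rate(reverted, total_merged):.1%}")        # 4.4%
print(f"% PRs Merged Without Review: {rate(without_review, total_merged):.1%}")  # 8.9%
print(f"% PRs Merged With Tests:     {rate(with_tests, total_merged):.1%}")      # 84.4%
```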

Related Independent Metrics

These metrics provide complementary views of delivery performance:

| Metric | Relationship | Use Together To... |
|--------|--------------|--------------------|
| PR Cycle Time | Speed vs. volume | Understand both throughput and efficiency |
| PR Review Cycles | Review iterations | Balance speed with quality |
| Lines Changed | Size of changes | Add context for PR count (many small vs. few large) |
| Velocity (Weighted PRs) | Complexity-adjusted volume | Account for PR significance |
| Active Contributors | Team size context | Normalize by team capacity |

Powerful Analysis Combinations:

Total PRs Merged (volume) + PR Cycle Time (speed) + Review Quality (thoroughness) = Complete delivery picture

Total PRs Merged + PR Revert Rate + % PRs with Tests = Quality-adjusted delivery view

Insights You Can Gain

Delivery Velocity Patterns

  • What's our typical merge volume? Establish baseline for planning

  • Are we speeding up or slowing down? Track velocity changes

  • When do we merge most code? Identify peak productivity periods

  • Are there seasonal patterns? Account for holiday/vacation impacts

Team Productivity Analysis

  • Who are the most active contributors? Recognize high performers

  • How is work distributed? Ensure balanced workload

  • Are new hires ramping up? Track onboarding progress

  • Do certain teams consistently deliver more? Identify best practices

Process Health Indicators

  • High volume + low cycle time = Efficient process ✓

  • High volume + high revert rate = Quality concerns

  • Low volume + high cycle time = Bottleneck detected

  • Declining volume over time = Capacity or engagement issue

Capacity Planning Insights

  • Average merge rate = Baseline for sprint planning

  • Velocity variance = Confidence level for commitments

  • Peak vs. typical rates = Sustainable vs. crunch-mode capacity

Quality Correlation

  • Merge volume vs. revert rate: Are we sacrificing quality for speed?

  • Merge volume vs. test coverage: Are we maintaining testing discipline?

  • Merge volume vs. review depth: Are reviews becoming superficial?

Common Scenarios & Interpretations

Scenario 1: Declining PR Merges

What you see: Merge volume dropping over time

Possible causes:

  • Increased PR complexity (fewer but larger changes)

  • Process bottlenecks (reviews taking longer)

  • Team capacity reduced (departures, absences)

  • Shifting to planning/architecture work

  • Technical debt accumulation slowing development

Actions:

  • Check PR Cycle Time for review delays

  • Review team capacity and active contributors

  • Analyze PR size distribution (lines changed)

  • Investigate blockers through retrospectives

  • Compare with story points delivered (maybe volume is down but value is up)

Scenario 2: Increasing PR Merges

What you see: Merge volume rising over time

Possible causes:

  • Team growing in size

  • Improved process efficiency

  • Breaking work into smaller PRs

  • Reduced cycle time (faster reviews)

  • Higher AI tool adoption

Actions:

  • Celebrate improvements if quality metrics remain strong

  • Monitor review thoroughness (comments per PR)

  • Check revert rate to ensure quality isn't suffering

  • Document what's working for other teams

  • Ensure sustainable pace (not crunch mode)

Scenario 3: High Volume + High Revert Rate

What you see: Many PRs merged but many also reverted

Interpretation: Quantity over quality—moving too fast without sufficient review

Actions:

  • Strengthen code review requirements

  • Increase test coverage expectations

  • Slow down to improve quality

  • Review Definition of Done

  • Investigate what's causing reverts

Scenario 4: Low Volume + Low Cycle Time

What you see: Few PRs merged but they're merging quickly

Possible causes:

  • Working on complex, long-running features

  • Team is small

  • Focus on quality over quantity

  • Large monolithic PRs instead of incremental changes

Actions:

  • Check PR size (lines changed)

  • Consider breaking features into smaller PRs

  • Evaluate if low volume aligns with planned work

  • Compare with story points completed

Scenario 5: Uneven Distribution Across Team

What you see: Some developers merge many PRs, others merge few

Possible causes:

  • Work complexity differences (some work on infrastructure vs. features)

  • Skill level variations (senior vs. junior)

  • Pair programming practices (one person merging joint work)

  • Onboarding periods

  • Role differences (some doing more architecture/mentoring)

Actions:

  • Review work assignment patterns

  • Ensure mentoring isn't preventing junior developers from merging

  • Balance complex vs. straightforward work distribution

  • Investigate if low-merging developers need support

Best Practices

1. Establish Your Baseline

Track 4-6 sprints to calculate your team's typical merge volume:

  • Calculate average per sprint

  • Note the range (min to max)

  • Identify and understand outliers

  • Use for future capacity planning
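
A minimal sketch of the baseline arithmetic, using invented sprint totals:

```python
from statistics import mean, stdev

# Merged-PR totals from the last six sprints (illustrative numbers).
sprint_totals = [38, 42, 45, 31, 44, 40]

baseline = mean(sprint_totals)                    # typical volume: 40
low, high = min(sprint_totals), max(sprint_totals)
spread = stdev(sprint_totals)                     # variance informs commitment confidence

print(f"Baseline: {baseline:.0f} PRs/sprint (range {low}-{high}, sd {spread:.1f})")
# An outlier like the 31-PR sprint is worth investigating (holidays? an incident?).
```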

2. Use the Right Variant for Your Purpose

| Purpose | Use This Variant |
|---------|------------------|
| Sprint reporting to stakeholders | Total PRs Merged (absolute count) |
| Trend analysis over time | PRs Merged Per Week (normalized) |
| Team size comparison | PRs Merged Per Week (normalized) |
| Personal productivity during work time | Total PRs Merged on Active Day |

3. Always Combine with Quality Metrics

High PR volume means nothing if code quality suffers:

  • Monitor PR Revert Rate (target < 5%)

  • Track % PRs with Tests (target > 80%)

  • Watch % PRs Without Review (target < 10%)

  • Review Comments per PR (ensure meaningful review)

4. Context Matters

Don't compare PR counts without context:

  • 50 small bug fix PRs ≠ 50 feature PRs

  • Team doing infrastructure work ≠ Team doing UI work

  • Microservices architecture ≠ Monolithic architecture

Consider PR size (lines changed), complexity (velocity score), and work type.

5. Focus on Trends, Not Single Points

  • Sprint-to-sprint variations are normal

  • Look for sustained changes over 4+ weeks

  • Account for holidays and vacations

  • Investigate sudden significant changes

6. Break Down Large PRs

If PR count is low but lines changed is high:

  • Encourage smaller, incremental PRs

  • Faster reviews, less risk

  • More frequent integration

  • Better metrics granularity

7. Don't Optimize for PR Count Alone

Avoid gaming the metric:

  • ✗ Don't pressure developers to artificially split PRs

  • ✗ Don't sacrifice review quality for speed

  • ✗ Don't merge incomplete work to hit targets

  • ✓ Focus on sustainable, quality delivery

8. Use for Pattern Recognition, Not Performance Reviews

PR merge count is influenced by many factors outside individual control:

  • Work assignment (complex vs. simple)

  • Review availability

  • Dependencies on other teams

  • Pair programming practices

  • Project phases (planning vs. implementation)

Use for team-level analysis, not individual performance assessment.

Setting Up Your Total PRs Merged Report

Requirements

To use this metric, ensure you have:

  • ✓ Version control system integrated (GitHub, GitLab, Azure DevOps, Bitbucket)

  • ✓ PR data syncing properly

  • ✓ Contributors properly classified in Span

  • ✓ Team/group structure defined

  • ✓ Repository access configured

Configuration Steps

  1. Verify Integration Health

    • Navigate to Settings → Integrations

    • Confirm VCS connection is active

    • Check that PR data is syncing

  2. Configure Contributor Classifications

    • Settings → Contributors

    • Mark developers as "Contributors"

    • Exclude non-coding roles

  3. Set Up Team Structure

    • Define teams and groups

    • Assign developers to teams

    • Validate reporting hierarchies

  4. Establish Baseline

    • Review 4-6 sprints of historical data

    • Calculate average and range

    • Note seasonal patterns

    • Set realistic expectations

  5. Create Monitoring Practices

    • Add to sprint retrospectives

    • Include in leadership dashboards

    • Set up trend alerts (if available)

    • Combine with quality metrics

Frequently Asked Questions

Q: What's a "good" number of PRs merged per sprint?
A: There's no universal benchmark—it depends on team size, PR size, work complexity, and architecture. A team merging 30 small PRs might be equivalent to a team merging 10 large PRs. Focus on your baseline and whether it meets your delivery needs.

Q: Should PR count be higher or lower?
A: Neither is inherently better. Higher counts might indicate:

  • Smaller, incremental changes (often good)

  • Well-distributed work

  • Efficient process

Lower counts might indicate:

  • Larger, more complex changes

  • Infrastructure or architecture work

  • Quality-focused approach

Focus on sustainable delivery that meets quality standards.

Q: Why is Total PRs Merged different from what I see in GitHub/GitLab?
A: Span filters for:

  • Development contributors only (excludes bots, non-engineering roles)

  • Specific time periods

  • Organizational scope (may exclude certain repos)

Check your filters to ensure alignment with your VCS view.

Q: Should PRs be small or large?
A: Generally, smaller PRs are better:

  • Faster to review

  • Less risky to merge

  • Easier to revert if needed

  • More frequent integration

Aim for PRs reviewable in < 30 minutes. Very large PRs (>500 lines) often indicate opportunities to break work into smaller increments.

Q: How do I improve our PR merge volume?
A: Focus on:

  1. Reduce cycle time: Faster reviews = more merges

  2. Break down work: Smaller PRs = more frequent merges

  3. Remove blockers: Identify and eliminate delays

  4. Improve CI/CD: Faster tests = faster merges

  5. Parallel work: Reduce dependencies

  6. Clear requirements: Less back-and-forth

But always maintain quality standards!

Q: Can I compare teams with different tech stacks?
A: Use caution. Different technologies and architectures naturally produce different PR patterns:

  • Frontend teams may have more frequent, smaller PRs

  • Infrastructure teams may have fewer, more complex PRs

  • Microservices may generate more PRs than monoliths

Compare trends within teams rather than absolute counts across teams.

Q: What if my team has low PR count but high story points?
A: This might indicate:

  • Large, complex work requiring fewer PRs

  • High-value features (quality over quantity)

  • Good prioritization

This is often positive! Use story points as the primary velocity metric and PR count as supporting context.

Q: Should I track PR merges by individual developers?
A: Yes for insights, no for performance reviews:

  • ✓ Identify who needs support or unblocking

  • ✓ Recognize contributions

  • ✓ Monitor onboarding progress

  • ✗ Don't use as primary performance indicator

  • ✗ Don't create competitive dynamics

  • ✗ Don't ignore work context differences

Q: How do reverted PRs affect this metric?
A: A PR is counted when initially merged. If later reverted, it changes to "reverted" status and appears in the PR Revert Rate metric. The original merge count remains, but you should monitor revert rate to understand quality issues.


Need Help?

For additional support with the Total PRs Merged report:

  • Visit the Span Help Center

  • Contact your Customer Success Manager

  • Email support@span.app


This documentation reflects Span's platform capabilities as of the current version. Features and calculations are subject to updates.