% of PRs Merged Without Review

Last updated: February 4, 2026

Overview

The % of PRs Merged Without Review (also displayed as "% of PRs merged w/o review") measures the percentage of merged pull requests that did not receive approval before being merged to your main branch. This metric provides critical visibility into code review enforcement and quality control practices.

Formula: (PRs merged without approval ÷ Total PRs merged) × 100

This is classified as a negative metric - lower percentages indicate better code review practices.
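As a worked example, the formula can be computed directly. This is an illustrative sketch (the counts are made up, and the function name is ours, not part of Span):

```python
def pct_merged_without_review(unreviewed_merged: int, total_merged: int) -> float:
    """Apply the formula: (PRs merged without approval / Total PRs merged) * 100."""
    if total_merged == 0:
        return 0.0  # no merged PRs in the period; avoid division by zero
    return unreviewed_merged / total_merged * 100

# Illustrative numbers: 12 of 150 merged PRs had no approval before merge.
print(round(pct_merged_without_review(12, 150), 1))  # 8.0
```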


What This Report Shows

Primary Metric:

  • % of PRs merged without review - Percentage of merged PRs lacking approval.

Supporting Data:

  • Count of PRs merged without approval

  • Total count of merged PRs

  • Percentile rank comparison to other organizations


How to Access This Report

  1. Navigate to Productivity → Quality → Code Review category

  2. Locate the "% of PRs merged w/o review" metric card

  3. View:

    • Current percentage value with percentile rank

    • Time-series trend chart

    • Breakdown by person, team, repository, or time period

Related Metrics in Code Review Section:

  • PR comments per active coding week

  • Reviews per active coding week

  • Average PR review depth (Comments Received / PR)

  • Time to complete review

  • PR review cycles


How the Metric is Calculated

What Counts as "Without Review"

A PR is counted as "merged without approval" when it was merged with no qualifying approval from another team member. Approval status is checked as of the time of merge.

What Qualifies as an "Approval"

An approval must be:

  • A review submission with "Approved" status

  • Submitted before the PR was merged

  • From a team member (not the PR author, not a bot)

  • From an account not marked as ignored in your organization

What Does NOT Count as Approval

  • Comments alone - Discussion without formal approval action

  • "Changes Requested" reviews - Indicates more work needed

  • "Commented" reviews - Feedback without approval

  • Dismissed reviews - Explicitly dismissed before merge

  • Reviews after merge - Must occur before merge

  • Bot/automation reviews - Excluded from reviewer counts

  • Self-reviews - Author cannot approve their own PR

Filters Applied

The metric only counts PRs where:

  • PR author is an active developer in your organization

  • PR status is "Merged"

  • Author is not a bot account


Key Insights from This Metric

Code Review Enforcement

  • High percentages → Weak or inconsistent review requirement enforcement

  • Low percentages → Strong quality gates and process adherence

  • Shows gap between stated policy and actual practice

Quality Control Effectiveness

  • Unreviewed code increases risk of:

    • Bugs and defects

    • Security vulnerabilities

    • Code quality degradation

    • Technical debt accumulation

  • Indicator of code review process maturity

Knowledge Sharing & Team Health

  • Unreviewed merges mean:

    • Reduced knowledge transfer

    • Lower team awareness of changes

    • Potential silos in codebase ownership

  • May reveal bottlenecks or process gaps

Process & Compliance

  • Missing or unenforced branch protection rules

  • Developers bypassing formal review processes

  • Critical for regulated environments (finance, healthcare, etc.)


Interpreting the Data

Understanding the Percentage

Value   | Interpretation                        | Action Needed
0-5%    | Excellent - Strong review culture     | Maintain current practices
5-15%   | Good - Minor gaps in enforcement      | Monitor and investigate outliers
15-30%  | Concerning - Significant process gaps | Implement stricter controls
30%+    | Critical - Major quality risk         | Immediate intervention required
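These bands can be encoded as a simple lookup, for example in a dashboard or alerting script. The thresholds come from the table above; the function name is ours, and the boundary handling (a value of exactly 5% falls into the next band) is an assumption:

```python
def interpret_pct(pct: float) -> str:
    """Map a % merged-without-review value to its interpretation band."""
    if pct < 5:
        return "Excellent - Strong review culture"
    if pct < 15:
        return "Good - Minor gaps in enforcement"
    if pct < 30:
        return "Concerning - Significant process gaps"
    return "Critical - Major quality risk"

print(interpret_pct(8.0))  # Good - Minor gaps in enforcement
```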

Percentile Rankings

  • Higher percentile = Better performance (fewer unreviewed merges)

  • Use to set realistic targets based on comparable organizations

  • Not affected by org size, team size, or coding frequency

Trend Analysis

Trend       | Meaning                     | Action
📈 Upward   | Review discipline declining | Investigate what changed
📉 Downward | Practices improving         | Continue current approach
📊 Spike    | Sudden change               | Check for urgency, team changes, or process modifications
🔄 Cyclical | Seasonal patterns           | May reveal crunch periods or release cycles


Different "Without Review" Scenarios

Understanding WHY PRs are merged without review helps address root causes:

Scenario           | What Happened                         | Root Cause                 | Solution
Truly no review    | Zero reviews submitted                | No review process followed | Enforce branch protection
Comments only      | Feedback given but no formal approval | Informal process           | Train on formal approval workflow
Review dismissed   | Approval removed before merge         | Process bypass             | Investigate dismissal reasons
Review after merge | Approval came post-merge              | Race condition or accident | Enable required approvals setting
Bot/automation     | Automated PR not labeled              | Configuration issue        | Mark automation accounts as bots

Investigation Tips

  • Use breakdown views to identify patterns

  • Examine individual PRs in your VCS platform directly

  • Check branch protection rules in your VCS settings


Breakdown Dimensions

Filter by Person

  • Identify individuals who frequently merge without review

  • Determine if behavior is isolated or widespread

  • Provide targeted coaching

Filter by Team

  • Find teams with weaker enforcement

  • Compare practices across organization

  • Allocate resources for improvement

Filter by Repository

  • Identify critical codebases at higher risk

  • Determine if issue is repository-specific

  • Implement stricter controls for sensitive repos

Filter by Time Period

  • Identify when problems emerged

  • Measure impact of policy/tooling changes

  • Spot cyclical patterns (end-of-quarter rushes)


Related Metrics to Review Together

Code Review Quality

  • PR Review Cycles - Back-and-forth iterations

  • Average PR Review Depth - Comments per PR

  • Time to Complete Review - Reviewer response time

  • Reviews per Active Coding Week - Overall activity volume

Code Quality Impact

  • % of PRs Merged with Tests - Whether unreviewed code has tests

  • PR Revert Rate - Quality consequences of merges

  • PR Diff Size - Complexity of unreviewed code

Combined Analysis Examples

High Risk Combination:

  • High % merged without review + Low % with tests + High revert rate = Critical quality issue

Process Bottleneck:

  • High % merged without review + High time-to-review = Review process is too slow, so developers bypass it

Maturity Indicator:

  • Low % merged without review + High review depth + Fast review time = Healthy, efficient process


Configuration Options

VCS Platform Settings (External to Span)

These settings in GitHub, GitLab, or Azure DevOps directly influence the metric:

Branch Protection Rules (Recommended)

  • Require approval before merge

  • Set minimum number of approvals

  • Require review from code owners

  • Impact: Prevents PRs from merging without review
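If GitHub is your VCS, these rules map onto the branch protection REST API. The sketch below shows a plausible payload for `PUT /repos/{owner}/{repo}/branches/{branch}/protection`; field names follow GitHub's documented API, and the specific values (one required approval, code-owner reviews, stale-review dismissal) are an example policy, not a recommendation from Span:

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "require_code_owner_reviews": true,
    "dismiss_stale_reviews": true
  },
  "enforce_admins": true,
  "required_status_checks": null,
  "restrictions": null
}
```

GitLab and Azure DevOps offer equivalent controls (merge request approval rules and branch policies, respectively).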

Stale Review Dismissal

  • Whether reviews expire after new commits

  • Whether dismissed reviews can be re-requested

  • Impact: Affects "current" approval status

Code Owners

  • Automatic reviewer assignment

  • Required approval from specific owners

  • Impact: Enforces reviews on critical paths


Best Practices

Setting Up for Success

DO:

  • Enable branch protection requiring approvals in your VCS

  • Configure bot accounts properly in Span

  • Review metric weekly/monthly with team leads

  • Investigate sudden spikes immediately

  • Use percentile rank to set realistic targets

  • Combine with test coverage and revert rate metrics

DON'T:

  • Ignore high percentages without investigation

  • Assume all unreviewed PRs are intentional

  • Compare teams with different codebase risk levels

  • Set arbitrary percentage targets without context

  • Punish teams without understanding root causes

Action Plan for High Percentages

Immediate (If >30%):

  1. Enable required approvals in branch protection

  2. Identify repeat offenders

  3. Communicate policy to all developers

  4. Review critical recent unreviewed PRs

Short-term (If >15%):

  1. Audit branch protection settings across repos

  2. Train team on formal approval workflows

  3. Set up alerts for unreviewed merges

  4. Review team capacity and review bottlenecks

Long-term:

  1. Monitor trend monthly

  2. Include in team/org dashboards

  3. Tie to quality and incident metrics

  4. Celebrate improvements


Quick Reference by Role

Role                | Key Use Case
Developer           | Ensure your PRs get proper review before merge
Team Lead           | Monitor team compliance with review policy
Engineering Manager | Compare review enforcement across teams
QA/Quality Lead     | Track quality gate effectiveness
CTO/VP Engineering  | Assess organizational code review maturity
Compliance/Security | Verify review requirements for regulated code


Troubleshooting Common Issues

"My percentage is high but we require reviews"

Possible causes:

  • Branch protection not properly configured

  • Developers have admin access and can bypass

  • Reviews being dismissed before merge

  • Bot accounts not properly marked

Solutions:

  • Audit VCS branch protection settings

  • Restrict who can dismiss reviews

  • Enable "require up-to-date branches"

  • Verify Span bot configuration

"We review PRs but they're not counted"

Possible causes:

  • Using comments instead of formal approval

  • Reviews submitted after merge

  • Reviewer accounts marked as ignored

Solutions:

  • Train team on formal approval workflow

  • Enable required approvals to prevent premature merge

  • Check ignored reviewers list in Span settings

"Automation PRs are inflating our percentage"

Solution:

  • Mark automation accounts as bots in Span settings

  • PRs authored by bots are excluded from calculation


Summary

The "% of PRs Merged Without Review" metric is a critical indicator of code review discipline and quality control effectiveness. By monitoring this metric alongside related quality and process metrics, you can:

  • Identify and address review process gaps

  • Reduce quality risks from unreviewed code

  • Improve knowledge sharing across teams

  • Ensure compliance with development standards

  • Build a stronger code review culture

Key Takeaways:

  • Lower percentages = Better review practices

  • Use breakdowns to identify specific issues

  • Combine with quality metrics for full picture

  • Enable branch protection in your VCS

  • Configure bot accounts properly in Span

  • Monitor trends over time, not just point values


Need help improving your code review practices or have questions about configuring this metric? Contact your Span customer success team or visit the Help Center.