Lines of Code
Last updated: February 3, 2026
Overview
The Total Lines of Code report measures the aggregate volume of code changes made by your development team through merged pull requests. This metric tracks code churn—the total activity of adding, removing, and modifying code—providing insights into development intensity, work patterns, and team productivity.
What This Metric Measures
Total Lines of Code tracks:
Code change volume: The cumulative sum of lines added and removed across all merged PRs
Development activity intensity: How much code is being changed over time
Code churn: The total modification activity in your codebase
Team output scope: The scale of changes being delivered
Important: This metric measures change activity, not total codebase size. It answers "How much code did we change?" not "How big is our codebase?"
What is "Lines of Code" in Span?
Definition: Total number of lines of code added or removed in merged pull requests.
Example Calculation:
10 lines added
10 lines removed
= 20 total lines of code
Understanding the Count
When you modify a line of code:
It counts as 1 line removed (old version)
It counts as 1 line added (new version)
Total: 2 lines of code (representing the churn)
This approach measures total code churn rather than net change, giving you a more accurate picture of development effort and activity.
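To see this double counting in raw VCS data, note that git diff --numstat reports per-file added and removed line counts, and a single modified line shows up as one added plus one removed. Below is a minimal Python sketch (with hypothetical file paths) that tallies churn the same way:

# Minimal sketch: tally churn from "git diff --numstat" output.
# numstat prints one line per file: "<added>\t<removed>\t<path>";
# binary files report "-" for both counts and are skipped here.
numstat_output = "1\t1\tsrc/auth.py\n12\t0\tsrc/billing.py\n0\t30\tsrc/legacy.py"

total_added = total_removed = 0
for line in numstat_output.splitlines():
    added, removed, _path = line.split("\t")
    if added == "-":  # binary file, no line counts
        continue
    total_added += int(added)
    total_removed += int(removed)

# The single modified line in src/auth.py counts as 1 added + 1 removed,
# contributing 2 lines of churn.
print(total_added, total_removed, total_added + total_removed)  # 13 31 44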
How It's Calculated
Formula:
Total Lines of Code = SUM(lines_added + lines_removed) WHERE:
- PR status = "merged"
- Author = development contributor
- Merged date within selected time period
- Files not in ignore patterns
What's Included:
✅ Lines added in merged PRs
✅ Lines removed in merged PRs
✅ Lines modified (counted as both add + remove)
✅ All programming languages
✅ Changes by human developers only
What's Excluded:
❌ Generated code files
❌ Lock files (package-lock.json, Gemfile.lock, etc.)
❌ Build artifacts and compiled output
❌ Auto-generated API code (protobuf, gRPC)
❌ Test snapshots (Jest, etc.)
❌ Files matching custom ignore patterns
❌ Changes by bots or automated systems
❌ Unmerged or draft PRs
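Putting the formula and the rules above together, here is a minimal sketch of the aggregation, assuming hypothetical PR records whose line counts already exclude ignored files. Span's actual pipeline is more involved; this only illustrates the filtering logic.

from dataclasses import dataclass
from datetime import date

@dataclass
class PullRequest:
    status: str          # "merged", "open", "draft", ...
    author_is_bot: bool
    merged_at: date | None
    lines_added: int
    lines_removed: int   # counts assumed to already exclude ignored files

def total_lines_of_code(prs, start, end):
    """Sum added + removed lines over merged, human-authored PRs in range."""
    return sum(
        pr.lines_added + pr.lines_removed
        for pr in prs
        if pr.status == "merged"
        and not pr.author_is_bot
        and pr.merged_at is not None
        and start <= pr.merged_at <= end
    )

prs = [
    PullRequest("merged", False, date(2026, 1, 10), 50, 20),
    PullRequest("merged", True,  date(2026, 1, 11), 500, 0),  # bot: excluded
    PullRequest("draft",  False, None, 80, 10),               # draft: excluded
]
print(total_lines_of_code(prs, date(2026, 1, 1), date(2026, 1, 31)))  # 70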
Example Calculation
Your team's merged PRs this sprint:
PR #1: +50 lines, -20 lines = 70 LOC
PR #2: +100 lines, -5 lines = 105 LOC
PR #3: +30 lines, -30 lines = 60 LOC
PR #4: +200 lines, -0 lines = 200 LOC
Total Lines of Code: 435
Metric Variants
Span provides three related metrics for comprehensive analysis:
1. Lines of Code (Total)
The primary metric—sum of all additions and deletions.
Use for: Overall code change volume, activity trends, team comparisons
2. Lines of Code (Added)
Counts only new lines and the "added" portion of modified lines.
Use for: Growth patterns, new feature development, codebase expansion
3. Lines of Code (Removed)
Counts only deleted lines and the "removed" portion of modified lines.
Use for: Code cleanup, refactoring activity, technical debt reduction
Pro Tip: View all three metrics together in Span's reports. They automatically appear as separate columns, giving you a complete picture of code addition, removal, and net change patterns.
What This Metric Does NOT Measure
It's important to understand the limitations:
❌ Total codebase size (use repository analysis tools)
❌ Code quality (use test coverage, defect rates)
❌ Code complexity (use cyclomatic complexity metrics)
❌ Business value (use story points, feature delivery)
❌ Developer productivity (LOC alone is insufficient)
❌ Actual effort (100 lines of complex logic ≠ 100 lines of boilerplate)
Critical: Never use Lines of Code alone to measure productivity or performance. Context, complexity, and quality matter significantly.
Where to Find This Report
Access the Total Lines of Code report from:
Team dashboards → Code activity metrics
Individual contributor views → Personal output metrics
Search for "Lines of Code" or "LOC" in the metrics navigation
Requirements
✓ Active VCS integration (GitHub, GitLab, Azure DevOps, Bitbucket)
✓ Merged PR data syncing properly
✓ Metric enabled in your organization settings
Available Breakdowns & Filters
Analyze Total Lines of Code across multiple dimensions:
Team & People Dimensions
Individual contributors (who wrote the code)
Teams and organizational groups
Job level or IC level
Job title or role
Department or job family
Location/geography
Tenure
Project Dimensions
Repositories
Repository groups
Programming languages
File types
Project assignments
Review Dimensions
By reviewer (code changes grouped by who reviewed them)
Review thoroughness correlation
Time Periods
Daily: Day-by-day code activity
Weekly: Weekly code change patterns
Monthly: Month-by-month trends
Quarterly: Longer-term delivery patterns
Custom ranges: Any date range you specify
Period-over-period: Compare this period to previous
Key Use Cases
1. Measure Development Activity Intensity
Understand how much code change activity is happening over time.
Example: "Our team changed 45,000 lines of code in Q2, up from 32,000 in Q1, reflecting increased feature development."
2. Track Velocity Trends
Monitor whether teams are delivering more or less code over time.
Example: "LOC has been steadily increasing over the past 6 months as we've scaled the team from 8 to 12 developers."
3. Comparative Analysis Across Teams
Compare code change volume across different teams (normalize by team size for fair comparison).
Example: "Team A changed 60,000 lines with 10 developers (6,000 per person) while Team B changed 40,000 lines with 5 developers (8,000 per person)."
4. Understand Work Scope Patterns
Analyze typical PR sizes and change scopes.
Example: "Our average PR is 200 lines, but we have outliers at 2,000+ lines that consistently have longer cycle times."
5. Assess Refactoring vs. New Development
Compare lines added vs. removed to understand work type.
Example: "Last sprint showed 5,000 lines added but 8,000 removed—major refactoring and cleanup effort."
6. Measure AI Coding Assistant Impact
Track how much code comes from AI tools vs. traditional development.
Example: "30% of our merged code this quarter was AI-assisted, helping accelerate routine implementations."
7. Identify Code Churn Hotspots
Find areas of the codebase with excessive modification activity.
Example: "The authentication module has 10x the LOC churn of other modules—possible instability indicator."
8. Workload and Capacity Assessment
Understand typical development scope for resource planning.
Example: "At 40,000 LOC per sprint, we can estimate 6 more sprints for the remaining feature set based on historical patterns."
How It Relates to Other Metrics
Total Lines of Code works best when analyzed alongside complementary metrics:
Related Code Metrics
Metric | Relationship | Use Together To...
Total PRs Merged | Count vs. volume | Understand average PR size (LOC ÷ PRs)
PR Size (P50/P75/P90) | Distribution vs. total | See typical vs. outlier PR sizes
PR Cycle Time | Speed vs. volume | Correlate LOC with review time
Files Changed | Breadth vs. depth | Understand change distribution
Velocity (Weighted) | Complexity-adjusted output | Account for change significance
AI Code Metrics
Metric | Relationship | Use Together To...
AI Code Ratio | AI portion of total | Measure AI tool contribution %
Total AI Lines (Accepted) | Subset of total LOC | Track AI-generated code volume
AI Adoption Rate | Developer usage | Correlate adoption with LOC changes
Quality Metrics
Metric | Relationship | Use Together To...
PR Revert Rate | Quality vs. volume | Ensure speed doesn't sacrifice quality
% PRs with Tests | Coverage vs. volume | Maintain testing discipline
Defect Density | Bugs per LOC | Track quality at scale
Powerful Analysis Combinations:
Total LOC + PR Count = Average PR Size
LOC Added - LOC Removed = Net Code Growth
Total LOC + PR Cycle Time = Efficiency Analysis
(High LOC + Low Cycle Time = Efficient; High LOC + High Cycle Time = Bottleneck)
Total LOC + AI Code Ratio = AI Impact Analysis
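As a quick illustration, these combinations reduce to simple arithmetic. The totals below are hypothetical:

total_loc, total_prs = 45_000, 300
loc_added, loc_removed = 28_000, 17_000
ai_loc = 13_500

avg_pr_size = total_loc / total_prs        # 150 lines per PR
net_code_growth = loc_added - loc_removed  # +11,000 lines
ai_code_ratio = ai_loc / total_loc         # 0.30, i.e. 30% AI-assisted

print(avg_pr_size, net_code_growth, f"{ai_code_ratio:.0%}")  # 150.0 11000 30%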
Insights You Can Gain
Development Intensity Patterns
How much code are we changing? Baseline activity level
Is activity increasing or decreasing? Trend analysis
When are we most active? Seasonal patterns
Which teams change the most code? Resource allocation insights
Work Pattern Analysis
Large PRs vs. small PRs? (Total LOC ÷ PR Count)
Adding vs. removing code? (Added vs. Removed comparison)
Refactoring vs. new development? (Net change analysis)
Code churn hotspots? (Repeated changes to same areas)
Team Productivity Context
Code output per developer (LOC ÷ Active Contributors)
Activity distribution (Are changes concentrated or distributed?)
Productivity trends (Is output per person increasing?)
Workload balance (Is work evenly distributed?)
AI Tool Impact
What % of code is AI-assisted? (AI LOC ÷ Total LOC)
Is AI increasing total output? (Compare pre/post AI adoption)
Which languages benefit most? (LOC by language with AI support)
Quality Indicators
High LOC + High revert rate = Possible quality issues
High LOC + Low test coverage = Testing gaps
Declining LOC + Improving quality = Healthy maturation
Excessive churn = Instability or rework
Common Scenarios & Interpretations
Scenario 1: Very High LOC in Single PR
What you see: One or more PRs with 2,000+ lines changed
Implications:
Difficult to review thoroughly
Higher risk of defects
Longer cycle time
May indicate work breakdown issues
Actions:
Encourage smaller, incremental PRs
Review work breakdown practices
Consider feature flags for large changes
Monitor review quality for large PRs
Scenario 2: More Lines Removed Than Added
What you see: LOC (removed) > LOC (added)
Possible causes:
Major refactoring effort
Technical debt cleanup
Dead code removal
Consolidation of duplicate code
Feature deprecation
Interpretation: Often positive—indicates code health improvements. Monitor to ensure functionality isn't lost.
Scenario 3: Declining LOC Over Time
What you see: Total LOC trending downward over multiple periods
Possible causes:
Team capacity reduced (departures, absences)
Shift to planning/architecture work
Increased complexity (fewer lines, more thought)
Process bottlenecks slowing delivery
Mature product requiring less code change
Actions:
Check Active Contributors count
Review PR Cycle Time for bottlenecks
Assess if decline aligns with planned work
Investigate team health and blockers
Scenario 4: Spike in LOC
What you see: Sudden increase in lines changed
Possible causes:
Major feature development
Large refactoring initiative
Team size increase
Dependency updates
Generated code not properly excluded
Actions:
Verify spike is legitimate work
Check if file ignore patterns need adjustment
Ensure code review quality maintained
Monitor quality metrics during high-activity periods
Scenario 5: High LOC per Developer
What you see: Some developers consistently have much higher LOC than others
Possible causes:
Different work assignments (frontend vs. backend)
Experience level (junior vs. senior)
Work style differences (large vs. incremental PRs)
Role differences (feature dev vs. bug fixes)
Code generation tool usage
Interpretation: NOT necessarily a performance indicator. Context matters:
Senior developers might have lower LOC (more design, less code)
Some work is naturally more code-intensive
Quality and impact matter more than volume
Best Practices
1. Never Use LOC Alone for Productivity
Lines of Code is a volume metric, not a value metric:
❌ DON'T:
Use LOC as primary performance indicator
Compare developers solely on LOC
Set LOC targets or quotas
Reward high LOC without considering quality
✅ DO:
Use as context for other metrics
Combine with quality indicators
Focus on trends, not absolutes
Consider work complexity and type
2. Configure Ignore Patterns Properly
Ensure your LOC metric reflects actual business code:
Review and exclude:
Generated code files
Lock files
Build output directories
Third-party dependencies
Test snapshots
Documentation generation
Check Settings → Metrics Configuration → Ignore Patterns
3. Normalize by Team Size
When comparing teams, calculate per-person averages:
LOC per Developer = Total LOC ÷ Active Contributors
This provides fairer cross-team comparison than raw totals.
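A one-line calculation per team is enough. Using the Team A/Team B figures from the comparative analysis example earlier:

teams = {"Team A": (60_000, 10), "Team B": (40_000, 5)}
for name, (total_loc, contributors) in teams.items():
    print(name, total_loc / contributors)  # Team A: 6000.0, Team B: 8000.0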
4. Analyze Added vs. Removed Separately
Don't just look at total LOC—examine the breakdown:
Pattern | Interpretation
High added, low removed | New feature development
High removed, low added | Code cleanup, refactoring
Both high | Major changes, rewrites
Both low | Bug fixes, configuration changes
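If you want to automate this triage, here is a rough heuristic based on the table above; the 1,000-line threshold is an assumption to tune for your team:

def classify_work(added: int, removed: int, threshold: int = 1_000) -> str:
    """Rough work-type heuristic; threshold is a tunable assumption."""
    high_added, high_removed = added >= threshold, removed >= threshold
    if high_added and high_removed:
        return "major changes, rewrites"
    if high_added:
        return "new feature development"
    if high_removed:
        return "code cleanup, refactoring"
    return "bug fixes, configuration changes"

print(classify_work(5_000, 200))  # new feature development
print(classify_work(300, 8_000))  # code cleanup, refactoring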
5. Monitor Average PR Size
Calculate from LOC and PR metrics:
Average PR Size = Total LOC ÷ Total PRs Merged
Guidelines:
< 200 lines: Easy to review, low risk
200-500 lines: Moderate effort, manageable
500-1000 lines: Challenging to review thoroughly
> 1000 lines: Very difficult, high risk
Encourage smaller PRs when average exceeds 400-500 lines.
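A small sketch that maps an average PR size onto these bands:

def pr_size_band(avg_loc: float) -> str:
    """Map average PR size onto the review-effort guidelines above."""
    if avg_loc < 200:
        return "easy to review, low risk"
    if avg_loc <= 500:
        return "moderate effort, manageable"
    if avg_loc <= 1_000:
        return "challenging to review thoroughly"
    return "very difficult, high risk"

print(pr_size_band(45_000 / 300))  # 150 lines: "easy to review, low risk"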
6. Correlate with Quality Metrics
Always pair LOC analysis with quality indicators:
Healthy Patterns:
Increasing LOC + stable revert rate
Increasing LOC + high test coverage
Increasing LOC + low cycle time
Warning Patterns:
Increasing LOC + rising revert rate
Increasing LOC + declining test coverage
Increasing LOC + lengthening cycle time
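A minimal sketch of flagging these warning combinations automatically; the period-over-period deltas and their sign conventions are assumptions:

def loc_warning_flags(loc_pct, revert_pct, coverage_pct, cycle_time_pct):
    """Each argument is a period-over-period % change (assumed convention)."""
    flags = []
    if loc_pct > 0 and revert_pct > 0:
        flags.append("rising LOC with rising revert rate")
    if loc_pct > 0 and coverage_pct < 0:
        flags.append("rising LOC with declining test coverage")
    if loc_pct > 0 and cycle_time_pct > 0:
        flags.append("rising LOC with lengthening cycle time")
    return flags

print(loc_warning_flags(20, 5, -3, 10))  # all three warning patterns fire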
7. Account for Language and Domain Differences
Different languages and domains naturally produce different LOC patterns:
More LOC typically needed:
Java, C# (verbose languages)
Frontend/UI development
Integration/glue code
Configuration-heavy systems
Less LOC typically needed:
Python, Ruby (concise languages)
Backend API development
Infrastructure as code
Functional programming
Don't compare raw LOC across fundamentally different technology stacks.
8. Use for Pattern Recognition, Not Judgment
LOC helps identify patterns worth investigating:
✓ Good uses:
"LOC spiked—was there a major feature?"
"LOC declining—is the team blocked?"
"High LOC per PR—should we encourage smaller changes?"
"LOC concentrated in one area—potential hotspot?"
✗ Bad uses:
"This developer has low LOC—they're underperforming"
"We need to hit 50,000 LOC this quarter"
"Higher LOC = more productive"
Language Support for AI Code Detection
When analyzing AI-assisted code within your LOC metrics, Span can detect AI patterns in these languages:
TypeScript
Python
JavaScript
Ruby
Java
C#
Go
Kotlin
For these languages, you can:
Track AI LOC vs. total LOC
Calculate AI code ratio
Measure AI tool impact on productivity
Compare AI adoption across teams
Configuring Your LOC Metrics
Admin Settings
Administrators can configure what counts toward LOC:
Metrics Ignore Patterns (Settings → Metrics Configuration):
Custom file exclusions
Directory exclusions
Pattern-based rules (wildcards supported)
Language-specific exclusions
Example patterns to exclude:
**/node_modules/**
**/dist/**
**/build/**
**/*.lock
**/*.generated.*
**/snapshots/**
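Before saving patterns, it can help to test them against sample file paths. Here is a rough sketch using Python's fnmatch, whose "*" also matches "/" and therefore approximates "**" globbing; Span's actual matcher may behave differently:

from fnmatch import fnmatch

patterns = ["**/node_modules/**", "**/dist/**", "**/build/**",
            "**/*.lock", "**/*.generated.*", "**/snapshots/**"]

def is_ignored(path: str) -> bool:
    return any(fnmatch(path, pattern) for pattern in patterns)

print(is_ignored("web/node_modules/react/index.js"))  # True
print(is_ignored("api/Gemfile.lock"))                 # True
print(is_ignored("src/billing/invoice.py"))           # False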
Implicit Exclusions (Automatic):
System detects and excludes common generated files
Lock files automatically excluded
Build artifacts filtered out
Setup Steps
1. Verify VCS Integration
Settings → Integrations
Confirm GitHub/GitLab/ADO connected
Check data sync status
2. Configure File Exclusions
Settings → Metrics Configuration
Add ignore patterns for your tech stack
Test patterns against sample PRs
3. Validate Data Accuracy
Review sample PRs to verify LOC counts
Compare with VCS UI for sanity check
Adjust ignore patterns if needed
4. Establish Baseline
Review 4-6 weeks of historical data
Calculate typical LOC per sprint/month
Note seasonal patterns
Set realistic expectations
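A minimal sketch for the baseline step, using hypothetical weekly LOC totals:

from statistics import mean, median

weekly_loc = [9_800, 11_200, 8_500, 12_100, 10_400, 9_900]  # last 6 weeks

print(f"mean:   {mean(weekly_loc):,.0f} LOC/week")
print(f"median: {median(weekly_loc):,.0f} LOC/week")
print(f"range:  {min(weekly_loc):,} to {max(weekly_loc):,}")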
Frequently Asked Questions
Q: Is higher LOC better?
A: No. LOC is a volume metric, not a value metric. 100 lines of well-architected code can deliver more value than 1,000 lines of poor code. Focus on sustainable delivery with quality, not maximizing LOC.
Q: Why is Span's LOC different from GitHub's stats?
A: Span filters for:
Development contributors only (excludes bots)
Merged PRs only (excludes drafts)
File ignore patterns (excludes generated code)
Specific time periods and organizational scope
Check your filters and ignore patterns to understand differences.
Q: Should I use LOC for performance reviews?
A: No. LOC is heavily influenced by factors outside individual control:
Work assignment (feature type, complexity)
Technology choices (language verbosity)
Architecture patterns (microservices vs. monolith)
Role expectations (IC vs. senior architect)
Use it for team-level insights, not individual performance assessment.
Q: What's a "good" LOC per sprint?
A: There's no universal benchmark—it varies by:
Team size
Technology stack
Domain complexity
Development stage (greenfield vs. maintenance)
Architecture style
Focus on your baseline and whether it meets your delivery needs.
Q: Why count modified lines twice?
A: Modified lines represent code churn—both the removal of the old version and the addition of the new version. This measures total change activity, which is more meaningful for understanding development effort than net change.
Q: How do I reduce excessive LOC in PRs?
A: Encourage smaller, incremental changes:
Break features into smaller pieces
Use feature flags for large changes
Refactor in separate PRs
Set team norms (e.g., "PRs should be reviewable in 30 minutes")
Review work breakdown in sprint planning
Q: What if my team has low LOC but high story points?
A: This is often positive! It suggests:
Efficient, concise code
Good reuse of existing components
Architectural work (less code, more design)
Mature codebase (fewer changes needed)
Story points measure value; LOC measures volume. Focus on value.
Q: Can I see LOC by programming language?
A: Yes, use the language breakdown filter to see which languages account for the most code changes. This helps understand team focus areas and technology adoption.
Q: How does code generation (AI tools, scaffolding) affect LOC?
A: AI-assisted code IS counted in total LOC, but Span separately tracks it in AI-specific metrics. This lets you measure both total output and AI contribution percentage.
Q: Should refactoring show up as high LOC?
A: Yes—refactoring often involves removing old code and adding refactored code, resulting in high LOC (both added and removed). This is expected and often indicates positive code health improvements.
Need Help?
For additional support with the Total Lines of Code report:
Visit the Span Help Center
Contact your Customer Success Manager
Email support@span.app
This documentation reflects Span's platform capabilities as of the current version. Features and calculations are subject to updates.