Estimate Completed / Week Report
Last updated: January 27, 2026
Overview
The Estimate completed/week metric measures story points completed per week, normalized by active contributor days. This metric provides an effort-based view of team velocity that accounts for work complexity and time off, making it ideal for capacity planning when your team uses story point estimates.
At a glance:
What it measures: Story points completed per week per active contributor
Why it matters: Tracks velocity based on effort/complexity, not just issue count
Metric type: Velocity indicator (effort-based)
Format: Decimal (e.g., 23.5 points/week)
How It's Calculated
Estimate Completed/Week = (Total Story Points Completed / Active Days) × 7

Components:
Total Story Points Completed: Sum of all story point estimates for issues marked as "Done"
Formula:
SUM(estimate) WHERE normalized_status = 'Done'
Active Days: Total number of days worked by all team members, excluding:
Out-of-office (OOO) days
Weekends
Inactive contributors
Important: Only issues with explicit story point estimates are included. Issues without estimates contribute 0 to the calculation.
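The calculation above can be sketched in a few lines of Python. This is an illustrative sketch, not Span's implementation: the `status` and `estimate` field names are assumptions, and real issue data would come from your project management integration.

```python
def estimate_completed_per_week(issues: list[dict], active_days: int) -> float:
    """Sum story points for Done issues, normalize by active days, scale to a week."""
    # Issues without an explicit estimate contribute 0 points.
    total_points = sum(
        issue.get("estimate") or 0
        for issue in issues
        if issue.get("status") == "Done"
    )
    return (total_points / active_days) * 7

issues = [
    {"status": "Done", "estimate": 5},
    {"status": "Done", "estimate": 3},
    {"status": "Done", "estimate": None},      # unestimated: counts as 0
    {"status": "In Progress", "estimate": 8},  # not Done: excluded
]
print(estimate_completed_per_week(issues, active_days=14))  # 8 points / 14 days × 7 = 4.0
```

Note that the unestimated and in-progress issues are silently dropped, which is exactly why estimate coverage (discussed below) matters.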
Why Use Estimate-Based Throughput?
The Problem with Count-Based Metrics
Consider these two scenarios:
Team A (count-based view):
Completes 20 issues in a week
Throughput: 20 issues/week
Looks highly productive!
Team A (estimate-based view):
Those 20 issues = 15 story points (0.75 points per issue average)
Estimate throughput: 15 points/week
Reveals they completed mostly small, simple issues
Team B (count-based view):
Completes 8 issues in a week
Throughput: 8 issues/week
Looks less productive...
Team B (estimate-based view):
Those 8 issues = 40 story points (5 points per issue average)
Estimate throughput: 40 points/week
Reveals they completed complex, high-value work
The Value of Estimate-Based Metrics
Estimate completed/week tells you about capacity in terms of effort, not just activity:
✅ Better for heterogeneous work: When your team handles both trivial bugs and major features, estimates account for complexity
✅ More accurate forecasting: "We can do 30 points/sprint" is more reliable than "we can do 15 issues/sprint" when issue size varies
✅ Fairer comparisons: Compare teams or time periods based on effort, not just issue count
✅ Planning accuracy: Aligns with sprint planning practices that use story points
Estimate vs. Issue Count Throughput: Which Should You Use?
| Use Estimate Completed/Week When... | Use Issues Completed/Week When... |
| --- | --- |
| ✅ Your team consistently estimates all work with story points | ✅ Your team doesn't use story points, or estimates inconsistently |
| ✅ Issue complexity varies significantly (mix of small bugs and large features) | ✅ Issues are relatively uniform in size and effort |
| ✅ You use story point-based sprint planning | ✅ You focus on throughput and flow rather than capacity |
| ✅ You need to compare effort/capacity across teams | ✅ You want simpler metrics without estimation overhead |
| ✅ Your retrospectives discuss "velocity" in points | ✅ Your team prefers count-based metrics |
Best Practice: Use both! They provide complementary insights:
Estimate throughput shows capacity and effort
Issue throughput shows flow and productivity
Finding the Report
Navigation:
Go to Productivity in the main sidebar
Select Velocity
Choose Issue Lifecycle or Issue Tracking
The Estimate completed/week metric appears alongside related metrics:
Issues completed/week (count-based version)
Issue Completion Rate
Total Estimate Completed (raw points without normalization)
Issue Cycle Time
Requirements:
Project management integration (Jira, Linear, GitHub Issues, Azure DevOps, etc.)
Story point estimates populated on issues
Active employee tracking (for OOO normalization)
Understanding the Data
What Counts in This Metric?
Included:
✅ Issues marked as "Done" status
✅ Issues with explicit story point estimates
✅ All integrated project management tools (Jira, Linear, etc.)
✅ Normalized for active contributor days (OOO excluded)
Excluded:
❌ Issues without story point estimates (contribute 0)
❌ Issues in "To Do" or "In Progress" status
❌ Days when contributors were out-of-office
❌ Work not tracked in your issue management system
How Missing Estimates Affect Results
This metric only includes issues with estimates. If estimate coverage is low, results may be misleading:
Example:
Week 1: Team completes 20 issues
15 issues have estimates totaling 45 points
5 issues have no estimates
Metric shows: 45 points/week (missing 5 issues' worth of work)
Check your estimate coverage: If fewer than 70-80% of your issues have estimates, consider using "Issues completed/week" instead or improving estimation practices.
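You can estimate your coverage with a quick script like the sketch below. The `status`/`estimate` field names are illustrative assumptions, not Span's schema; the numbers reproduce the Week 1 example above.

```python
def estimate_coverage(issues: list[dict]) -> float:
    """Fraction of Done issues that carry an explicit story point estimate."""
    done = [i for i in issues if i.get("status") == "Done"]
    if not done:
        return 0.0
    estimated = sum(1 for i in done if i.get("estimate") is not None)
    return estimated / len(done)

# Week 1 from the example: 20 Done issues, 15 with estimates.
week1 = (
    [{"status": "Done", "estimate": 3}] * 15
    + [{"status": "Done", "estimate": None}] * 5
)
print(f"{estimate_coverage(week1):.0%}")  # 75%: borderline against the 70-80% threshold
```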
Key Use Cases
1. Sprint Planning & Capacity Forecasting
Use historical velocity to plan realistic sprints:
Example:
Your team averages 32 points/week over the last 6 sprints
You have a 2-week sprint ahead
Capacity: 32 × 2 = 64 points (realistic sprint commitment)
Best for:
Setting sprint goals based on actual capacity
Avoiding over-commitment
Forecasting multi-sprint initiatives
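The capacity arithmetic above is simple enough to express directly; the optional `buffer` parameter is an addition here (it anticipates the confidence-buffer practice described later in this doc), not part of the example.

```python
def sprint_capacity(points_per_week: float, sprint_weeks: int, buffer: float = 1.0) -> float:
    """Weekly velocity × sprint length, optionally discounted by a confidence buffer."""
    return points_per_week * sprint_weeks * buffer

print(sprint_capacity(32, 2))       # 64.0 points, as in the example
print(sprint_capacity(32, 2, 0.8))  # 51.2 points with an 80% buffer
```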
2. Release Planning & Roadmap Forecasting
Project delivery timelines based on effort:
Example:
Epic totals 180 story points
Team velocity: 30 points/week
Forecast: 180 ÷ 30 = 6 weeks to complete
This is more accurate than counting issues when work complexity varies.
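A forecast helper along these lines would round up, since a partial week still consumes a full week on a roadmap; this is a sketch of the arithmetic in the example, not a Span feature.

```python
import math

def forecast_weeks(epic_points: float, points_per_week: float) -> int:
    """Weeks to complete an epic at a given velocity, rounded up."""
    return math.ceil(epic_points / points_per_week)

print(forecast_weeks(180, 30))  # 6 weeks, as in the example
print(forecast_weeks(185, 30))  # 7 weeks: partial weeks round up
```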
3. Team Capacity Comparison
Compare teams fairly based on effort, not just issue count:
Without normalization:
Team A: 25 issues/week
Team B: 18 issues/week
Conclusion: Team A is more productive?
With estimate-based throughput:
Team A: 30 points/week (small, simple issues)
Team B: 55 points/week (complex features)
Conclusion: Team B has higher capacity for complex work
4. Measuring Process Improvements
Track velocity changes after implementing improvements:
Before process change:
Baseline: 25 points/week
After implementing continuous integration:
New velocity: 32 points/week
Improvement: 28% increase in capacity
5. OOO-Adjusted Velocity Tracking
Get fair comparisons across periods with different vacation levels:
Without OOO adjustment:
December: 40 points completed (looks low)
January: 80 points completed (looks high)
With OOO adjustment (normalized per active day):
December: 30 points/week (many team members on vacation)
January: 32 points/week (similar velocity, just more active days)
The metric automatically accounts for time off, giving you apples-to-apples comparisons.
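The normalization step can be sketched as below. The active-day counts are assumed for illustration (the example above doesn't state them); the raw point totals match the December/January figures.

```python
def weekly_rate(points_completed: float, active_days: int) -> float:
    """Normalize raw points by active contributor days, scaled to a 7-day week."""
    return points_completed / active_days * 7

# Assumed active-day counts: December has heavy vacation, January a full month.
dec = weekly_rate(40, 9)
jan = weekly_rate(80, 18)
print(round(dec, 1), round(jan, 1))  # both ≈31.1: the same underlying velocity
```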
6. Identifying Velocity Trends
Track whether your team is speeding up, slowing down, or staying consistent:
Increasing velocity: Process improvements working, team maturing
Decreasing velocity: Technical debt accumulating, team capacity issues
Stable velocity: Predictable delivery, healthy sustainable pace
Related Metrics
Use these complementary metrics for complete insight:
| Metric | What it Shows | How It Complements Estimate Throughput |
| --- | --- | --- |
| Issues completed/week | Count-based throughput | Shows volume vs. effort. High issue count + low estimate throughput = many small issues. |
| Total Estimate Completed | Raw story points (not normalized) | Shows absolute point totals without weekly normalization. |
| Issue Completion Rate | Percentage of issues marked Done | Shows progress vs. velocity. You can have high velocity but low completion if backlog grows faster. |
| Issue Cycle Time | Average time per issue | Inversely related: faster cycle time enables higher throughput. |
| Time in To Do / In Progress | Time in each workflow stage | Reveals bottlenecks limiting velocity. |
| Average Estimate per Issue | Points per issue completed | Shows issue complexity trends. Rising = work getting harder. |
Interpreting Your Results
Healthy Patterns ✅
Stable or increasing velocity combined with:
Consistent points/week with low variance (±10%)
Reasonable cycle times
High estimate coverage (>80% of issues estimated)
Sustainable pace (not at expense of quality or burnout)
Indicates: Predictable delivery, healthy team capacity, accurate estimation
Warning Signs ⚠
Declining velocity may indicate:
Technical debt accumulation: Code getting harder to change
Team capacity issues: Reduced staffing or increased OOO
Scope creep: Issues growing in size (check average estimate per issue)
Process bottlenecks: Check cycle time and time-in-stage metrics
Estimation inflation: Teams inflating estimates (check issue count throughput too)
Very high velocity may indicate:
Estimate deflation (underestimating work)
Cherry-picking easy work
Quality shortcuts (check bug rates)
Team gaming metrics
Highly variable velocity (erratic week-to-week) may indicate:
Inconsistent estimation practices
External interruptions
Large issues skewing weekly totals (consider longer time periods)
Incomplete issue updates
Low estimate coverage (<70% issues estimated) means:
Metric may not reflect true velocity
Consider using Issues completed/week instead
Improve estimation practices
Best Practices
For Teams Using Story Points
Estimate consistently: Ensure 80%+ of issues have estimates before tracking this metric
Calibrate estimates: Use planning poker or team estimation to maintain consistency
Track for 4-8 sprints: Establish baseline velocity before using for planning
Use rolling averages: Calculate 3-4 sprint average to smooth variability
Update estimates: If story points change mid-work, update the issue
For Planning & Forecasting
Apply a confidence buffer: Use 80% of average velocity for conservative planning
Account for variance: If velocity is erratic, widen your forecast ranges
Review regularly: Recalculate baseline every 3 months or after major team changes
Don't cherry-pick data: Include all sprints, even slower ones, in your baseline
For Managers
Compare context-appropriately: Only compare teams doing similar work with similar estimation scales
Monitor both metrics: Track estimate throughput AND issue count throughput
Don't weaponize velocity: Velocity is for planning, not performance reviews
Investigate significant changes: >20% drop for 2+ sprints requires investigation
Check estimate quality: If velocity is volatile, estimates may be inconsistent
For Organizations
Standardize estimation: Align teams on what story points mean (Fibonacci scale, t-shirt sizes, etc.)
Track estimate accuracy: Compare estimated vs. actual cycle time to validate estimates
Benchmark wisely: Use percentile rankings to understand relative performance
Correlate with outcomes: High velocity means nothing if quality or morale suffers
Common Questions
Q: What's a "good" velocity in points/week?
A: This varies enormously by team size, issue granularity, estimation scale, and work type. A 5-person team might do 20-50 points/week. Use Span's percentile benchmarks and establish your own baseline rather than comparing to arbitrary standards.
Q: Our velocity is inconsistent week-to-week. Why?
A: Common causes: (1) Large issues completed in some weeks but not others, (2) Estimation inconsistency, (3) Work not updated promptly, (4) Variable OOO. Solution: Look at 4-week rolling averages and ensure estimation practices are consistent.
Q: Should we use Fibonacci (1,2,3,5,8) or linear (1,2,3,4,5) story points?
A: Either works—what matters is consistency within your team. Fibonacci is popular because it acknowledges uncertainty in larger estimates. The metric works with any estimation scale.
Q: What if we use t-shirt sizes (S, M, L, XL) instead of numeric points?
A: Map t-shirt sizes to numeric values in your project management tool (e.g., S=1, M=3, L=5, XL=8). The metric will then work normally.
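A mapping like the one suggested above could look like this in practice; the S=1/M=3/L=5/XL=8 values come from the example and should be adjusted to your team's scale.

```python
# Hypothetical t-shirt-to-points mapping (values from the answer above).
TSHIRT_POINTS = {"S": 1, "M": 3, "L": 5, "XL": 8}

completed_sizes = ["S", "M", "L", "M"]  # sizes of this week's Done issues
total_points = sum(TSHIRT_POINTS[s] for s in completed_sizes)
print(total_points)  # 1 + 3 + 5 + 3 = 12
```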
Q: Can I use this metric if only some team members estimate issues?
A: Only if estimate coverage is high (>80% of completed issues). If coverage is low, the metric will undercount velocity. Consider using "Issues completed/week" instead or improving estimation practices.
Q: Our velocity dropped after adding a new team member. Why?
A: Common and expected! New team members need onboarding time, slow down existing members (who spend time mentoring), and may complete less work initially. Velocity typically recovers and increases after 1-2 months.
Q: Should velocity always increase over time?
A: Not necessarily. Stable velocity at a sustainable pace is healthy. Constantly increasing velocity may indicate: (1) team maturity, (2) process improvements, OR (3) estimation deflation, cutting corners, or burnout. Track alongside quality metrics.
Q: Can I compare velocities across different teams?
A: Use with caution. Only meaningful if teams: (1) use the same estimation scale, (2) estimate similarly, (3) work on similar types of issues. Even then, velocity is primarily for within-team planning, not cross-team comparison.
Q: How does this metric handle issues without estimates?
A: They're excluded (contribute 0 points). If you complete 10 issues but only 7 have estimates, the metric only counts those 7 issues' points.
Troubleshooting
"My estimate throughput seems too low"
Check:
✓ What percentage of completed issues have estimates? (Should be >80%)
✓ Are estimates updated if work size changes?
✓ Are issues being marked "Done" promptly?
Action:
Review estimate coverage in your project management tool
Compare with "Issues completed/week" to see if it's an estimation issue
Check if large issues are being broken down appropriately
"Estimate throughput doesn't match issue throughput"
Example: 30 issues completed/week but only 20 points/week
This means: Your completed issues average 0.67 points each (small issues)
Check:
✓ Are you completing mostly bugs/small tasks vs. features?
✓ Are estimates too conservative?
✓ Are large issues being broken down into smaller pieces?
Action: This divergence is normal and expected—it's why both metrics are valuable!
"Our velocity is extremely variable"
Example: 20 pts/week, then 50 pts/week, then 15 pts/week
Check:
✓ Are large issues (8+ points) skewing weekly totals?
✓ Is estimation consistent across the team?
✓ Are issues updated promptly when completed?
Action:
Use 4-week rolling averages to smooth volatility
Consider breaking down large issues for more consistent flow
Review estimation practices in retrospectives
"Velocity dropped suddenly"
Check:
✓ Did team size decrease or OOO increase significantly?
✓ Did issue complexity increase (check average estimate per issue)?
✓ Are there new bottlenecks (check cycle time and time-in-stage)?
✓ Is estimate coverage still high?
Action: Investigate immediately—sustained drops signal real capacity issues
"Estimate coverage is low"
Example: Only 40% of completed issues have estimates
Problem: The metric is unreliable because it's missing most work
Action:
Switch to "Issues completed/week" until estimation improves
Implement estimation practices in sprint planning
Make estimation a requirement for work to be "Ready"
Track estimate coverage as a separate health metric
Getting Started
Step 1: Verify Estimate Coverage
Before relying on this metric, check what percentage of your completed issues have estimates. You need >80% coverage for reliable data.
Step 2: Establish Your Baseline
Track velocity for 4-8 weeks (2-4 sprints) to understand your team's typical capacity.
Step 3: Calculate Your Planning Velocity
Use a rolling 3-4 sprint average and apply a confidence buffer (typically 80-90% of average).
Example:
Last 4 sprints: 28, 32, 30, 26 points/week
Average: 29 points/week
Planning velocity (80% buffer): 23 points/week
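Steps 2 and 3 can be sketched as a small helper, reproducing the numbers from the example above; this is illustrative arithmetic, not a Span API.

```python
def planning_velocity(recent_sprints: list[float], buffer: float = 0.8) -> int:
    """Rolling average of recent sprint velocities, discounted by a confidence buffer."""
    average = sum(recent_sprints) / len(recent_sprints)
    return round(average * buffer)

print(planning_velocity([28, 32, 30, 26]))  # avg 29 × 0.8 = 23.2 → 23 points/week
```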
Step 4: Use for Sprint Planning
In sprint planning, commit to work that fits within your planning velocity.
Step 5: Monitor Trends
Review velocity after each sprint. Investigate changes >15-20%.
Step 6: Iterate
Adjust your baseline every 3 months or after major team changes.
Integration with Agile Practices
For Scrum Teams
Use estimate throughput to determine sprint capacity
Track velocity on your sprint dashboard
Discuss velocity trends in retrospectives
Use for release planning and roadmap forecasting
For Kanban Teams
Use estimate throughput to measure flow efficiency
Set WIP limits based on sustainable velocity
Monitor for declining velocity (signals bottlenecks)
Track alongside cycle time for complete flow metrics
For Hybrid/Scrumban Teams
Use velocity for periodic planning cycles
Monitor continuously for flow health
Combine with cycle time and completion rate for comprehensive view
Need Help?
If you have questions about interpreting your Estimate completed/week metrics or want guidance on improving estimation practices and velocity tracking, reach out to your Span customer success manager.