Three metrics for tracking sprint execution
For the longest time, I looked for a lightweight but meaningful way to gauge how my team was doing when it came to execution. Over the years, I’ve tried many approaches, but one framework has stuck with me because of how much signal it gives about how a team is really operating. I was introduced to this framework at Marqeta by Kim Stroeger (thanks, Kim!) and I have been using it ever since.
The metrics
Every sprint, I track three simple metrics for my team:
Say/Do
This one’s about reliability. It’s the ratio of committed tasks completed during the sprint to the tasks the team committed to when planning it.
Say/Do = (# of committed tasks completed) / (# of committed tasks at sprint start)
For example, if we start a sprint with 10 committed tasks and complete 6 of them, our Say/Do is 60%. A high Say/Do over time usually means the team understands its capacity well and follows through on its commitments. A low Say/Do can flag problems with estimation, unexpected churn, or unclear priorities.
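If it helps to see the arithmetic spelled out, here’s a minimal Python sketch of the Say/Do calculation. The function name and parameters are just illustrative, not part of the original framework.

```python
def say_do(committed_at_start: int, committed_completed: int) -> float:
    """Say/Do: committed tasks completed / tasks committed at sprint start."""
    if committed_at_start == 0:
        raise ValueError("Sprint must start with at least one committed task")
    return committed_completed / committed_at_start

# Example from the text: 10 committed, 6 completed -> 60%
print(f"{say_do(10, 6):.0%}")  # 60%
```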
Churn
Churn measures how much the plan changes after the sprint starts.
Churn = ( (# of tasks added after sprint start) + (# of tasks removed after sprint start) ) / (# of committed tasks at sprint start)
For example, if we add 2 tasks mid-sprint and remove 1 from the original plan of 10, our Churn is 30%.
Churn helps assess how stable the sprint plan was. If it’s consistently high, that might mean we’re discovering too much work after we start, or our sprint planning isn’t sufficiently informed.
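Here’s the same kind of sketch for Churn, again with illustrative names. Note that added and removed tasks both count toward the numerator.

```python
def churn(committed_at_start: int, added: int, removed: int) -> float:
    """Churn: (tasks added + tasks removed after sprint start) / tasks committed at start."""
    if committed_at_start == 0:
        raise ValueError("Sprint must start with at least one committed task")
    return (added + removed) / committed_at_start

# Example from the text: 10 committed, 2 added, 1 removed -> 30%
print(f"{churn(10, added=2, removed=1):.0%}")  # 30%
```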
Total Done
This one’s all about capacity understanding. It looks at the total work completed in the sprint — original tasks plus any added — relative to the original sprint plan.
Total Done = (# of tasks completed***) / (# of committed tasks at sprint start)
*** includes tasks that were originally committed at sprint start and those added after
If we start with 10 tasks, add 5, remove 1, and finish all 14, our Total Done is 140%. That might look great, but if it happens all the time, it could point to sandbagging or chronic under-commitment.
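And a sketch of Total Done, with the footnote made explicit in code: the numerator counts every completed task, whether it was committed at the start or added later. The names are again just illustrative.

```python
def total_done(committed_at_start: int,
               committed_completed: int,
               added_completed: int) -> float:
    """Total Done: all completed tasks (committed + added) / tasks committed at start."""
    if committed_at_start == 0:
        raise ValueError("Sprint must start with at least one committed task")
    return (committed_completed + added_completed) / committed_at_start

# Example from the text: start with 10, add 5, remove 1, finish all 14 remaining tasks
# (9 of the original commitments plus the 5 added) -> 140%
print(f"{total_done(10, committed_completed=9, added_completed=5):.0%}")  # 140%
```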
Tracking and Usage
I’ve written a small Google Apps Script that computes these metrics automatically based on our JIRA data (I’ll share that in a future post). But honestly, you can just as easily calculate these by looking at your sprint report in JIRA and doing the math by hand or in a spreadsheet.
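I can’t share that script here, but as a rough illustration of the spreadsheet route, here’s a Python sketch that computes all three metrics from a CSV export of a sprint report. The column names (status, committed_at_start, removed_mid_sprint) are hypothetical; map them to whatever your JIRA export actually contains.

```python
import csv

def sprint_metrics(csv_path: str) -> dict:
    """Compute Say/Do, Churn and Total Done from a one-row-per-task sprint export.

    Assumes hypothetical columns: 'status' ('Done' or not), plus 'committed_at_start'
    and 'removed_mid_sprint' flags ('yes'/'no'). Adjust to match your own export.
    """
    committed = added = removed = committed_done = added_done = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            at_start = row["committed_at_start"].strip().lower() == "yes"
            was_removed = row["removed_mid_sprint"].strip().lower() == "yes"
            done = row["status"].strip().lower() == "done"
            if at_start:
                committed += 1
                if was_removed:
                    removed += 1
                elif done:
                    committed_done += 1
            else:
                added += 1
                if done:
                    added_done += 1
    return {
        "say_do": committed_done / committed,
        "churn": (added + removed) / committed,
        "total_done": (committed_done + added_done) / committed,
    }
```

The same one-row-per-task layout is easy to reproduce by hand in a spreadsheet with a few count formulas, which is all the original math really needs.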
These metrics aren’t something I share with upper leadership. Not because they aren’t useful, but because numbers like these can be misused. It’s tempting to reduce a team’s performance to a percentage, and that’s almost always misleading.
Instead, I use them for:
- Team retrospectives, to spark reflection.
- My own sense of team health, so I can look for signs of overload, uncertainty, or process dysfunction.
What they have revealed
Here are some real patterns I’ve seen:
- Low Say/Do + High Churn: We’re committing to one thing and then getting pulled in other directions. Usually a signal that product owners and scrum masters need to prep better or that the team is in a reactive mode (like IT support). In the latter case, I’ve even recommended switching from Scrum to Kanban.
- Low Say/Do + Low Churn: Our scope/capacity estimates are off. This is fixable with better estimation and planning.
- High Total Done (>100%) regularly: We might be sandbagging — either under-committing or inflating estimates. Seen this more than once.
A pinch of salt…
If you’re an engineering manager or a scrum master, I encourage you to try tracking these and, more importantly, share the data with your team. Done right, this kind of reflection can massively improve how responsibly and confidently a team commits to work.
But, please don’t make these mistakes:
Don’t use numbers to judge: These metrics start conversations. They don’t tell the full story. I’ve seen teams with perfect Say/Do ratios that were gaming the system by hiding work on secret JIRA boards or backlogs. And I’ve seen teams with “bad” metrics absolutely crushing critical customer deliverables. Context matters more than percentages.
Look for patterns, not one-offs: A low sprint Say/Do once? Probably fine. Three times in a row with a corresponding trend in churn? That’s something to dig into.
Don’t let metrics override agility: It’s okay to change course mid-sprint if something high-impact comes up. Don’t reject that change just to protect your Churn number. The point of Scrum is being agile, not being rigidly predictable.
These metrics are easy to track, simple to understand, and incredibly powerful when used with care. Just remember: they’re a mirror, not a scoreboard. Use them to see what’s really going on and to help your team get better every sprint.