Product Management

Velocity is a rubbish metric. (Fight me.)

You are using velocity wrong.

Cross-functional teams working on both new and existing products need more than a team velocity to understand workload and help drive product roadmaps and release plans. The myopic focus of Agile purists on this metric is harmful to healthy rapid development practices. We can do better!

Erm, I’m not sure what velocity is…

Welcome to Agile! Velocity is a numeric representation of the work effort performed by a team. On scrum-based projects, this number is used to help understand how much work can be accomplished within a sprint. Each work item is assigned an effort value, the team commits to a target total, and then the fun begins. At the end of the sprint, the target is compared to the actual effort completed to inform the iterative planning. Initially, the goals are educated guesses. As the effort matures, a good guideline is to use the average of the last three sprints to help decide on your target. An important note: this is a group measure, not an indication of individual contribution. While adding another person to a 5-person team will increase bandwidth, it doesn’t actually translate to a 20% increase in velocity.
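That three-sprint-average guideline can be sketched in a few lines. This is a hypothetical helper (the function name and history values are made up for illustration):

```python
# Hypothetical sprint history: completed story points per sprint, oldest first.
history = [18, 24, 21, 23, 26]

def target_velocity(completed_points, window=3):
    """Suggest a sprint target as the average of the last `window` sprints."""
    recent = completed_points[-window:]
    return round(sum(recent) / len(recent))

print(target_velocity(history))  # average of 21, 23, 26 -> 23
```

The point of the window is recency: early sprints reflect a team still forming, so they shouldn't drag down (or inflate) the current target.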

Why people like velocity

Everyone loves numbers. They are easy to read, easy to compare, make lovely charts and graphs, and make us all feel wonderful when they trend in the right direction. BUT—it’s important to use them to measure the right elements in order to tell an accurate story.

One big challenge with Agile adoption on technical projects is the discomfort with what seem like ambiguous progress measurements. The unfortunate result is that in an effort to exert control and reduce ambiguity, stakeholders try to use velocity to replace the more traditional waterfall metrics—which is essentially applying the most broken part of waterfall to Agile. (Anyone who has been on a multi-year, gargantuan waterfall project can tell you that the reports showed without a shadow of a doubt that the entire project stagnated, was over budget, behind schedule, and no longer relevant to the business unit that funded it. Hooray for clear metrics!)

Those who have been on Agile projects—especially as n00bs—can testify to the roller coaster of emotions from feeling anxious every moment between sprint planning and demo, elated by demo results, confused by refinement sessions for a massive backlog, and then anxious all over until a relatively successful release that no one but the scrum team leads felt confident would occur. (I have no idea why people think software is a stressful field…)

All kidding aside, it is important to measure project and product trajectory and success. With Sprint Velocity, it is very clear what number was planned and what number was achieved—which can be very enticing to someone looking for a way to demonstrate project health. Velocity is most useful as a tool for helping cross-functional teams set goals for a fixed duration of effort. Dividing what could otherwise be an overwhelming set of work into increments allows the team to set realistic expectations and achievable goals. This process also facilitates micro-wins. Some sprints will exceed expectations, which helps maintain morale and team engagement and contributes to overall project success.

Why velocity is rubbish

Great, so what’s the problem? There are several challenges when relying on velocity as a primary project metric. The biggest issue is that the measure itself is misleading. The nature of Agile estimation is comparative and iterative. In the example below, the team has dramatically increased its velocity sprint over sprint. Someone analysing this chart from a strictly numerical perspective could easily draw the conclusion that the next sprint will show at least another 20% increase.

BUT—here’s what really happened:

The most common reasons for a downtick in velocity

  • Timing within the overall project: In Sprints 0-3, the team is still forming and becoming familiar with the solution. Assumptions made during planning need to be vetted, environments set up, questions asked, and scope clarified. It will take some time to settle into a predictable cadence.
  • Big, unexpected blocker: Environment outages, incompatible integration paths, the only person who knows a critical piece of information is in Barbados. Surprises like these can really impact progress. Keep in mind that even if blocked items can be paused and other work picked up, it might be too far along in the Sprint to finish, and you've still burned time discussing and deciding what to do.
  • Unclear requirements: You can’t build it fast if you don’t know what it is supposed to do or be. The back-and-forth on clarifications is often necessary, but can also impede fast progress.
  • Not all sprints are created equal: Despite the team’s best intentions with estimation, if there are a disproportionate number of more simple or more complex stories, results may vary. A sprint with many clearly-defined, low point-value items may be much more achievable than a sprint with a small number of highly complex stories. Large stories have more risk for blockers or questions and are much more likely to carry over into another sprint. On paper, this looks like a huge productivity hit, but in reality, excellent progress was made toward the overall product release.
  • Shift in team makeup or availability: Any time team members are added or rolled off, or someone is out for a significant portion of a sprint, the working dynamic is disrupted. This can increase or decrease velocity.
  • Changes mid-sprint: This one is absolutely my kryptonite. We all want to please our stakeholders and overdeliver. It is so tempting to accept a tiny modification or just one more little story. The hard truth is that even small changes require discussion, decision, and execution that can derail progress on other items and distract from the team’s primary goals. It’s poor methodology, it is expensive, and this siren song lures me in at least once per project. Be stronger than I am!

One bad sprint is not a harbinger of doom, nor should it be completely ignored. Because of these variables, velocity is a poor metric for informing tactical decisions. However, velocity IS excellent for demonstrating a pattern over time. Averaging the number of story points a team can take on over multiple sprints is an excellent way to assess team health and overall trajectory, and to determine whether adjustments are needed. Seems simple enough, yet the industry is rife with examples of misuse.
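To see the pattern rather than the noise, a rolling average works well: one blocked sprint dents the trend line without dominating it. A minimal sketch, with made-up numbers including one bad sprint:

```python
# Hypothetical completed points per sprint; sprint 3 hit a big blocker.
history = [20, 22, 8, 21, 23, 24]

def rolling_average(points, window=3):
    """Rolling mean of completed points; smooths out one-off bad sprints."""
    return [round(sum(points[i - window + 1 : i + 1]) / window, 1)
            for i in range(window - 1, len(points))]

print(rolling_average(history))  # [16.7, 17.0, 17.3, 22.7]
```

The raw numbers swing from 22 down to 8 and back, but the averaged trend shows a team that absorbed a blocker and recovered.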

Top ways to ruin your project with velocity metrics

  • Assuming that velocity won't change. Keep in mind that velocity is a planning tool, not a precise measure. Trying to benchmark against a number that is designed to vary is impossible.
  • Using velocity as the only predictor of progress on large projects. A successful product has little to do with an arbitrary numeric planning tool and everything to do with building out the right features in a timely fashion.
  • Pressuring a team to hit a particular velocity without examining the root cause of variances. This (unfortunately common) mistake leads to estimate padding and low morale.
  • Punishing the engineer with the lowest velocity. Velocity is a team metric, not a measurement of individual contributions. As a seasoned Agile professional and presumed human, I struggle to understand why anyone would think this is a good idea. This needs to stop immediately. /soapbox-rant

How we can do better

Great, we’ve poked holes in the common uses—but how can we fix the problem? Velocity is essential to effective Agile practice, but it cannot stand alone. It is important that velocity is communicated in the right context, alongside complementary pieces of information, to tell a comprehensive story. Some other important considerations include:

  • Total time elapsed over planned project duration (e.g. We are in Sprint 6 of 10. We are three months away from a big industry conference.)
  • For projects with budget guidelines, percent of budget spent or remaining budget
  • If the total scope is known, approximately how much scope is complete versus remaining
  • If sizing is not available, measures such as count of stories completed and remaining within this project or planned release
  • Count of blocked or flagged stories
  • (For more tactical planning) The actual resources available for the sprint (If 50% of your team is out at a conference for a week, your target velocity should be adjusted accordingly.)
  • (For advanced metrics) Cycle time (the total time to complete a story, including pauses for blockers) and its standard deviation over time
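Cycle time and its spread are easy to compute from story start and finish dates. A small sketch with invented dates (real teams would pull these from their tracker):

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical story (started, finished) dates, blockers included.
stories = [
    (date(2024, 3, 4), date(2024, 3, 7)),
    (date(2024, 3, 5), date(2024, 3, 14)),  # paused for a blocker
    (date(2024, 3, 11), date(2024, 3, 15)),
]

# Cycle time in calendar days per story.
cycle_times = [(done - started).days for started, done in stories]
print(f"mean cycle time: {mean(cycle_times):.1f} days, "
      f"std dev: {stdev(cycle_times):.1f}")  # mean 5.3, std dev 3.2
```

A high standard deviation relative to the mean is itself a signal: it suggests blockers and unclear requirements, not just a slow team.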

Real-world example

On a recent project, the team discovered two terrifying things by the middle of our second official sprint. 1. A big assumption we made about reusing legacy code was dead wrong (to the tune of an eventual 25% scope increase. Ouch.) 2. We were operating at about half of the velocity we initially targeted. (Now, in a prior life in other work environments, we would have asked the team to double the velocity on the next sprint to try to recover and ignored the scope implications through wishful thinking.) Fail.

What we actually did was report the project status to stakeholders using velocity. (Gasp!) We openly shared scope changes, time elapsed, budget actuals and projections, newly updated assumptions, and release priorities. We used all of these pieces of information to collaborate on a mitigation and recovery strategy which included an adjustment of the expected velocity down to what the team thought they could actually accomplish. After that, everything went perfectly according to plan. Wrong!

After several twists and turns (including new engineers, leadership changes, environment outages, illness, and scope creep galore), we completed the project over budget and slightly behind schedule. Success! No, really. Transparent, comprehensive reporting grounded in the full context of the process and the target business value won out, where velocity in a vacuum would have verified and vilified. (Okay, I'll stop.) The bottom line: everyone won on a release that could have been a disaster, because we took the velocity measure with a grain of salt and used all of the tools available to get practical, usable results.

Summary

Ultimately, when it comes to velocity, one technical professional’s rubbish is another’s treasure. The real differentiator is knowing when and how to leverage it.