Predicting the future: Crystal balls, tea leaves, and product estimation
How long will it take? How much will it cost? If you’re responsible for answering these questions, this session is for you. Learn how to use lean requirements, historical velocity, volatility, round-tripping, and other delivery metrics to increase confidence in the commitments the product team makes to the business.
Watch the talk:
Transcript:
Hey everyone, I'm Laura Graves, Managing Director at Devbridge Group. We're here today to talk about everybody's favorite topic, product estimation. Because, at the end of the day, what we all want to know is: how long is it going to take? And how much is it going to cost? When I start talking about that, I think what most people hear is that I'm asking them to predict the future, right? So let's talk about it. Predicting the future is really nothing new. People have been coming up with all kinds of crazy ways to deal with our fear of the unknown for as long as any of us can remember. They look in the bottom of coffee mugs, they gaze into crystal balls, and, my personal favorite, they shake those Magic 8 Balls, where you always get an answer. But we can probably all agree that, as business leaders, this is not how we want our teams to come up with the estimates they bring back to us.
So why does it sometimes feel like the answers we get came out of one of these hocus-pocusy solutions? Well, let's think about that. The reality is that, as business leaders, we don't have infinite piles of money, right? There's always a budget, and we always have to think about how much investment we're going to make in any one stream of work.
We ask our teams to come back to us with an estimate, and we hope that they're using a tool other than that shaky Magic 8 Ball you see in the middle. All right, so why is it so hard? And why do we get the kinds of answers that sound like "well, I don't know, it depends," or "let me get back to you," or "I can give you an answer, but I'm going to have to buffer it somehow"? Well, it's because estimation is scary, right?
And why is it scary? Partially because we've made it that way for our teams, and we have to stop and unlearn some of the behaviors that we've been encouraging, or even doing ourselves, that put our teams in a compromised position when delivering estimates.
Teams don't want to look stupid. So if you're using estimates as a weapon, punishing the team or hanging the estimate over their heads, you're probably not going to get the best output when you ask them for information that's so critical to the decision you're making. So don't make them feel stupid.
The second thing is that the environmental circumstances can be scary, right? What happens if they get the estimate wrong? If they don't ask for enough money, are they going to run out and not be able to finish the features that are so important to bring this product to market and give the end user or customer what they need for the product to be successful?
If they ask for too much money, is there some sort of opportunity cost, something else that's not getting invested in that was really important for making this a marketable solution? And in the most dire straits: is there some consequence to their own job security, or to their ability to get a promotion, tied to whether or not they deliver on the estimate they gave you? Or is the business going to hit the end of the runway and simply run out of money because you haven't made the right investment decisions to actually bring your product to market?
At the end of the day, we need to acknowledge the real reasons why things are scary, put to bed the stuff that's just noise created by a negative culture or politics in your company, and get down to the business of doing the work that empowers you to make the right investment decisions.
So how do we do that? Well, I wouldn't be up here today talking to all of you about this if we always got it right. The reality is that you don't actually learn by always getting it right. You learn a lot more from getting it wrong. Let me tell you about a time that the team and I got it really wrong.
A new client came to us and asked us to estimate a piece of work: re-platforming a piece of software built on legacy technology onto a brand new, updated tech stack. And we said, cool, let's take a look at what's there, go through our normal estimation activities, and come back and let you know what it's going to take. Our original estimate was 16,000 hours, and, you know, that's a lot of work, but we figured we could do it.
Now, we'll come back to this later. We didn't actually estimate in hours, but it's the easiest way to take you through this example. What happened when we got to work and really got beyond that surface-level exploration of what we needed to do? Well, we found 2x more work. The all-in cost and timeline of actually delivering this were almost three times what we expected when we set out to do the work. And at the scale of 50,000 hours' worth of work, that has a real impact on both cost and timeline. We had all come around the table, thought about the potential return on investment of doing this work, and figured it made sense. But when the numbers change this drastically, you really start to question the investment decisions you've made. So we needed, collectively, to go back to the drawing board and rethink how we approach estimates.
And that's really what I'm here to talk to you about today. It probably doesn't surprise you that we didn't go looking into a crystal ball for a better prediction. We said, let's rethink the way we do estimates. And if all of you take a minute, you can probably think of one place in our everyday lives where we accept predicting the future not as some hocus-pocusy thing, but as part of our regular lives, where we all, in fact, make decisions based on the predictions we receive. Can you guess? If you're in Chicago, it's probably what you get from this man every day on WGN, our friend Tom Skilling. And if you're not in Chicago, this is our local weatherman, right? So what's the difference? Why do we accept his future predictions, but not what comes out of a crystal ball? It's because he's not predicting the future, he's forecasting the future.
And there really is an important difference there. Let's think about it. When the weatherman tells us what it's going to look like today, this weekend, or in 10 days, he also gives us transparency into how he came to that conclusion, right? He shows us lots of data. He might show us historical information. And he keeps us updated: what he tells us Monday about the weekend is different from what he tells us Friday, and different again from what he tells us Saturday morning. How can we start to incorporate that type of thinking into the way we approach product estimation?
Here we go. What are the inputs that help us forecast how much a particular product is going to take, both in investment and in the length of time it takes to deliver? There's a series of inputs that help us determine that, and the more transparent we can be, not just with the teams creating the estimate, but with the business stakeholders and the folks writing the check, the better off we'll all be in understanding where that estimate came from.
So here's what we do at Devbridge. We do a workshop where we bring everybody into the same room. We collectively build a story map to represent the scope. And then we talk about assumptions and risks before we even get to delivering the estimate.
Let's step through that. First, the workshop. What is this? Well, it's everybody in the same room, building a shared understanding not just of what we're going to build, but of why we're going to build it. In the room, you've got representatives from engineering, product management, and product design, along with people who can truly speak for the users, and folks who can speak for where the money is going to come from and how this is really going to drive an impact for your business. If you can get everybody together in a room, you get a much better understanding of what you're going to build, why you're going to build it, and why it's so important, and then everyone can collectively make the right decisions about how to approach the problem.
Once we've done that, we come out with this: a story map. It's a visual asset that we can all look back on to understand what we actually talked about that day. What did we agree to build? What did we think was the important set of features that would help us accomplish the business goals we needed?
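If it helps to picture the artifact, here's a minimal sketch, in Python, of what a story map boils down to as data. The activities, steps, and stories below are hypothetical examples, not from the talk: a backbone of user activities, the steps under each, and the candidate stories under each step.

```python
# Hypothetical story map as plain data: activities (the backbone),
# user steps beneath them, and candidate stories under each step.
story_map = {
    "Browse catalog": {
        "Search for a product": [
            "Search by keyword",            # top of the column = higher priority
            "Filter results by category",
        ],
        "View product details": [
            "Show photos and price",
            "Show stock availability",
        ],
    },
    "Check out": {
        "Pay for the order": [
            "Pay by credit card",
            "Save card for later",
        ],
    },
}

# Walking the map flattens it into the backlog you'll eventually see in a tool.
for activity, steps in story_map.items():
    for step, stories in steps.items():
        for story in stories:
            print(f"{activity} > {step} > {story}")
```

The point of the structure is that every story stays attached to the user step and activity it serves, so the "why" survives the translation into a backlog.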
What's next? Assumptions. Remember, I said estimation is scary for the teams. Part of why it's scary is that there's always going to be a significant amount of unknown before you actually do the work, and there's basically no way you're going to get all of the answers before you get started. Why? Well, for two reasons. One, you basically have to do the work to figure it out. And two, it's cost prohibitive: the amount you would have to invest to get that level of certainty isn't really worth it.
Instead, let's state our assumptions and all agree that the estimate that gets delivered is based on those assumptions, and that if they change, the estimate might change too. Some of these might be which browsers we're going to support, which devices we're going to support, or assumptions around integrations. All of these things could significantly impact the total delivery effort required. Let's just agree on the basis for our estimates. Similarly, we have to talk about the risks. These are different from assumptions because they don't always have to do just with technology; there are also interpersonal risks and the risks of collaborating across multiple companies. If we can acknowledge those up front, we can all think collectively about how they might impact our ability to deliver. And only then can we get into actually delivering an estimate.
I'm going to pause here and get a little more tactical about how you actually go about building an estimate, because I've seen a lot of different approaches with many different teams.
Let's talk about it. Once we actually know what we're going to build, how do we go about quantifying how much time it's going to take? One option is T-shirt sizing, and everybody loves to T-shirt size. Why? Because it's a lot less scary, right? Hey, this is small, this is large, this is extra small. You're not really committing to anything concrete. It feels safer, right? Wrong. Why? Because the engineers and the people doing the work sit around and give you T-shirt sizes, but by the time it gets all the way up the food chain, somebody wants to see an actual dollar amount and an actual amount of time. So somebody converted those T-shirt sizes into some sort of time or dollar equivalent. It could be a product manager, a project manager, an engineering manager. The point is that the team that actually built the estimate isn't in the room when that translation happens, and that itself introduces a whole lot of risk into the equation.
What else can we do? The other two things you can do are story points or team sprints. Now, these are great methods to use if you have a stable team. That means the team has already been working together and has established a fairly consistent velocity, right? Not just how much they're delivering every sprint, but how consistently they're delivering. It's not volatile. If they're already in that rhythm, they can probably look at a new piece of work and give you a fairly accurate estimate in story points, which are a measure of relative complexity, or in team sprints, meaning how much work the team can take on in a particular sprint, and help you come up with a timeline for delivery. The problem is that a lot of times we're asking teams to do these estimates as a side project. Maybe it doesn't involve the whole team. Maybe you're mixing folks together from different teams. Story points don't necessarily translate one-to-one between teams, and newly forming teams don't have that regular cadence or a consistent velocity.
So if you ask them to deliver an estimate this way, again, you've got that added layer of translation: okay, is one story point one day? How do we think about this? You're better off skipping that layer of abstraction and just being more straightforward.
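As an aside, "consistent velocity" is easy to check against data the team already has. Here's a rough sketch, with a hypothetical sprint history and an illustrative 15% volatility threshold (the talk doesn't prescribe a specific cutoff):

```python
from statistics import mean, stdev

# Hypothetical history: story points completed in the last six sprints.
completed_points = [21, 24, 19, 23, 22, 20]

velocity = mean(completed_points)                # how much the team delivers per sprint
volatility = stdev(completed_points) / velocity  # coefficient of variation: how consistent

print(f"velocity:   {velocity:.1f} points/sprint")
print(f"volatility: {volatility:.0%}")

# Illustrative threshold, not a rule from the talk: a team with erratic
# history probably can't give a trustworthy story-point estimate yet.
if volatility > 0.15:
    print("Velocity is volatile - consider estimating in person days instead.")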
Person hours. See how that one's red? This is a bad idea. Don't do this, because it's way too easy to use person hours as a weapon after the fact. It's too easy to point at something and say, you said this was only going to be five hours' worth of work, so why did it take you seven and a half? Why did it take you 10? It's too specific.
And being specific is also a problem because you're going to waste a lot of time arguing: okay, is this five hours? Is it four? Is it three? Nobody really knows, and at the end of the day, it's not going to have a meaningful impact on the output. You're just going to invest lots of extra cycles in a part of the conversation that doesn't produce a lot of value.
So what do you do? If you don't have a team that's been working together consistently, or a team with an established velocity, think about estimating in person days. A day is big enough that it's non-threatening for people to talk about. You're not going to argue about the minutiae, but you can get a general sense of how long something's going to take. Okay, great. So now we've got all the inputs.
How do we calculate the estimate? Well, we know how many person days' worth of work it's going to be. We figure out what the team size should be, and that's both team size and team composition: do you need back-end engineers? Front-end engineers? Testing, full-stack engineers, DevOps? All of that, and by the way, all of that should go into the person days estimate. So: person days of work, size of the team, and then number of sprints. We have all the inputs; how do we actually calculate the estimate? Well, if the team was able to give you the number of sprints right from the start based on their story points, that's great. But if they weren't, and they gave you person days, then you need to think about how many days the team estimated, and what team size is actually going to be able to do this work effectively.
We all know that adding more engineers doesn't always mean we can move faster. So really ask the team's opinion: how many people can work on this? How much can be parallelized to get the work done? Once you know the number of days and the team size, you can calculate the number of sprints it's going to take. And once you know the number of sprints, you should be able to figure out the cost.
Now, here at Devbridge, we're a consulting company, so we're pretty transparent about how much an hour of work costs. If you don't have that information inside a product company, you can use some sort of approximation for a single day of engineering time, or a sprint's worth of engineering, to come up with this number.
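To make the arithmetic concrete, here's a minimal sketch of the calculation just described. All of the numbers, especially the daily rate, are hypothetical placeholders:

```python
import math

person_days   = 400     # the team's estimate of the work, in person days
team_size     = 5       # how many people the team says can work in parallel
sprint_length = 10      # working days in a two-week sprint
daily_rate    = 1_000   # cost of one person day, real or approximated, in dollars

# One sprint's capacity is team_size * sprint_length person days.
sprints = math.ceil(person_days / (team_size * sprint_length))

# Cost follows the calendar you're paying for: the whole team is
# allocated for every sprint, not just the raw person-day estimate.
cost = sprints * team_size * sprint_length * daily_rate

print(f"Timeline: {sprints} sprints (~{sprints * 2} weeks)")
print(f"Cost:     ${cost:,}")
```

With these placeholder inputs, 400 person days across a team of five comes out to 8 sprints and $400,000, which gives the business both halves of the answer: how long, and how much.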
And I would argue that even if you're not a consulting company, this is a very important piece of information to provide to business leaders, because they really do need to think about this in terms of return on investment. Without understanding the cost of bringing this feature or product to market, or of starting this new venture, they don't have enough information to weigh that decision. So: you've got the number of sprints, you have the number of people, or full-time equivalents, involved, and the rate you're using, which is either real or approximated, and you can come up with how much it costs. Now we've got the two important pieces of an estimate: how long is it going to take, and how much is it going to cost? Are you still with me? Good, because we're not done yet. So, that's great. How did we still get it so wrong the first time around, right? That whole iceberg thing I showed you, where we were way under?
Well, there's a reality here, which is that the larger the piece of work you're trying to estimate, the harder it is to be accurate, because you're asking the team to take in and absorb so much information that they can't possibly get to the same level of understanding and granularity across the entire scope. So what do we need to do? One, we need to let them break the work into smaller chunks, which are easier to deal with. And two, we have to think about what other tools we can give the team to help with this problem, because we can all acknowledge that it's real. So what did we do? Remember these inputs? We had the scope. We had assumptions. We had risk modifiers. We produced the estimate. We still got it wrong.
So here's what we think. We're adding a step to our process here at Devbridge to make sure we really do our due diligence on how we're going to build the solution, giving the team not just the time to think about the why and the what, but also more time to get into the mechanics of how we bring the solution to life.
And we're doing that by taking the story map and converting it into an entity diagram. Why? Because the story map looks at the product through the user's eyes: what is the flow they're going to walk through? What is the series of features? But when you think about how the sausage actually gets made, you're going to uncover new things about how you'll structure the data model, which things need to talk to each other, where there's potential for overlap, and where there are so many nodes that there's real complexity in building this. And you'll get better questions from the team that inform the assumptions, the risks, the scope, and ultimately the estimate of what you're trying to build. After you've done that exercise of thinking about how you're going to build it, you may actually reorganize the story map you all generated together in a way that more closely approximates the backlog you'll end up with in JIRA, or whatever your product delivery management tool is.
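As a rough illustration, an entity diagram can be reduced to an adjacency list, which makes the "lots of nodes means lots of complexity" point easy to see. The entities below are hypothetical, not from the talk:

```python
# Hypothetical entity diagram as an adjacency list: which things
# need to talk to each other in the data model.
entities = {
    "User":     ["Account", "Order"],
    "Account":  ["Payment Method"],
    "Order":    ["Line Item", "Payment Method", "Shipment"],
    "Shipment": ["Address"],
}

# Counting nodes and edges is a crude but useful complexity signal:
# heavily connected entities tend to hide work the story map alone
# won't surface.
nodes = set(entities) | {e for targets in entities.values() for e in targets}
edges = sum(len(targets) for targets in entities.values())
print(f"{len(nodes)} entities, {edges} relationships")

# Flag entities with many outbound relationships for extra scrutiny
# during estimation (the threshold of 3 is an arbitrary example).
hotspots = [name for name, targets in entities.items() if len(targets) >= 3]
print("Worth a closer look during estimation:", hotspots)
```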
Great, so a quick recap, because that was a lot of information. We figure out what we're going to build and why, and we capture that in a story map. We think about how we're going to build it, and we capture that in an entity diagram. Then we talk through our assumptions and our risk modifiers, and only then do we get to the estimate. Great. What's in the estimate? Two things: time and money.
So, one last thing on calculating that cost. You've got the number of sprints. You have the number of people. And there's one key thing we haven't really dug into: a risk multiplier. You asked the team to tell you what could actually go wrong. So rather than taking a wild swing, or having everybody buffer along the way, have the team quantify each of those risks for you and how it might impact your estimate, and incorporate that when you're doing this calculation. Again, the more transparency, the better, so we can all understand what's going into the estimate. It also gives you, the business leader, an opportunity to de-risk this for the team, and potentially get a better read on the estimate, right?
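Here's a sketch of how those quantified risks might fold into the cost math. The specific risks and multipliers are hypothetical; the point is that every risk is named and priced, rather than hidden in everyone's private buffer:

```python
# Base cost from the sprints * team * rate calculation sketched earlier.
base_cost = 400_000

# Hypothetical risks, each quantified by the team as a cost multiplier.
risks = {
    "No dedicated team (context switching)": 1.15,
    "Third-party API contract not final":    1.10,
    "New-to-us tech stack":                  1.05,
}

risk_adjusted = base_cost
for risk, multiplier in risks.items():
    risk_adjusted *= multiplier
    print(f"{risk}: x{multiplier}")

print(f"Base estimate:          ${base_cost:,}")
print(f"Risk-adjusted estimate: ${risk_adjusted:,.0f}")

# If the business can remove a risk (say, by committing to a dedicated
# team), drop its multiplier and the cost comes down with it.
```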
For example: can you commit to always having dedicated teams on this particular product delivery effort? If you can, then maybe the risk goes down, and therefore the cost to you goes down. So give it some thought. Now, we were talking about a weather forecast, right? Everything I've just said is great, but we all have to remember that this is still only going to deliver the 10-day forecast. We haven't started doing the work yet. We're looking out into the future based on everything we know. But especially here in Chicago, we all know that the 10-day forecast doesn't always tell the whole story. So then what are we going to do? Well, the work's not over. After you've delivered the estimate to your business leaders and they green-light the project, it's still your responsibility to continue to deliver more information about the forecast.
So how do you do that? This goes back to that story of what went wrong and how we got better. There is absolutely no way we just said, oops, sorry, we were three times off. Trust me, that doesn't fly here, and it doesn't fly with the clients we work with. So we actually thought about how we could give even better information, dig down to understand the root cause of what was going on, and better inform ourselves about where things went sideways. I'm going to spare you the 50-slide presentation on everything we looked into. But the root of this is really taking a look at the data you have around delivery, whether it's healthy or unhealthy, and where you're spending your time, and using that to inform the way you do future estimates, because clearly there were things that weren't accounted for.
It might be: did you account for the time on technical design? Did you account for the time on testing? What did you miss? You're not going to know unless you can look at the data. Here's one slide from that deck. We took a look at how all of our delivery time was distributed across the many different areas that go into development. And now when we're doing our estimates, we use this as an anchor to check whether we've actually accounted for that time, say, for code reviews, when building the estimate. Looking at history is one thing, and it definitely helps inform what's happening, but how do we turn that history into an actionable forecast, not just for the estimation process, but also during delivery? In other words, what's our weather radar? Well, we've come up with a sprint health report, and this tells us, as we proceed through the delivery effort, how things are trending. How are they forecasting? What are we looking at? Well, we're looking at qualitative information from the team, and then we're looking at quantitative data.
What does our sprint burn-up look like? How much progress are we making against the total goal? And is that in line or out of line with how much we're spending? This one looks a little out of whack, right? We've only burned through about 63% of the scope, but we've spent 79% of the budget. Maybe we need to take a look at what's going on and understand where we're getting off track. You can also look at things like feature burn-up: you identify, as a group, particular features, then ask how we're making progress in each of those lanes, and whether there's an opportunity to better optimize. And then look at where the time is going. Is it being spent on development?
Is it being spent on design? Are you going through a lot of churn before actually coming to a decision about what you're going to build? How can you improve the grooming process to make sure that everything that gets to development is actually ready?
All good things to think about. At the end of the day, you have the budget, which is set by the business leaders. You have the estimate, which you come up with at the beginning, before you even start. And then you have the forecast, which needs to get regularly updated as you start doing the work and as you have more information.
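Putting numbers on that: here's a minimal sketch of the kind of check a sprint health report enables, using the 63%-scope, 79%-budget example from above (the total budget and the 10% alert threshold are hypothetical):

```python
# A naive forecast from burn-up data: if the current burn rate holds,
# what will finishing the remaining scope cost?
scope_done   = 0.63     # share of total scope completed so far
budget_spent = 0.79     # share of total budget spent so far
budget_total = 500_000  # hypothetical total budget, in dollars

projected_cost = budget_total * (budget_spent / scope_done)

print(f"Efficiency vs. plan:   {scope_done / budget_spent:.0%}")
print(f"Projected all-in cost: ${projected_cost:,.0f} "
      f"(${projected_cost - budget_total:,.0f} over budget)")

# Illustrative threshold: trending more than 10% over is worth raising
# now, not at the end of the runway.
if projected_cost > budget_total * 1.10:
    print("Forecast is more than 10% over budget - revisit scope or team setup.")
```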
I'll leave you with this. This is the weekly forecast for today. It's not looking so hot over the weekend: scattered thunderstorms on Sunday. You might want to have a raincoat, or wake up in the morning and see what Tom Skilling says. Thanks for joining us.
About the event:
Full Stack Friday is a monthly meetup hosted by Devbridge at our Chicago headquarters. We get together to indulge in a delicious pancake breakfast and talk shop. We bring in guest speakers to present talks focused on issues relevant to product people and full stack teams. For more information about the next event, contact us directly.