Block product development failures
Learn about the tactics to manage the three most common types of risks in product development: product viability, technical feasibility, and sustainable delivery. The methodology applies to a variety of product types, including greenfield, brownfield, and recovery initiatives.
Hear from Devbridge President, Aurimas Adomavicius.
Watch the talk:
Transcript:
Okay. So, risk management in product management. I'll get to how we think about risk management and the categories that we put risks in as we go through product development. But first I want to share a story about how risks actually surface in large-scale product development.
We have a client in financial services, and we're building a product... We're building it and continue building it to this day. It's effectively... Think of it as a Google Docs experience that allows large public companies to file financial data with the Securities and Exchange Commission. So you essentially format your financial performance in a digital document, and then you file it through some type of market platform.
Now, as you can imagine, building that type of experience is a sizable effort. So we go through an ideation workshop, we figure out what needs to be built, and then we actually build out the editor. And all of this is happening while the product's not really ready for prime time, because it's still in early conception stages. We validate that the product works, so we file with the SEC, and we're on a good track to build out the initial version of the product.
While all of this is happening, we're still doing internal releases. So we're releasing internally, to beta and closed-door users, but it's not in any way in a production environment. And this year, COVID hits in March. And we get a text message from our main stakeholder, from the sponsor, that he wants to talk to us about the potential of productizing the product in its current state, because a lot of the facilities being used to do this work are shut down due to lockdown.
So a lot of people who were able to come into the office and actually do this work using legacy tools can no longer do it. And I'm sure you have seen this across the board: remote work capabilities became important overnight, the moment that lockdowns came into effect. So we get this request. This has been in development for a while. It's not in production. What could possibly go wrong with a half-baked launch?
This thing is effectively not production-ready, so there's a lot of risk associated with this type of approach. Actually, if you think about product management in general, it's all about juggling priorities, making sure that you're avoiding risks, getting the maximum return on the spend for the business, and getting the outcomes that you want.
So that type of request can completely crush a product team. It could completely derail the engagement. But we did follow all of the best practices that I'm going to talk about in this session. We actually had a build pipeline set up. We had all of our automation in place. We had a very, very robust testing framework. We had performance metrics. We had everything in place to deploy to a prod instance on a moment's notice. And we were actually able to move the product into a live instance in a day.
Now, that is not a typical launch window when a new product is going to market. But we actually got this out live in a day, and internal users started using it that same day. And within a matter of a few weeks, we had the first client filings going live to the SEC. So that's a good example of how, if you do what I'm going to talk about in this session, you can be ready for these things that come out of nowhere and blindside you. You can be ready to get the product to a mature state and out the door without a lot of failures along the way.
My claim, and the general mindset that I want you to have as you think about risks in product management, is that if you think everything's fine, you simply aren't looking hard enough. You don't know what's wrong. The mindset that you need to have is one where you're constantly, actively scanning for areas that could be slipping.
And so I really like this statement from Andy Grove. The idea is that when you have a cross-functional product team working on a product, especially when the product is greenfield (early, new build, net new), the team will get into a very complacent place. They're going through the full product development roadmap or life cycle, they're iterating, they're building, measuring, learning along the way... And if you don't actively scan for inbound risks, you're not going to be ready for them.
They will take you by surprise. And surprise in product development is not something any of us want. So you need to actively scan. The recommendation that I like to make, and the practice that we like to train here at Devbridge, is that first of all, you need a set of assumptions established that you're going to check, and you need to scan for risks. And you probably should be scanning for risks every couple of Sprints.
So you have the assumptions. Are you looking at the assumptions that you established in the previous roadmap planning sessions? Are they still valid? Let's check our risks. Is our performance still scalable? Are we monitoring load times? Do we know what's happening with the system as more users get into it? And so on, and so forth. Those are just a few examples; I'll get into more of them.
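To make that concrete, here's a minimal sketch of what a recurring assumption-and-risk scan could look like in code. Everything in it is illustrative: the register entries, field names, and four-week cadence are assumptions for the example, not a prescribed tool.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Assumption:
    description: str   # what we believed during roadmap planning
    category: str      # "viability", "feasibility", or "delivery"
    last_checked: date
    still_valid: bool

# Hypothetical register of assumptions made during roadmap planning
register = [
    Assumption("p95 load time stays under 2s as concurrent users grow", "feasibility",
               date(2020, 9, 1), True),
    Assumption("Internal filing teams will adopt the new editor", "viability",
               date(2020, 8, 15), True),
]

SCAN_EVERY = timedelta(weeks=4)  # roughly every couple of two-week Sprints

def due_for_review(items, today):
    """Return assumptions that are stale or already known to be invalid."""
    return [a for a in items if today - a.last_checked > SCAN_EVERY or not a.still_valid]

for risk in due_for_review(register, date.today()):
    print(f"[{risk.category}] re-check: {risk.description}")
```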
Generally speaking, I categorize risks into three large buckets. There's the product viability bucket, and that really has to do with are we building a product that's actually going to drive the outcomes for the business? Or are we just kind of...
Then there's technical feasibility, which is all about can technology accomplish what we're looking to do? And is it mature enough to be able to deliver on the promises that we're making?
Then lastly, there's evergreen delivery. So are our processes, is our hygiene, in a good place? The example I gave you with a client in the beginning: we had our hygiene constantly being monitored and fine-tuned. And so the moment that a risk surfaced, or the moment that we got derailed, we were ready for it. We had all the tools and processes defined and in place to be able to react to those unforeseen risks.
So let's talk about product viability. The main risk with product viability is that you build the wrong thing and no one uses it. No adoption, no value. Another risk is that the business themselves, the people who bring the money to the product team and say, "Listen, build me this," may not be clear on what they want. They may not even know what they want. And sometimes people will say one thing, but the actual need of the business is very different.
And then lastly, if you can't effectively measure whether the product is succeeding, then how do you know whether you're a success or a failure, in the context of product development? Now, by the way, I have an example. I was thinking about what analogies I could use in this session. And obviously, I am somewhat handicapped in terms of my hair volume. And so I found this, which I thought was hilarious. Especially in the context of COVID, I thought this was absolutely amazing.
Before the Flowbee, only a skilled professional could produce a good, layered haircut. Tens of thousands have been sold to satisfied customers. Why? Because it really works!
Proper suction is the key to getting great looking hair.
Oh, this is just great. I want a Flowbee. Well, I don't need a Flowbee, but I wonder if Flowbee... I wonder if they were too early to market. Like Microsoft with their tablet, versus Apple with their tablet. I suspect if Flowbee released in March of 2020, they would be a huge hit. And maybe someone can look it up and let me know if Flowbee still sells their product out there. Maybe they still exist. I don't know. I didn't do that much research.
But anyway, when talking about product viability, that's the idea I'm getting at. First, you probably should run an exercise called a service blueprint, and you should figure out where the product is going to live within that whole service ecosystem. What is the system in which the product exists? And does it actually solve the problems that you see in the service blueprint?
The next thing you really need is a product canvas. And as you may know if you're in product, a product canvas is your guiding, single-page document that tells you what success looks like for the product: the roles, the outcomes, the metrics that you use. And obviously, you need to develop this product canvas and then refine it as you go and develop the product. To do that, there are many different activities you take on to inform those themes. You could do interviews with stakeholders. You could do observational studies, heuristics. You could analyze the tasks that are being completed for the particular problem that you're trying to solve.
And so if you look at the way we start this process of discovery and definition, when trying to make sure that the product being built is useful, we go through this life cycle of... Well, hey. Can we gather information? Can we maybe run some experiments to validate some of that information we gathered? Because everyone is going to be biased. You interview an internal user, they're going to be biased by their own predispositions. They're used to working a certain way. Everybody hates change.
Then maybe building a prototype, validating that the prototype works the way it's supposed to, and making sure that the technology we're proposing to actually turn it into a product is feasible. And then, and only then, you get to road-mapping, and actually breaking out features, estimating, and assessing the risk. If you do this correctly, you come up with all the facts that can be used to align the team, to make sure that everyone on the business side and in the product team is looking at the problem the same way and understands it the same way.
And that brings me to my second point. In many, many cases, the inherent problem is not that the product team doesn't want to build a great product; it's an inherent problem of organizational structure in the business. Probably 80 to 90% of all enterprises out there are structured so that the business, the technology group, the marketing group, and the customer groups are all separate. There are walls, in a way, between them. So ownership is distributed and there's no collaboration. The business owns the budget but IT owns delivery, and vice versa, and it becomes a very, very unproductive environment.
The alternative to this, the model that we use and help our clients implement when we work with them, is a cross-functional product team model where you have product tracks, and the cross-functional product teams own both the budget and the outcomes. It's also very important that the cross-functional product team owns a vertical cut of the product instead of a lateral cut. In other words, if I were to rephrase this, you don't want a product team that owns just front-end development, or a product team that owns just the web services or microservices layer. That way you create a lot of dependencies, and then you introduce a lot of risk through that...
A team having a dependency on another team to actually get their functionality out the door. So we always recommend vertical cuts. From a stakeholder perspective, and for mitigating some of those risks with the classic enterprise organizational structure, if you can't change the organizational structure right away (maybe that's something you do as you demonstrate success with a product-centric approach), at least make sure that you include the stakeholders and the sponsors in definition and planning. You also make sure that they actively participate in demos.
I always say that the sponsors and the stakeholders need to take ownership of the software that is being shipped in the demo. If there's working software being handed off, stakeholders need to sign off. There needs to be a very tactile approach to that demo. They need to take ownership of it.
And then hopefully, the success of using this model also leads the organization to change the way it orchestrates itself internally, and that should help avoid a lot of the risks of misalignment. Lastly, I briefly touched on this in the beginning, but you need to know what success actually means and what it looks like when you're building a product. So you need goals.
Every single product needs goals, and I would even recommend that when you're looking at epics, features, and maybe even individual stories, you should be able to trace each story to a particular product goal or outcome that you're driving. And when you talk about goals and outcomes, they need to be specific, measurable, and achievable. You can't come up with a goal like, "Well, we're going to launch a new product, and we're going to take 80% of the market share." Those are vapor statements. They're not real. They're not grounded in anything.
But look at something very pragmatic instead. Say, in our network we have 500 doctors, and I want to monitor the number of active users from our network. So of those 500 doctors, I'm going to actively look at how many are signing into this mobile application for patient scheduling. And if I can see positive trending in the adoption, that is a very, very actionable, specific way to measure the success of the product. So it's a great way to incentivize the team to drive toward particular outcomes.
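As a rough illustration of that doctor-adoption metric, here's a small sketch; the weekly sign-in counts are invented for the example.

```python
# A minimal sketch of the adoption metric described above.
NETWORK_SIZE = 500  # doctors in the network

weekly_active_doctors = [40, 55, 85, 120, 160]  # distinct doctors signing in, per week

adoption = [round(active / NETWORK_SIZE * 100, 1) for active in weekly_active_doctors]
deltas = [later - earlier for earlier, later in zip(adoption, adoption[1:])]

print("Adoption % by week:", adoption)           # [8.0, 11.0, 17.0, 24.0, 32.0]
print("Trending positively:", all(d > 0 for d in deltas))
```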
And as you can tell, self-governance is incredibly important when you're building products: that intrinsic team accountability and self-governance. All right, the second category is technical feasibility. Well, first of all, can it be built for the money that we have? Most things can be built. If you have enough time and money, you can solve most problems with technology. But the amount of money you may need to spend could be absolutely prohibitive; there's never going to be an ROI model that supports that type of investment.
Another thing: you want to do ML, machine learning, and you have no data. Well, data is a prerequisite for starting to play around with ML. So don't build an innovation team that's going to use ML if you haven't been capturing and collecting the data you need to build the models. And then the last thing: there may be pre-existing conditions that you have no influence over. Maybe there's a monolithic architecture in place right now, and when we build this new product, it just won't scale. There are going to be too many dependencies.
As I mentioned, we're on the theme of hair products, so I found this one. It's called Great Looking Hair, and it was popular in the '90s. Essentially, it's spray-on hair. It sounds really appealing, and they even said you can go swimming with it. Again, I don't know. But technically, I think it worked, so they sold some. Maybe you can do more research and see if this thing is still being sold on the market.
So, looking at the technical risks and avoiding them: what do we scan for? Well, we look for affordability. We look for maturity in the technology we're looking to use. And then we also look at sustainability: is what we're doing actually sustainable from a technology management perspective? I mentioned this before: if we have a dependency on a legacy platform and we scale beyond a certain threshold, that may be a risk we need to actively monitor, because that dependency will kill us long-term.
The thing with technical feasibility, and this is different from the product viability risks, is that if you get the product wrong, well, there's no recovery from that. That's a fatal failure. If you get some technology wrong, that can probably be fixed through refactoring. So usually, technical risks manifest themselves in the magnitude of dollars that will be necessary to right the ship. It's never a fatal failure when you talk about technical feasibility, unless a really, really poor decision was made in the architecture design phase. So we always look at how we can assess the technical risks through the perspective of cost.
And the problem, and I'm sure you've all run into this, is that with estimation in technology, especially when you're building bespoke, completely net-new experiences, your accuracy drops off significantly beyond a million bucks. And that's a ballpark number that we use. But what we try to indicate is that if the roadmap is longer than six months, two quarters, planning and estimation completely go to the dumpster.
It's all guesswork at that point. So you need to figure out how to pursue some type of financial estimation and some type of risk assessment, knowing that you can't cross a certain threshold and retain accuracy. So I want to talk to you about greenfield products, and then about brownfield products, where it's a combination of old and new. With greenfield, the way we look at this... And by the way, this is the easiest one to work with.
Well, you need four to six months of roadmap, and it should be a shippable product. So you should have clear outcomes, and you should be aligned with the business that this thing can actually go out the door. If you have that, you just go into Dual-track Scrum delivery with specific milestones, and then you release on those milestones. If you can't get a clear backlog, clear outcomes, and an aligned group, that may mean you don't really know what the product should be.
And sometimes with large or innovative initiatives, where a business is launching a new line of service, you may not have that data. So you need a version of the product out the door before you can collect the data and then refine what your roadmap is going to look like. In that case, we recommend you go through the discovery and definition process I talked about in the product viability section. Then you re-workshop: establish that four-to-six-month roadmap, go into Dual-track Scrum, and get it out the door.
Now, brownfield. Building on an existing base, dealing with existing technology that's probably outdated, and so on, introduces an immense amount of risk. And the risk management tactics for technical risks are going to be different than the ones you use in greenfield products. When we start a brownfield engagement, or we pick up legacy and have to either refactor, build on top, or build next to it, we look at two aspects.
One, do we have enough materials to actually understand what it is? Do we get it? Is there architecture documentation? Do we understand the logic? Is the logic understood by individuals, or is it baked into the code, and no one actually knows how it works? The second part is: if you look at, say, a legacy platform or an older product, does it actually work? And when I say, does it work, I mean from an engineering perspective. Can you deploy a new feature, and the build process works? Can you stand up a new environment without this thing catching fire?
You need those two pieces because they lower your risk dramatically. If you don't have one of them, then it's a complete shitshow, and you have to take a completely different approach. So when we look at brownfield, it's really about these three areas of risk that you need to assess. You need to look at the delivery process: CI/CD, do you have automation in place? You need to look at the architecture: do you understand whether there are components that may not scale, or whether there's business logic baked in, like I said? And then lastly, how mature are all of the development processes? I'll talk about that in a second. How mature are you on, for example, performance monitoring? Code analysis? Things like that.
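One way to make that brownfield intake concrete is a simple checklist across the three areas. This is a hypothetical sketch; the questions and structure are examples, not a standard template.

```python
# A hypothetical brownfield intake checklist covering the three areas above.
checklist = {
    "delivery process": {
        "CI/CD pipeline exists": True,
        "Automated tests gate the build": False,
        "New environment can be stood up on demand": False,
    },
    "architecture": {
        "Current diagrams (conceptual/logical/physical) exist": True,
        "Scaling limits of key components are known": False,
    },
    "process maturity": {
        "Performance monitoring in place": True,
        "Static code analysis in place": False,
    },
}

for area, checks in checklist.items():
    gaps = [question for question, ok in checks.items() if not ok]
    print(f"{area}: {len(gaps)} gap(s)", *[f"  - {g}" for g in gaps], sep="\n")
```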
So with brownfield, we take a slightly different approach. If it's not a really large scope, let's say it's less than half a million dollars to deal with whatever we're dealing with (maybe building features, maybe refactoring a component of the system), then we can do a bottom-up estimate. As I was showing in that accuracy chart, you can probably assess pretty accurately the amount of work that needs to happen within that type of constraint.
If it is larger than half a million dollars, it's probably a large scope, probably relatively impactful, and there are probably many [inaudible 00:22:38], so you need to go through a discovery and definition process. And there's also a conditional statement here: is the scope clear and owned by our team once we get through this process? Because what has happened for us in the past is, we worked on this compliance product and we realized that even the business, and the stakeholders on the business side, did not know how the product worked.
They thought they knew, and they had some assumptions, but those were not in alignment with what the actual code was doing behind the scenes. There was a bunch of business logic baked in behind the scenes. And when you try to manage those risks, you simply can't do it in a well-defined way. So in those scenarios, what we do is switch to a full time-and-materials model, almost like ongoing discovery. Use Dual-track Scrum with specific milestones, use the build, measure, learn loop, and re-workshop along the way to de-scope and simplify the problem.
My argument is that if you have a domain or a level of complexity that is really, really high, and you're trying to manage and plan for all of it at once, you're going to fail. So the target of any of these engagements is to break the complexity down into smaller chunks so that you can manage the risk. Because again, it's not that you have to do this. You don't have to de-risk it, but then things are going to go sideways: you're going to spend more money, you're going to be delayed, and people are going to be blamed for it. So you choose which way you want to proceed.
Yeah. In terms of the analysis, I mentioned a little bit about de-risking the technical side. Make sure that you have a variety of technical diagrams present, from the conceptual level to logical and physical. Do you actually understand what the brownfield project is going to be doing, or what the existing system is doing? Are these documents recent? Are they available? What about all of the engineering best practices that are scattered across a variety of categories?
I mentioned this: security. Are you actively looking at security? Does the current product support these things? Are you actively monitoring performance? Are you logging what's happening in the platform? Those logs will tell you whether there are risks heading your way. So again, if everything is fine, you just don't know what's wrong. You need to be actively scanning. You need to be using that defensive mechanism of scanning across these categories.
The last category is evergreen delivery. This means actually looking at how you're building the product, and constantly refining not just the product itself, but also learning about, monitoring, and improving the way that you build the product, meaning the methodology being used. I don't have an explanation for this. I think this is a joke. But if I had a choice, and if I could get my hair styled, I would rather get it done by a cat than by a human. I think that's a good value proposition.
The thing that I want you to think about with evergreen delivery is that there is no stable state. That doesn't exist. Things don't stay well on their own; there's always a degradation of experience. So what we recommend and actively do is keep track of not only the product backlog, but also a product debt backlog, as well as a technical debt backlog.
Not only should you track them, you should also actively estimate your debt. Now, think about that for a little bit. Just like you estimate your backlog, you should estimate your product and technical debt backlogs.
Why is that important? Well, if you estimate, then you have a known quantity. You are aware of what is happening with that particular debt. I'll talk about this a little more later. But what you're doing, effectively, is saying: I have a certain amount of debt, and I'm estimating it, so I know it's accumulating. And that could be story points being used there on the left.
And because I know that it's growing, I'm also going to allow for some time within my active Sprint to actually resolve some of the debt. And this could be mix and match: you could resolve some technical debt and also address some of the product debt, some of the features that are absolutely necessary. As long as you're doing this, you are in control of the debt. But again, remember the conditional statement I made: you need to estimate it. Because if you don't estimate it, then you don't know what's happening. And the reason this matters, and most product teams run into it, is that this is what actually happens with the debt.
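A minimal sketch of what that looks like in practice: estimate the debt added each Sprint, track what the team actually resolves, and watch the outstanding total. All numbers here are invented.

```python
# (new debt identified and estimated, debt resolved in the Sprint), in story points
sprints = [
    (8, 3),
    (10, 5),
    (12, 5),
    (9, 6),
]

total_debt = 0
for sprint_no, (added, resolved) in enumerate(sprints, start=1):
    total_debt += added - resolved
    print(f"Sprint {sprint_no}: +{added} / -{resolved} -> outstanding debt {total_debt} pts")
# The team resolves some debt every Sprint, yet the estimated total keeps climbing,
# a pattern you only see if the debt is estimated in the first place.
```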
So you are actively resolving debt, but it continues to grow. And that's why I said it's incredibly important to estimate and reflect back on the accumulation of debt. Otherwise, you may be rolling downhill into a problem without monitoring or scanning for it. Another good practice is to monitor escaped defects versus resolved defects. Now, it's inevitable that any product team is going to have some escaped defects; some are going to get into production. But this should show you what your velocity is for resolving critical issues.
So can you actually resolve production issues immediately as they happen? If you can, that's probably an indication of a healthy pipeline; you have all of those things orchestrated. If you cannot, that's a different story. What I'm also showing here with the red arrow is the trending. This has positive trending: fewer and fewer defects are escaping into production, which is the type of trending you want to see for a product. If it's the other way around, where more and more defects are escaping, then you can start looking at your testing strategy.
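Here's a small, hypothetical example of that escaped-defect trend check; the per-release counts are made up.

```python
# Escaped defects (found in production) and how many were fixed quickly, per release
escaped_per_release  = [14, 11, 9, 6, 4]
resolved_per_release = [12, 10, 9, 6, 4]

escaping_less = all(later <= earlier
                    for earlier, later in zip(escaped_per_release, escaped_per_release[1:]))
resolution_rate = [fixed / escaped
                   for escaped, fixed in zip(escaped_per_release, resolved_per_release)]

print("Escaped defects trending down:", escaping_less)
print("Share resolved quickly:", [f"{rate:.0%}" for rate in resolution_rate])
```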
The other thing you can look at is internal defects captured and resolved. How should I say this? It can show the team's ability to solve its own problems. If defects are being captured internally by the QA members of the team, and the team is resolving them, that's fine. But if you see the trend going the wrong way, where there are more and more internal defects, that could mean your engineering contributors are relying on QA too much to solve their problems.
So again, all of these insights allow you to monitor the health of your delivery. These are risks you want to keep your eyeballs on. What we also do is look at the effort breakdown for any given story. So think about a story, say we're implementing a login form. Because we track our time down to 15-minute increments (we track everything that we do and categorize it), we can look at the breakdown of effort being made on a particular story, and not only look at it as a snapshot in time, but look at the delta through time.
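A rough sketch of rolling those 15-minute time entries up into an effort breakdown per story; the categories and hours are hypothetical.

```python
from collections import defaultdict

time_entries = [
    # (story, category, hours)
    ("login-form", "technical design", 6.0),
    ("login-form", "implementation", 10.5),
    ("login-form", "testing", 4.25),
    ("login-form", "code review", 1.75),
]

breakdown = defaultdict(float)
for story, category, hours in time_entries:
    breakdown[category] += hours

total = sum(breakdown.values())
for category, hours in breakdown.items():
    print(f"{category}: {hours / total:.0%}")
# Comparing these percentages Sprint over Sprint (the delta through time) is what
# surfaces drift in where the effort actually goes.
```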
And this is a real story from one of our clients: we saw that technical design was taking up more and more time, to the point where a quarter of the time on a story was going to technical design. And we asked, "Why is that happening?" So you can start diagnosing a problem before it becomes a real, real problem. Velocity: take a look at velocity and monitor it. What I'm trying to show here is that velocity can be gamed, but you want to look for artifacts that help you, again, avoid issues in the future.
Seeing an example like this, with volatile team velocity, may give you an insight that you need to look into why it's happening. Is it an estimation issue? Is it an over-committing issue? Stories completed over a period of time is another really good report: is the team getting more effective at getting features out the door over time? For example, say you're running an infrastructure project or a DevOps initiative, and you're trying to improve your pipeline, to monitor and de-risk your delivery. If you're not seeing this type of positive trending in the team's throughput, then that effort probably should be questioned.
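If it helps, here's a quick sketch of flagging that kind of velocity volatility; the Sprint numbers and the threshold are arbitrary.

```python
from statistics import mean, pstdev

velocities = [34, 18, 41, 22, 45, 19]  # completed story points per Sprint

volatility = pstdev(velocities) / mean(velocities)
print(f"Velocity swing: {volatility:.0%} of the average")
if volatility > 0.25:  # an arbitrary threshold for this sketch
    print("Worth digging into estimation quality or Sprint over-commitment.")
```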
And then the last one that I want to mention: you should be looking at the time to market for any feature. Say a feature originates in the build, measure, learn loop: you measure, maybe you learn, and maybe a feature originates at that point. You say, "It would be really useful to allow the user to cut their own hair." Well, how long before you can ship that to prod? You should monitor that delta as you mature the product. Because what you will see, and this happens all the time, is that greenfield products have a really, really low time to market, and more mature products will see that time increase. But if you keep your eyes on this and actively monitor these values, you will prevent that technical debt accumulation long-term.
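And a small sketch of the time-to-market measurement itself; the feature names and dates are made up.

```python
from datetime import date

# feature -> (date it originated in the build-measure-learn loop, date it shipped to prod)
features = {
    "self-serve haircut": (date(2020, 3, 2), date(2020, 3, 20)),
    "bulk filing export": (date(2020, 5, 11), date(2020, 7, 6)),
}

for name, (originated, shipped) in features.items():
    print(f"{name}: {(shipped - originated).days} days to market")
# Watching this number creep up as the product matures is an early signal of
# accumulating product and technical debt.
```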
So anyway, as I said before, if everything is fine, you just don't know what's wrong. Hopefully, some of the things I showed you gave you ideas for where you can go and do some analysis, or dig into some areas in your own delivery process, your product design, and so on. And actively scan. You've got to scan across product viability, because then you're going to be building great products. You want to make sure that the technical implementation is sound and maintainable, and that your delivery cadence is predictable: a well-oiled machine where features go in, code goes out, and it's flawless, with minimal defects, and so on.
So scan the perimeter for risks across those three categories. If that's all you take away from today's session, that's what I wanted you to remember. And then if you do that, you're going to have a wonderful hair day like these ladies from the '80s. Phenomenal.
Check out sourceryacademy.com. There's a digital copy of this book there, so you can read it online. You can also look at devbridge.com. We're always hiring. Even during a COVID year, we grew 30%, and we plan on growing next year as well.
Thank you so much for being here. We're going to raffle off a book so you'll get a copy. Feel free to reach out to me directly. I love talking shop and talking product. Thank you for listening. Hope this was entertaining and informative. Signing off.
About the event:
Full Stack Friday is a monthly meetup hosted by Devbridge where we share insights on product development, design, and process. The talks focus on issues relevant to product people and full stack teams. For more information about the next event, contact us directly.