Data-informed product delivery
Analytics, metrics, big data, machine learning… We are in an era of more available information than ever in digital product delivery.
How do you find the right data, track it back to a meaningful outcome, and use that data to inform priority for product development and investment?
We’ll look at effective ways software products have leveraged data, the best time to incorporate it in your process, and how to focus on meaningful statistics to make the best decisions informed by your experience and institutional expertise.
Watch the talk:
Transcript:
Good morning. Thank you for coming out and joining us for Full Stack Friday at Devbridge. My name is Chris Wilkinson. I am the Director of Product Design here at Devbridge. And I'm excited that you were able to come out for our inaugural Full Stack Friday. These meetups are designed to give you breakfast as well as some actionable things that you can take back and put into motion with your team. Today is about data-informed product delivery: getting information that you can trust at the right time for the best results. And data is a big subject, with all of its different sources, qualities, and ways to store it. So, I thought we'd start by getting a little bit of perspective on the problem. That's, hold on, that's too much perspective. All right, this is a little better.
This is where our story begins, with a little bit of an anecdote about a time that I slept in this field. And just across the tree line, my new best friend, Ozzy Osbourne of Black Sabbath, also spent the night. And you might be asking yourself, "Chris, why are you sleeping in a field across a tree line from Ozzy?" Well, that's because we were all camped out for the eclipse a couple of years ago. And the eclipse is an event. Thanks to science, we know the exact time and place to be to see it. We have all of the right data. However, there are some more immediate factors, namely the weather, that are a little bit more unpredictable. And so clouds came into play the day of the eclipse for many people in North America trying to view it. And we're right here, right in the viewing line.
However, we had some weather to contend with and figure out what to do about, because if we missed it, our next shot would be in 2024, where, weirdly enough, the city we were in is the center of an X formed by the two eclipses' paths. Now, rather than waiting until the next time the eclipse comes around, we decided to use the tools at our disposal. And might I recommend bringing an amateur weather person with you anytime you go to see an eclipse? Because this was huge for us. You see, we set a metric backed by reliable data. The metric was to successfully see the eclipse, and the data, the time and the place, let us make informed decisions and respond to the weather. The day of, we wake up in the morning at our campsite, wave to Ozzy, obviously, and we're all hanging out having our campfire coffee, and we're looking at the weather forecast and it's looking rough.
So we go up the street to a park and we eat some biscuits and gravy out of a shack, which I highly recommend. If you're ever in rural downstate Illinois and there is a shack in a park selling biscuits and gravy, you absolutely eat it. It's fantastic every time. But we're sitting there, having a conversation, because we have to actually change plans. The cloud cover is coming in in such a way that if we don't give up our plans of reserved seating at a special spot, with viewing glasses and everything covered, we think we're probably going to miss it. So luckily, we did make that call. We did get to see it. We ended up seeing it in the middle of a baseball field. For those that haven't had the opportunity to see an eclipse, I recommend making the trip at some point in your life to see one. It's truly breathtaking. These two photos were taken just minutes apart.
It's like sunset all around you, or sunrise all around you, all at the same time. You look up in the sky and here's the sun blocked out by the moon. It's just displaying these characteristics you could never see at any other moment than this very moment. Now, obviously this is a pretty big event. So we had another opportunity to showcase our ability to respond to real-time information when we totally ignored Google and Waze and chartered our own path back to Chicago. And because we did that, we made it back in just a little bit over five hours, whereas most people found themselves just sitting in traffic on the expressway for about 10, making their way from Carbondale back to Chicago in a way and other places north.
Now, here's the free little bit of data-informed decision making you can take. 2024 is your chance. If you're in the path of the next eclipse to see it, I recommend that you make the trip. Now, on to metrics. How does this all connect?
Well, all this stuff I was talking about, about this trip and the decisions that we made, and if you think about trips you've made, you probably feel the same way. You're thinking, "Well, Chris, this is stuff that I do naturally. This is all things that just make sense to me. Of course I'm doing it." And you're right. We, as people, have an ability to take in information, derive patterns, and make decisions very quickly. It's an evolutionary advantage. It builds on top of our instincts and the stories we've been told across our lives. So why does that play out on a software team, as well? Because it's all people involved in building a digital product, too. Let's talk about some ways to make sure that you're able to build on that momentum for the digital products you're working on today.
First, let's talk about what makes a good metric. A good metric is specific. It is measurable and it is achievable. You don't want to set the goal where you can't get it. You don't set a goal you can't measure. And you don't want to set something too vague. Now, a great metric is something that is comparative to a current state or a future desired state, something that is easily understandable and doesn't require a half-hour lecture to understand why the metric matters and it is behavior changing. So when you have a comparative, understandable and behavior-changing metric, you are able to drive progress in a meaningful way, but it's not these things alone. You also have to understand the source of the data, where the data is coming from and its quality.
So what are the qualities of the data that you get? Any information you get can be considered either qualitative or quantitative. Qualitative data is something you're ascribing a narrative to based on experience. Quantitative data is the hard numbers. That data can be gathered either in an exploratory fashion, where you're out discovering what's happening in a specific space, or it can be something that's reported out to you. A lot of the activities that we do as a software team can be charted against these two axes. So if you see here, we have a handful of different activities, with user profiling, understanding your user base, being the most quantitative, reported-driven item you can do, all the way out to focus groups, being the most qualitative, exploratory activity you can engage in.
For the other ones up there, we have social mining, or looking through social media, which is obviously going to be a fairly subjective activity, but you can get into analytics, data mining, and A/B testing to bring more quantitative results to bear. It's important that you understand which activity you're using and why, and whether or not you have the right quality of data behind it. A little pro tip here: when we're talking about any kind of user testing, five or seven participants, I'm in the five camp, there's also a seven camp, is the number you need to have meaningful data. Anything more than that doesn't really have an impact until you're in the hundreds. So, with that small group, there really isn't a large investment you have to make in order to get better information to make decisions from, better information to drive your product forward.
Now, the data's here, we have it. Well, let's understand: is it really the right kind of data? What I mean by that is, is it a vanity metric or is it something we can actually use to drive action? Speaking of action, there's a recent release in the action film saga of the Marvel Cinematic Universe, Captain Marvel. And I don't know if you heard, but before the movie even came out, before anyone had seen the film, there was a rating of it on Rotten Tomatoes based on how many people said they wanted to go see it. A nefarious group of internet citizens that had nothing better to do decided to down-vote it, so it had only 23% saying they wanted to go see it. This was a historic low for a big blockbuster. News outlets started to pick it up and say that the Marvel Cinematic Universe was doomed and the sky was falling. To their credit, they stayed the course. They knew the quality of the product they had made and the reputation they had built up, and it resulted in a $456 million opening weekend, which is good for sixth best all time.
Now, if I'm the studio producing a film, I'm going to care a lot more about my opening weekend numbers than I am about a bunch of random people rating me on a website. That's a vanity metric versus the thing that's actually actionable. And you might be wondering, well, how actionable is $456 million? Well, it's enough money to buy 105,000 tons of tomatoes that you can allow to go rotten on your own if you want. That's pretty actionable. When you look at the 23%, you could say that's the leading indicator and the 456 million is lagging, it's after the fact. The event has happened. It's important to know where that metric is coming from and to understand how much of a predictor it is. The Rotten Tomatoes score was a leading indicator that might have said doom was coming. However, they partnered that data with their experience to say that things were going to be okay. And the lagging metric of box office open proved them to be right.
When you're thinking about how do I decide if something is the leading or lagging indicator? Well, you just look at something in a timeline. You have an event, any event. Things happen before the event. And then things happen after the event. What does this look like for software? Well, if you have a lot of help tickets coming into your platform and you notice that maybe even your total active users starts to drop off, then those two might be related. Your leading indicator of help tickets allowed you to predict there's potentially a lagging indicator coming up, idle users in real trouble and it gives you time to react. That's why it's so important to have both and to look at them in aggregate over time.
Next, look at the metrics that your team creates. There's the stable velocity of your team delivering the product, which is a very, very good thing, measured against the stories that you actually deliver, which is also good; delivered stories are completed work. You can think about it this way: what are the engagement moments that are happening early, and how are those items driving your overall conversion? When you look at this in context and you think about the total life cycle, you can actually understand this data in terms of how you fund a product, the engagement that you expect to see, and the conversions that are meaningful, and then use those data points to determine the success of your efforts and drive that toward reinvestment. See, when you actually pull all the data together, not only is your product better, but you can make better organizational decisions about how you maintain that product in your portfolio.
Now, another thing I need to point out here is the danger of confusing correlation with causation. There is no better way to understand correlation and causation than through the lens of the American movie star hero himself, Nick Cage. We're going to think about Nick Cage in terms of people who drown in swimming pools. You see, the number of films Nick Cage appears in correlates with the number of people who drown by falling into a pool. I want to thank the website Spurious Correlations for pairing this data together. If we really take this to heart, the message to take away is that if Nick Cage is in four movies, 120 people will drown, so get him off the silver screen as quickly as possible. Now, if I try to make that argument, you're going to pretty quickly say, "Chris, hold on." And we understand it anyway, because those two things are so far apart naturally that we're really not going to buy it.
Our brain does the thinking-through process, and then we go, "Probably not." In the same way, if you were to see me on the day of the eclipse, you'd probably infer that I was at the eclipse and not starting my own Beastie Boys cover band. That's probably not a good use of my skills, either. See, you have to understand where the data is coming from and its qualities. Having quality data is important. Understanding the qualities, or the attributes and sources, that make up your data is equally important. I think too often teams will take in information, write the narrative that they want, and then find the data to support their narrative. It's a much healthier behavior to look at the data you have, agree on the data that's important, and then construct the narrative once you have the information at hand.
Now, I want to contextualize this a little bit in terms of software delivery. We did some work with a healthcare service provider that needed to build a secure, HIPAA-compliant chat application: real-time communication across multiple devices and multiple locations. Really, the chat isn't the core piece here. The chat is a tool. Really, it's about how you orchestrate 6,000+ people around a central task of staffing and managing an emergency room. The HIPAA-compliant chat application has to live in the day-to-day operations of the hospitals and the people that staff them.
So, what we did to get great data to understand the landscape was execute a service blueprint. For those that haven't engaged in a service blueprint before, what a service blueprint allows you to do is understand all of the individual aspects that take place in a larger system, weigh their impacts on people upstream and downstream from the product, and measure the overall impact. In other words, if you take a moment and really think about it, any digital product that you make exists in a larger whole; it exists outside of the product itself. It's important to take time to get that understanding.
Through building up that understanding, we were able to identify the pain points and measure those against the business: documenting every step, documenting the pain points, and overlaying those with the business desires to identify an initial three-month MVP. We agreed that would be an appropriate amount of funding to give against key metrics for whether or not a cloud-native, cross-platform chat application created the lift they wanted to see in how shifts were being filled. So secure messaging alone is not enough. It's about the outcome that is generated. Going back to what great metrics are, they're comparative, they're understandable, and they're behavior changing. So let's take a look at some.
Obviously, having 10,000 messages exchanged in a week is a pretty impressive number for a limited group of people. But really, the key metrics that matter here are the responsiveness you're getting from the healthcare providers and the core shifts that are being filled, the outcomes for business operations. There are other cost efficiencies and other metrics that are important to consider as well, but that's really the key arc to drive product decisions forward: track that adoption over time. Then, once that lift is seen, identify additional phases of work, additional releases that you can do to get a product to market in a meaningful way, while you're regularly demonstrating value to those that aren't necessarily plugged in every day. You actually have the pieces that you need to drive a larger conversation, to drive a larger narrative with various stakeholders of the business.
From there, you can build out other interfaces or feature sets. Here, we focused next on administrative interfaces for the logistics side, and eventually worked through a multi-phase portfolio across four different products that all work together to support the underlying aspects of the business. That happened by underpinning the entire product effort with good data and with transparency in how it is reported. I think those are the two key things you need to do to be successful in communicating about a product that you're working on: clearly communicate the outcomes that you're driving in a consistent, regular way from the very beginning. That way people see that through line of value generated.
By integrating metrics into product funding, you can amplify your impact. If we look at the life cycle of product delivery, there's a discovery phase, a development phase, and a measure-and-learn phase. This graphic that I have up here is an overview of the major milestones that we employ here at Devbridge to ensure success. You can also look at this through the lens of data. Understand that the measure-and-learn phase is not the data phase. Data is an underlying requirement of every step of the software process. You see, when you're learning, you want to find what's measurable and baseline the metrics.
From there, you can converge on what's possible and prioritize where you need to start. You bake in analytics throughout the development process. Then our data informed the development that you do in that you're actually leveraging metrics like velocity. You validate that data in the measurement phase and then loop it back in to prioritize where you go next. So you actually have the evaluation period and then intentional moment to understand what it means to collect the data that you've collected.
Now, really, really important in all of this is to treat data as a functional requirement of building great digital products. If you treat data as a non-functional requirement, you might find that it's not functioning for you. How do you do that? Well, it starts with the kickoff of any digital product. For us here at Devbridge, we have a workshop where we bring all the stakeholders together in a room for a day, sometimes even as short as an afternoon, to really converge on what's important and to understand how we can measure success. Our agenda usually looks something like this. We talk about goals. We talk about risks, internal, external. We talk about the roles in the system. We talk about user journeys. We talk about the priority. We identify ownership and stakeholders. And we identify any open questions that we may have to start delivery.
That's what the agenda looks like, but really it's about the outcomes of these conversations. It's about the outcomes we want to generate by achieving the goals. It's about what mitigations we can put in place to minimize risk. It's about the functions individuals play in a larger effort, more so than their individual roles. The journey is important, yes, but what are the dependencies on that journey? Any prioritization we do should come with an expected level of impact, so we can start to benchmark any metrics we might have. From there, there's clear assignment of responsibility and distribution of duties. Then the prioritization of questions gives us a clear place to begin discovery. The reality, though, is data, data everywhere. So how do you know what to focus on?
Your team is generating data nonstop: points versus days, how should we measure our progress, velocity versus volatility. I'm seeing some faces when I say volatility. I'm not sure how many of you have used it before, but volatility is a measure of the predictability of a team's velocity over time. In other words, velocity is not meant to be a measure of individual team success. It's a measure of predictability. And a stable velocity is more valuable than a velocity that is occasionally very high and occasionally very low; that's very volatile. So if you measure the volatility, the variation around the running average of your velocities, you have an understanding of how predictable your team is. Volatility then becomes a very healthy lagging metric that allows you to make leading-metric-led decisions based on how predictable your team is.
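Here is a minimal sketch of one way to put a number on that, treating volatility as the variation of sprint velocity around its average (a coefficient of variation). The sprint numbers and names are made up for illustration; you would pull your own history from your tracker.

```python
# A minimal sketch of volatility as variation around average velocity.
# Sprint numbers are hypothetical; pull real ones from your tracker.
from statistics import mean, pstdev

def volatility(velocities: list[float]) -> float:
    """Standard deviation of velocity as a fraction of average velocity."""
    avg = mean(velocities)
    return pstdev(velocities) / avg if avg else 0.0

stable_team = [31, 29, 30, 32, 30, 31]    # predictable sprint-over-sprint
volatile_team = [12, 55, 20, 48, 15, 60]  # similar average, wild swings

print(f"stable team:   avg {mean(stable_team):.0f}, volatility {volatility(stable_team):.0%}")
print(f"volatile team: avg {mean(volatile_team):.0f}, volatility {volatility(volatile_team):.0%}")
# A lower percentage means a more predictable team, which is what makes
# velocity useful for forecasting dates and budgets.
```

Two teams can share the same average velocity and still differ wildly in how much you can trust a forecast built on it; the percentage above is one simple way to surface that difference.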
Are you going to hit the date? Are you going to stay on budget? Obviously, those are considerations that you need to have, and those are decisions that shouldn't be made in a vacuum, because who hasn't seen something like this: a high-velocity team, but also a growing backlog of requirements that keep coming in, and a fixed date. How do you manage this? Well, it's all about transparency. It's about transparency in the data, and it also means transparency about what the team is doing. For that reason, we developed a mobile application called PowerUp, which combines financial information, team reporting, and JIRA stories in a single view. It allows any project stakeholder, any external stakeholder, the enterprise CIO or CEO, to see what's happening with the team and drill down all the way into the specifics of an individual story if they want.
What this really does is provide the team a level of trust and visibility into what they're doing. It removes the guesswork, takes the agreed-upon metrics for success, and puts them front and center in the conversation. This way, you're executing across the delivery effort involving data every step along the way, rather than trying to sprinkle it in at the end. When you do this, you can take the data and drive your funding process all the way through delivery to a healthy reinvestment, through the metrics that you report out and share and the regular decisions that you make together, driven by that data. You are data-informed. It's about experiences versus good data. Wait, hold on! No, no, no, no. It's about experiences and good data.
See, the data that you have is very, very valuable, but your experiences can't be ignored. Going back to the anecdote about the eclipse, we took our experiences living in the Midwest and used them to drive our decisions the day of. And that feels very natural. That anecdote makes sense. The same has to be true for the way that we make software decisions. We can't become so data-obsessed that we ignore our experiences to deliver a good product. Thank you.
About the event:
Full Stack Friday is a monthly meetup hosted by Devbridge at our Chicago headquarters. We get together to indulge in a delicious pancake breakfast and talk shop. We bring in guest speakers to present talks focused on issues relevant to product people and full stack teams. For more information about the next event, contact us directly.