Documented Failure: Why Detailed Requirements Cost Twice as Much and Deliver Half the Value
Editor's Note: This post was recently expanded on in an article and video about Building a Culture of Innovation.
We often go through contracting phases for custom product builds estimated at over two million dollars. As expected, we are asked to draft a fixed-bid SOW that documents, at a high level of fidelity, the deliverables for each sprint of the project. I couldn’t come up with a worse idea if I tried, and have thus dubbed this a well-documented contract for catastrophic failure.
And even though the Agile Manifesto explicitly states "working software over comprehensive documentation," I always try to understand why a specific pattern or behavior exists in the industry. Purchasing custom software is akin to buying a cat in a bag, I suppose. Personally, I think any furry companion is better than the alternative, but procurement departments have an obligation to protect the interests of the company and make sure the value generated matches or exceeds the dollars spent on the product. Software that delivers only 75% of promised value at 100% of cost is a problem. 50% of value at 200% of cost is a catastrophe, yet those numbers are not rare in the industry.
We Don’t Trust You, Sign Here
So it’s an issue of trust and hedging risk. In any given agreement, two parties come together, each looking out for its own best interests: the buyer must minimize the risk of failed delivery by asking the vendor to guarantee a fixed, agreed-upon scope, while the vendor needs to make sure that value is delivered within the authorized frame of cost and schedule, staying profitable along the way. Both parties face financial penalties upon failure; however, the buyer, it may seem, has little to no control over the success of delivery. Naturally, the tendency in this instance is for the buyer to transfer more of the risk onto the vendor: “you guys are the experts at doing this, right? You should feel confident in your estimate and commit to the not-to-exceed.” We’ll come back to this.
The initial attempt to ensure software is of actual value to the business has always been to document absolutely everything. Business analysts work with project stakeholders to define in meticulous detail the exact needs of each individual department, user, and so on, ad infinitum. A lot of sweat is spilled performing this less-than-glamorous task over a period of three, six, or sometimes even twelve months. This documentation is either produced by the vendor or shared with a group of vendors as part of estimation and contract negotiations, and then baked into the statement of work.
An important paradox to keep in mind: the documentation effort to date has produced negative value for the business, since there’s a cost involved and the business is in no way better positioned to solve the challenges at hand. In fact, there is no solid proof that documentation reduces risk or increases learning - the opposite of a rapid prototype.
Throughout delivery, the custom software attempts to align as closely as possible to the documentation, even when day-to-day needs change. Change requests are issued and signed, rework is completed, and the budget keeps increasing while timelines slip - the exact opposite of the original intent of reducing risk. Once the product enters testing (also performed at the end of the project), all kinds of nasty bugs, issues, and problems start rising to the surface. When the stakeholders finally see it for the first time, it’s not what they thought they would get. The question is, why?
Diminishing Returns from Over-documented Requirements
Most of the unmet expectations are related to the approach taken early in the project due to lack of trust. Let’s take a look at some of them and then dive into best practices:
- Requirements gathering by an analyst ultimately implies design. In other words, the specification often dictates solution design, yet not a single individual from the product team (UX, design, engineering, architecture, DevOps, QA) has any input along the way.
- Large, complex systems are challenging (and pointless) to document thoroughly. The sheer quantity of documentation creates an environment where the product team has no easy way to assimilate all of the knowledge in the specification. The law of diminishing returns applies well here. I have a hard time reading one good book a month, and you expect the product team to digest a 200-page spec that captures detailed business logic? You can hear the engineers snore all the way from Lithuania.
- The project incurs very significant delays due to analysis paralysis. It’s a never-ending story - the deeper you dive into definition, the more you discover, the more you need to capture, the more you need to maintain, and so on. There’s a reason the software industry moved away from hard documentation and toward “living requirements” - user stories in JIRA and knowledge bases in Confluence, for example.
- During requirements collection, the captured needs are heavily biased toward the party being interviewed, and then only partially interpreted when consumed by the product team. For example, in a system with internal users as well as external customers, assumptions are made daily about “what the customer needs,” yet customers are never consulted.
The resulting software costs twice as much and delivers half the value. Iterative delivery alone only partially mitigates some of the risks introduced through over-documentation; I’ve found that requirements altogether need to be approached in a completely different way.
We started with a lack of trust, so let’s get that resolved first and foremost. To establish trust, the product team needs to demonstrate a) an understanding of the domain, b) value for the business (quickly and inexpensively, preferably), and c) a measured delivery cadence that helps predict overall product investment. Here are some of the vehicles for building trust and delivery cadence:
- Using lean requirements and rapid prototyping, a cross-functional team can build a proof of concept in anywhere from a couple of weeks to a couple of months. Even on the larger scale, this would account for only about 5% of the total budget (on a two-million-dollar project, as in my example above), yet mitigate some of the more pervasive risks.
- Using workshops to build alignment through shared understanding, which in turn builds trust.
- The prototype can be tested with stakeholders, customers, users, and other personas that may be impacted by the release of the planned product. This allows the business to justify the investment - something documentation cannot do.
- Feedback collected helps manage bias, uncover risk areas, flesh out and prioritize functionality for the build phase, and overall align the team on the objective.
- Achieve delivery cadence by using agile metrics to increase transparency and identify delivery issues early and often.
With minimal investment and a fraction of the time old-school requirements would have taken, the product team demonstrates domain knowledge, tangible and measurable value for the business, and a delivery velocity that can be used to further narrow the project's delivery cost and schedule.
Just In Time Requirements
Now that we have a prototype the business aligns behind, we can start shipping incremental features using Dual-track Scrum. A flavor of agile, Dual-track allows discovery and requirements definition to happen in parallel with delivery. This is important. By following this process we can incrementally demystify the large and complex, allow the product team to make design decisions along the way, and validate assumptions through user testing with each sprint demo. Agile best practices teach that requirements should say “what” the business needs, while the delivery team decides “how” those needs will be met. This fosters innovation and gives the team room to come up with creative solutions to business problems.
So the buyer is receiving functional software every two weeks. The buyer is also going through an acceptance process with each delivery. Remember how I mentioned at the beginning of the article that, it seemed, the buyer had no control over the success of delivery? Quite the opposite, actually.
Role of Acceptance and Scope Management
Acceptance on a per-sprint basis requires stakeholders to be actively involved. Great software is not built in a vacuum. No one knows your domain and your customers as well as you do. Participating in a demo day and signing off on the functionality as “complete” requires folks to be engaged. It also enables stakeholders to be in control.
Collectively managing priorities and spend on a per-sprint basis puts pressure on all parties - both the buyer and the vendor. The vendor's inclination is to ship as many business-value-generating features as possible within the established budget. The buyer's inclination is to maximize return on investment by making each feature perfect. If we start adding too much chrome, however, we will run out of budget for other features in the backlog, and so a delicate balancing act ensues. We call this process scope trade-offs: in order to add new scope, something else needs to be descoped from the backlog.
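As a toy illustration of the scope trade-off rule (every feature name and point value below is invented, not from any real backlog), the idea is simply that new scope may only enter the backlog if the items descoped free up at least as many story points:

```python
def trade_scope(backlog, new_item, descoped_names):
    """Add new_item to backlog only if the descoped items free up
    at least as many story points as the new item costs.

    backlog: dict of {feature_name: story_points}
    new_item: (feature_name, story_points) tuple
    descoped_names: feature names to drop from the backlog
    """
    name, points = new_item
    freed = sum(backlog[n] for n in descoped_names)
    if freed < points:
        raise ValueError("not enough points descoped to stay on budget")
    for n in descoped_names:
        del backlog[n]
    backlog[name] = points
    return backlog

# Hypothetical trade: a 13-point integration enters the backlog only
# because an 8-point report and a 5-point export are descoped for it.
backlog = {"report": 8, "export": 5, "search": 3}
trade_scope(backlog, ("integration", 13), ["report", "export"])
print(backlog)  # {'search': 3, 'integration': 13}
```

In practice the trade-off is a conversation, not a function call, but the constraint is the same: the budget line is fixed, so scope is the variable.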
Building a Hog on Budget
Do you remember the popular TV show about West Coast Choppers? Those guys built some of the most incredible custom motorcycles in the world. Building products, or, specifically, building custom software is… custom, much like those bikes. You can build an effective transportation vehicle for ten thousand dollars, or you could mold the demigod of all hogs for a solid quarter million. The choice is yours, but the fundamental business goal of getting from point A to point B will be served by both machines.
The metric commonly used in agile to measure the speed at which a team is able to ship incremental functionality is called velocity. Velocity varies somewhat from sprint to sprint, but ultimately allows us to project total cost of delivery based on remaining items in the backlog. For example, a team delivering 20 story points in a sprint will complete a 100 point project in five sprints.
The second metric is burn rate. Burn rate tells us the cost of retaining the team per sprint; the amount varies with team size (or squad size on larger projects).
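The two metrics combine into a simple projection. Here's a minimal sketch (the $50,000-per-sprint burn rate is a made-up figure for illustration; only the 100-point/20-velocity example comes from the text above):

```python
import math

def project_remaining(backlog_points, velocity_per_sprint, burn_rate_per_sprint):
    """Project remaining sprints and cost from agile metrics.

    backlog_points: story points left in the backlog
    velocity_per_sprint: average points the team completes per sprint
    burn_rate_per_sprint: cost of retaining the team for one sprint
    """
    # Partial sprints still cost a full sprint of burn, hence ceil().
    sprints = math.ceil(backlog_points / velocity_per_sprint)
    return sprints, sprints * burn_rate_per_sprint

# The example from the text: 100 points at 20 points/sprint -> 5 sprints,
# which at a hypothetical $50,000/sprint projects to $250,000 remaining.
sprints, cost = project_remaining(100, 20, 50_000)
print(sprints, cost)  # 5 250000
```

Re-running this projection after every sprint, with the velocity actually observed, is what makes the per-sprint risk management described below possible.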
By using velocity and burn rate we can very quickly represent the health of the project. Even better, risk is managed on a per-sprint basis. Let’s pretend, for a minute, that the buyer selects an inexperienced vendor to launch their product to market. The vendor will over-promise and under-deliver, but you, as the buyer, will be able to tell within the first couple of sprints! Good luck trying to detect failure in waterfall delivery. I remember an Oracle implementation vendor that, in each status meeting, would tell our client they were 75% complete with the project. A year later and over a million dollars down the drain, they were terminated without completing the project.
What I’m getting at is that just-in-time requirements and agile are very transparent processes built on trust. Both the vendor and the buyer are held mutually accountable through functional demos, sprint acceptance, velocity reports, and so on.
Small Wins vs. a Big Flop
Organizations structure roadmap delivery in a variety of ways these days. One common approach is to launch a Minimum Viable Product (MVP) - software that delivers business value but sacrifices a lot of the luxuries and chrome of a fully developed product. The obvious benefits include a shorter go-to-market timeline, market input without over-engineering unnecessary features, and a more advantageous funding strategy: prove value, then get more money.
When it comes to innovation, small, incremental wins are much more impactful than a big bang. An enterprise that responds to its customers by launching new features every two weeks is much better off than one that launches a feature-rich, expensive platform… that ends up delivering a fraction of the originally predicted value for twice the budget.
I’ve learned that things change while we build software. It’s unavoidable. Actually, it should be expected and welcomed. The industry, customer needs, available technologies, and even your business process all change. Should we plow ahead, heads down and blinders on, with the requirement decisions made over a year ago, or react to this change? And if we rise to the occasion of building software that exceeds projected business value, perhaps you shouldn’t invest so much energy and money into building detailed requirements?