Product-centric funding

How to configure the software development funding process to incentivize measurable product outcomes

Inherent risks of classic funding techniques

The software industry doesn’t have the best reputation when it comes to delivering on promised value. It’s a familiar tale. A business sets out to build a product. It turns to a supplier for support. Promises are made between the two organizations, but somehow the outcome doesn’t line up with the original intent, and the work costs twice as much as expected.

According to McKinsey, “On average, large projects run 45 percent over budget and 7 percent over time, while delivering 56 percent less value than predicted.” Ample research ties delivery failure to organizational culture, engineering maturity, and outdated processes. Taking this a level deeper, how does enterprise funding predestine a software project to fail?

Identifying the typical triggers that erode value allows us to establish best practices that help the enterprise CIO, CPO, and CTO deliver a greater return on every dollar spent. As it turns out, the required shifts in funding are mostly behavioral and retain the expected level of accountability and risk management.

[Figure: waterfall process]

The BRD fails

A large international bank sets out to build a custom web banking experience. A group of analysts spends six months gathering requirements to identify the features needed to reach parity with the current, rigid, outdated off-the-shelf product, as well as the new features the business perceives as necessary. The requirements get packaged into a business requirements document (BRD), which is reviewed and approved by the organization’s senior leadership. Then, funds are allocated to the initiative.

[Figure: funding failure]

Everyone is slightly anxious, as this is a significant dollar amount and a single opportunity to get the project completed correctly. While trying to manage risk and avoid adverse outcomes, the business works to illustrate all identified features and workflows with wireframes and design. These steps help the business uncover features missed by the analysts. The design project takes six months and results in a library of four hundred screens.

Due to the complexity introduced in the design phase, the engineers, who were not initially consulted, build a large, padded estimate. No one wants to take the bullet should this build go sideways. After an agreement is reached, the engineering team starts building the project three months later. Halfway through development, the engineering team finds that a few requirements in the design wireframes are unclear. To get clarity, they ask the business to review the work completed to date. When reviewing the new workflows, the business team quickly realizes the product is not as intuitive as originally proposed by the design agency. Development is put on hold while the designers take another couple of months to revise the four hundred screens based on the new direction. In the meantime, some front-end refactoring takes place. When the development team resumes its work, it takes another few months before the project is finally ready to go into User Acceptance Testing (UAT).

By now, the budget is depleted from unplanned revisions and necessary refactoring. UAT surfaces additional challenges with several of the services held back during development, further delaying the launch by another few months. QA costs continue to rise. Timelines continue to slip. Once the product finally launches, analytics indicate that customers seldom use the expensive new features.

This is a true story: a real-world example in which the Devbridge process methodology was intentionally overridden to comply with a risk-averse culture and a lack of trust. A retrospective revealed that the 2.5-year project could have been completed in under twelve months using modern, iterative delivery methods.

Continue to: Enterprise organizational structures impede product design