The cross-functional agile team playbook

How to avoid waterfall mishaps and build successful product teams

Learn from failure

Imagine a new product being planned for delivery with an investment value of over $20 million. The target launch is two years out, and the complexity is relatively high—the product is something the company has not built before. As expected, the business requires a guarantee. In this scenario, the guarantee takes shape as an elaborate set of documents, mockups, and schedules that everyone reviews and approves. “This is exactly what we need,” the stakeholders say. Risk mitigation is front and center, with one attempt to get everything right.

The Agile Manifesto explicitly values working software over comprehensive documentation, but that's hardly going to help a siloed organization where IT and business don't see eye to eye. The business wants to see the value generated match or exceed the dollars spent on the product. Software that delivers only 75 percent of the promised value at 100 percent of the cost is a problem; 50 percent of the value at 200 percent of the cost is a catastrophe. These numbers are not rare in the industry. IT, on the other hand, is given orders and must meet the specification provided by the business. Here's where the waterfall process breaks down.

Failure 1: Siloed organizational structure

The company defines requirements and budget; IT handles the implementation. The design (or User Experience) team is often an isolated department injected between business and IT. When IT fails to meet expectations, the business erects safeguards, and trust erodes.

The business plans new digital products based on assumptions and annual funding cycles, locking in requirements ahead of time. IT must deliver on those fixed requirements in isolation, without involving the end user in the process. Delivery timelines of more than a year are common, with minimal learning along the way. Risk continues to mount as the work proceeds on assumptions made during the research phase of the project.

The design team is tasked with creating wireframes and the UI of the product, resulting in an artifact with close to four hundred interactive screens. Design decisions made upfront become detrimental to success as they cannot be tested or validated with users. Once the design is complete, the team is deallocated, and the funds for this phase are exhausted. There is no room to go back and refine.

A siloed organizational structure creates a conflict of interest between IT and the business, shaping an environment where cooperation and trust simply can't exist.

Another side effect is that design decisions made without consulting engineering may carry high architectural costs—something that could have been completely avoided through a collaborative workflow. The same can be said for quality assurance (QA). An external QA team often focuses on a costly brute-force testing strategy instead of becoming a strategic partner in the cross-functional team and designing a defect-prevention strategy (working on the source of the problem, not the symptoms). Delayed releases and mounting costs continue to erode trust between IT and business, resulting in a combative climate that makes everyone's work harder than it should be.

Recommended approach

  • Pilot a cross-functional team. Include the core disciplines of product management, product design, and software engineering (e.g., testing, development) with support from DevOps, security, and compliance. Empower the team to share responsibility for the results, isolate it from the archaic structure outside, and use the model to showcase results to the broader organization. Success is celebrated. Soon other employees and teams want to convert to a cross-functional model. Consider using an external partner such as Devbridge to pilot the transformation, as it is much easier to adopt an existing, proven working process than to build one from scratch.

  • Transform to a structure that folds all delivery disciplines into a single organization. Create multiple small, nimble teams within the structure. The team's direction is set by a Product Owner who reports to a Product Manager. The budget and the outcomes are owned by the Chief Product Officer. Allow the team to own the business outcomes rather than the requirements. Smart people can make decisions on their own.

  • Establish clear business outcomes. Share the targets with all product teams. Remember the Agile Manifesto, valuing working software over documentation. Outcomes should be specific, measurable, and achievable. Use evidence of good work, such as sprint demos, to incentivize sponsors and to endorse the alternative funding and delivery model.

  • Build a communication and change management plan. Center the messaging and strategy around the success of the new model. Adoption is based 50 percent on skill set and 50 percent on perception management.

Failure 2: Detailed requirements over team collaboration

Most unmet expectations result from a lack of trust early on. Requirements are collected by an analyst and locked in before the project kicks off to reassure business sponsors. Clearly defining the work upfront helps sponsors understand what to expect and ensures the final product delivers on those expectations. While this appears sound, the expectations are often biased (stemming from a party of one), short-sighted, and disconnected from what the business actually needs.

The requirements lack full context as user research and user testing are not part of the waterfall workflow.

Throughout delivery, developers attempt to align the product to the requirements documentation, even under circumstances where the needs change. In such instances, change requests are issued and signed, rework is completed, the budget increases continuously, and the timeline continues to slip—the exact opposite outcome from the original intent of reducing risk through documentation. Once the product enters testing at the end of the project, all sorts of nasty bugs, issues, and problems arise. When the stakeholders finally see the product for the first time, it’s not what they thought they would get.

The question is, why didn’t the team deliver on expectations—even if original requirements were met?

Requirements implying design: Specifications often dictate the solution design, yet not a single member of the product team, such as a product designer, product manager, or engineer, has any input.

Complexity understanding: Large, complex systems are challenging (and pointless) to document thoroughly. A burdensome quantity of documentation creates an environment where the product team has no practical way to absorb the information captured in the specifications.

Paralysis through analysis: The project experiences significant delays as the analyst continues to discover edge cases. It's a never-ending story that quickly spirals out of control. The deeper the dive into defining the solution, the more is discovered, and the more elements need to be captured and maintained. This is why the software industry has moved away from hard documentation and into “living requirements” through self-documenting code, user stories in Jira, and knowledge bases in Confluence.

Biased data: Throughout the collection of requirements, needs are heavily biased toward the party being interviewed. For example, in a system where there are internal users as well as external customers, assumptions are made daily about “what the customer needs,” yet customers are never consulted.

Recommended approach

  • Run a Lean Requirements workshop. Use the workshop to build a shared understanding of the problem and the desired outcome. The session is run by the cross-functional team with up to seven stakeholders. Facilitate service design, story mapping, system roles, and user workflow exercises. The activities should take two days to create the initial roadmap—less time than the weeks needed to craft a biased requirements document. Create enough requirements to a) understand the holistic scale of the effort for investment and b) start building small, incremental pieces of the software to validate assumptions.

  • Leverage research and discovery efforts. Try these tactics if alignment and clear understanding cannot be reached during a workshop. A lack of clarity may indicate that the business is aware of the problem but not aligned on what the opportunity is or how best to remedy the issue.

  • Align on enterprise architecture best practices. Don't get bogged down by a mandate to use a specific stack. Modern, containerized software architecture allows various stacks to commingle. Plan ahead by agreeing on an acceptable amount of technical debt for the product. Technical debt happens. Agreement on a reasonable amount of debt is important, especially in the early stages of the build, where overinvestment in the industrialization of software may be counterproductive.

  • Define a product roadmap. Plan to build iteratively, working towards an initial release in three to four months. Release as often as possible to internal teams, friends and family, and any other pilot groups to validate real-world scenarios. Don't rely solely on stakeholder feedback during sprint demos. Refine and re-estimate the backlog, then adjust the release strategy based on collected data. Consult with the business about a suitable launch-to-market timetable, as ambiguity decreases with each additional sprint.

  • Demonstrate value through working software and user acceptance. Prove the product works well. Demo and share metrics. Invest in building trust and confidence, the currency exchanged between the team and the business, to build stronger relationships.

Failure 3: Non-iterative process

Project managers operating in waterfall do not have control over the definition of the deliverable—that was the job of the analysts who wrote the requirements. Even though users may have been consulted during requirement gathering, further exposure is delayed until after the product launch. While building the product, the team lacks the insight to measure effectiveness or learn along the way. Once the vector is set, the direction does not change without major refactoring and reinvestment.

Learning and change come too late and cost too much to address in waterfall projects.

When a working product is only revealed at the end of the project, users and stakeholders have no way to provide meaningful feedback until it is too late. Without user involvement, the client side struggles to commit to the system and can even turn hostile, resisting the change. Even when all members of the team have the best intentions, classic project management results in a lot of paranoid finger-pointing, frustration over exceeded budgets, and disappointment over underdelivered value.

By the time an initial version of the product ships, funds and time have been exhausted. Learning acquired during the launch requires additional funding. There is no way to build iteratively, test with the target audience, and refine the product along the way.

Recommended approach

  • Set hands-on acceptance criteria with stakeholders. An important way we've extended the scrum demo is by mandating hands-on acceptance of delivered features by project stakeholders. The wider team has five business days after the end of the sprint to play with delivered functionality in a staging environment and provide feedback based on learnings. This approach improves stakeholder engagement, builds alignment, and facilitates long-term buy-in to the new process.

  • Run analytics from day one. This is a great philosophy to follow on any new product build. Implement an analytics framework in the product from the first sprint. While KPIs vary from product to product, analytics suites such as Mixpanel and Adobe Analytics measure the necessary functionality. Being armed with data upfront ensures that any meaningful functionality becomes a candidate for tracking as the team plans the sprint. With analytics hooks available in the project, each engineer can integrate tracking easily (see the first sketch after this list).

  • Test with real data and users. Agile forces teams to ship working, granular blocks of software. For example, the ability to assign beneficiaries to a policy could be an isolated component delivered within a single sprint. Consider testing each component using anonymized data that is close to production in volume and type (see the second sketch after this list). This approach surfaces design shortcomings and potential performance bottlenecks early.

  • Establish, track, and report on performance. We recommend establishing baseline performance standards in pre-production environments and early project stages. Some performance scenarios can be imitated with load tests; others require a staged production rollout to properly assess (see the third sketch after this list). Having performance goals defined early leads the team to design the application with performance and scalability in mind.
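
To make the analytics hooks concrete, below is a minimal TypeScript sketch of a provider-agnostic tracking wrapper. The AnalyticsClient interface, configureAnalytics, trackEvent, and the example event name are hypothetical illustrations, not the actual Mixpanel or Adobe Analytics API.

```typescript
// analytics.ts: a thin, provider-agnostic tracking hook (hypothetical names).
// Wrapping the vendor SDK behind one interface lets the team instrument the
// product from the first sprint and swap suites later without rework.

export interface AnalyticsClient {
  track(event: string, properties?: Record<string, unknown>): void;
}

// A console-backed client is enough for sprint one; replace it with a vendor
// adapter once the analytics suite is chosen.
export const consoleClient: AnalyticsClient = {
  track(event, properties) {
    console.log(`[analytics] ${event}`, properties ?? {});
  },
};

let client: AnalyticsClient = consoleClient;

export function configureAnalytics(next: AnalyticsClient): void {
  client = next;
}

// The single hook engineers call when a sprint delivers meaningful functionality.
export function trackEvent(event: string, properties?: Record<string, unknown>): void {
  client.track(event, properties);
}

// Example: instrumenting a feature shipped in a single sprint.
trackEvent("beneficiary_assigned", { policyType: "term-life", channel: "web" });
```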
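
The anonymized-data tactic can be sketched as a small transformation over production-shaped records. The CustomerRecord shape and maskRecord helper below are invented for illustration; the point is to keep the volume and shape of production data while stripping identifying values.

```typescript
// anonymize.ts: masks personally identifiable fields while preserving the
// shape of production data, so staging tests stay realistic.

interface CustomerRecord {
  id: number;
  name: string;
  email: string;
  policyNumber: string;
}

// Replace identifying values with deterministic stand-ins keyed by id so that
// relationships between records survive anonymization.
function maskRecord(record: CustomerRecord): CustomerRecord {
  return {
    ...record,
    name: `Customer ${record.id}`,
    email: `customer-${record.id}@example.com`,
    policyNumber: record.policyNumber.replace(/\d/g, "0"),
  };
}

const productionSample: CustomerRecord[] = [
  { id: 101, name: "Jane Roe", email: "jane@corp.example", policyNumber: "PN-4821" },
];

// Same row count and field types as production; none of the real values.
const stagingFixture = productionSample.map(maskRecord);
console.log(stagingFixture);
```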
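
For performance baselines, one lightweight option is a latency probe that runs in the delivery pipeline. This is only a sketch of the idea: the endpoint URL, request count, and p95 budget are placeholders, and a dedicated load-testing tool would do this job properly.

```typescript
// latency-probe.ts: fires sequential requests at a staging endpoint and
// compares the p95 latency against an agreed budget (Node 18+, global fetch).

const ENDPOINT = "https://staging.example.com/health"; // placeholder URL
const REQUESTS = 50;
const P95_BUDGET_MS = 300; // hypothetical baseline agreed with the team

async function measure(): Promise<number[]> {
  const samples: number[] = [];
  for (let i = 0; i < REQUESTS; i++) {
    const start = performance.now();
    await fetch(ENDPOINT);
    samples.push(performance.now() - start);
  }
  return samples;
}

function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

measure().then((samples) => {
  const p95 = percentile(samples, 95);
  console.log(`p95 latency: ${p95.toFixed(1)} ms (budget: ${P95_BUDGET_MS} ms)`);
  if (p95 > P95_BUDGET_MS) process.exitCode = 1; // fail the pipeline on regression
});
```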

Continue to: Cross-functional teams required