By Aurimas Adomavicius · Topic: Strategy

Five steps for Agile transformation in financial services

Evidence indicates that agile software delivery results in a higher success rate for software projects than waterfall delivery. The product (I’m going to say “product” instead of “project” from here on, the agile way) is of higher quality, quality assurance costs go down, and more scope can be shipped to market in a shorter period of time. In fact, we’ve seen a tenfold decrease in delivery costs for identical software products at tier 1 banks when an agile approach and toolset were used. Why, then, do some companies insist on the slower waterfall process?

Success of agile vs. waterfall

Blame the industry. Highly regulated industries such as financial services, healthcare, and electric power all struggle with agile adoption due to the burden of regulatory compliance. Processes established years ago still govern, even though the software delivery landscape has changed dramatically. What we’ve come to discover, however, is that these organizations can get eighty percent of the value from agile while changing only a fraction of their processes and practices, without in any way compromising on regulatory requirements. In fact, quite the opposite.


Let’s start with biting off a smaller piece of the apple to help with our new, agile way of chewing.

Step 1: Reduce project (product) size. 

All large transformations start with small success stories. Starting on a large, complex project as a trial run for agile, a process that is new to your organization, is probably not the best idea.

A divide-and-conquer approach is preferred when introducing agile to the enterprise. We select an initiative from the product portfolio and distill it into its smallest functional modules, allowing the team to iterate on sub-components of the overall product. This gives the cross-functional team the opportunity to familiarize themselves with the workflow, experience the rhythm of agile rituals, and build confidence in each other.

Selecting a smaller project to start

The team is encouraged to fail. Failing on a small piece of delivery, a sprint, fosters collaboration and learning from mistakes, and limits the financial impact of failure since the scope is not significant. Long term, the team grows to understand that failure is a natural part of software delivery and that the real indication of success is shipping the right product for the users. In fact, embracing failure also aids organizations in adopting a more innovative mindset - not all ideas are meant to succeed, but you’re more likely to hit the jackpot if you allow your teams to experiment.

But let’s get back to project sizing. What’s too small and what’s too large for getting started with iterative delivery? In short - anything less than 1,500 man-hours is probably too small, and anything over 5,000 is risky … but here’s how we get to those numbers.

To begin with - we need a team. A team implies more than a couple of engineers, so let’s break down a typical, small delivery team:

Let’s start with the Product Manager - the individual responsible for building the right software for the users. The PM represents the business and establishes the minimum viable scope for the product to meet the customers’ needs. In classic waterfall delivery, a subset of the product manager’s responsibilities is often handled by business analysts. The involvement of the Product Manager in day-to-day delivery is limited, so let’s say they will have 25% utilization of total productive weekly hours (in other words, the product manager will spend roughly 8 productive hours a week helping the team define what they’re building on one product). By the way - these hours vary based on the size and complexity of the product managed, but that is outside the scope of this article.

The Team Lead is responsible for distributing work across the team, grooming the backlog with the PM, and establishing the high-level technical architecture. Let’s say that our team lead is fully allocated to this project and can spend 35 productive hours a week on this initiative. Some agile teams have dedicated Scrum Masters; here we’ve split that role between the Team Lead and the Product Manager.

We may also need a designer, a quality assurance analyst (because we want to automate our testing throughout delivery), and an additional engineer for delivery bandwidth. By the time we’re finished building our cross-functional team we are looking at roughly four FTEs (full-time equivalents). There are more members in the team, but some are only partially allocated and/or shared across several projects. A team of four burns approximately 140 hours a week, or 280 hours a sprint (2 weeks × 35 hours × 4 people).

Cross-functional team

The benefits of agile surface when we can iterate over the product - building small functional pieces of the overall deliverable, testing with customers, and revising as we go. Let’s give the team some room to fail and improve by setting our overall project length at six sprints. A bit of simple math and we can safely say that the smallest project we can attempt with agile is around 1,680 man-hours. I don’t want to get into estimation methodology just yet, but it’s fine to start with the estimate your team prepares for waterfall delivery and then attempt to ship it using agile.
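The back-of-the-envelope sizing above can be sketched in a few lines. This is a rough model, not a formula: the allocation percentages and productive-hour figures are the assumptions made in the text, and real teams will vary.

```python
# Rough sizing model for the smallest sensible agile project.
# All allocations and hour figures are this article's assumptions.

PRODUCTIVE_HOURS_PER_WEEK = 35  # productive hours per person per week
SPRINT_WEEKS = 2                # sprint length in weeks
TRIAL_SPRINTS = 6               # room to fail, learn, and improve

# Role -> share of a full-time allocation on this one product
allocations = {
    "product_manager": 0.25,  # ~8 productive hours/week
    "team_lead": 1.0,
    "designer": 1.0,
    "qa_analyst": 1.0,
    "engineer": 1.0,
}

fte = round(sum(allocations.values()))         # "roughly four FTEs"
weekly_burn = fte * PRODUCTIVE_HOURS_PER_WEEK  # 140 hours/week
sprint_burn = weekly_burn * SPRINT_WEEKS       # 280 hours/sprint
minimum_project = sprint_burn * TRIAL_SPRINTS  # 1,680 man-hours

print(f"{weekly_burn} h/week, {sprint_burn} h/sprint, "
      f"{minimum_project} man-hours minimum")
```

Swap in your own team shape and sprint count to get a floor for your first agile initiative.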

By the way - you’re going to run into a lot of naysayers who will tell you that all of the projects in the enterprise are multi-million dollar affairs. That’s simply not true. Dig deep enough and each product, each technical implementation can be distilled into sub-components that are perfect fits for iterative delivery. We share how to blend waterfall delivery with agile when separate teams are using different methodologies in this article.

Ok, we’re done with step 1 - selecting a small project to start your agile adoption. Onwards to step 2.

Step 2: Use Lean requirements to jump-start delivery

Let me know if this sounds familiar. A business analyst is assigned to the project and proceeds to create an extensive library of assets: an application design document (in Microsoft Word), the design of web services, interfaces, and data architecture, a workflow schema, a user interface document, a high-level workflow document, and a detailed user story that binds all of these assets together. This takes up to six months for a small to mid-size enterprise project. No business value has been created in those six months because the customer is in no way closer to a functional product.

Time to value - waterfall

And yet the market doesn’t sleep. Requirements start changing before the documentation is finished. Word and Excel documents are static - they provide no support for versioning or change history, and thus become hard to maintain and keep up to date. Once finished, they’re more than intimidating: detailed charts running for hundreds of pages, not at all accessible or simple for the team to digest. Lastly, all technical and UI design decisions within the requirements documentation have to date been made by a solo analyst who operates without the benefit of a cross-functional team or the knowledge it brings to the table.

So: six months, little value, already outdated. Time to try lean requirements. Without diving into too much history, you should know that the objective of Lean is to maximize customer value while minimizing waste. It goes way back, Toyota being the pioneer in the automotive space for transforming what Ford was doing with the Model T into a new, flexible framework.

Let’s focus on a couple of rather simple tools within the lean framework that are irreplaceable in our requirements toolkit when building new (or replacing old) software products.

Time to value - agile

Unlike classic requirements gathering, we start lean workshops as a group. Our primary objective is to arrive at a shared understanding of the deliverable as a team, while including stakeholders and customer needs in the exercise. During a workshop that can run from as short as four hours to as long as three days, we establish personas, perform a story-mapping exercise, draw user flow diagrams, and go through basic prioritization exercises for the captured scope. Let’s look at each in detail.

User personas capture the nuances of the users who will benefit from the software we’re building. We keep referencing our personas as we add functionality to the story map and think about how said functionality impacts their routines. Personas capture more than just the functional responsibilities of the user type - you can read a more in-depth article on the exercise here.

Persona exercise

Story mapping is a collaborative process of capturing product requirements from the cross-functional team in the workshop. The approach is simple - there are no bad ideas, and your idea must fit on a post-it note. Everyone in the room takes time writing down their thoughts and then adding them to a whiteboard. The Product Manager, in the meantime, starts eliminating duplicates and grouping ideas into Epics - larger blocks of product functionality. Here are a couple of videos on how telling stories helps define requirements.

Once a story map has been established, the team can collaboratively evaluate the complexity of individual stories. Since each story is relatively simple (otherwise it couldn’t fit on a post-it note), t-shirt-size estimation is used to evaluate the overall impact of each. Is Facebook sign-in a small, medium, or extra-large story? Stakeholders can also debate the sequencing of stories, influencing the delivery schedule and building a Release Schedule and Product Roadmap.

Storymap exercise
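To show how t-shirt sizes roll up into a rough roadmap figure, here is a minimal sketch. The hour value assigned to each size and the stories themselves are made-up assumptions for illustration; teams calibrate their own buckets over time.

```python
# T-shirt sizes mapped to rough effort buckets in hours.
# Bucket values and stories below are illustrative assumptions only.
TSHIRT_HOURS = {"S": 20, "M": 40, "L": 80, "XL": 160}

def estimate_epic(stories):
    """Roll t-shirt-sized stories up into a rough epic-level estimate."""
    return sum(TSHIRT_HOURS[size] for _title, size in stories)

sign_in_epic = [
    ("Email sign up", "M"),
    ("Facebook sign in", "L"),   # the story debated above
    ("Password reset", "S"),
]
print(estimate_epic(sign_in_epic), "hours")  # 140 hours
```

The point is not precision - it is that sizing becomes a five-minute group conversation instead of a week of bottom-up estimation.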

As you can already tell, lean requirements are accessible, create a shared understanding, and allow all members of the cross-functional team to contribute their ideas for the product (a team of ten is better than doing it solo). It’s also low cost, takes only a couple of days, and provides just enough information for the team to start moving and shipping functional software. A side effect of involving senior executives in workshops is that alignment and buy-in happen through participation. You no longer have a swoop-and-poop project derailment halfway through the project, because your sponsor has committed to the scope and schedule in the workshop.

The beauty of agile, or specifically dual-track Scrum, is that requirements are created just in time for delivery of the following sprint. The acceptance criteria on each story are specified during backlog grooming and live in Jira - a product management tool. Each time requirements or documentation change, they can be easily updated in a centralized, living repository that everyone on the team has access to.

User story acceptance criteria

To summarize, lean requirements help us ship working software over comprehensive documentation, which is one of the cornerstones of the agile manifesto. Now that we know what we’re building, let’s continue to how we’re going to easily collaborate across a distributed, cross-functional team.

Further reading: Documented Failure: Why Detailed Requirements Cost Twice as Much and Deliver Half the Value.

Step 3: Upgrade your tools to coordinate communication 

Worst case scenario: your project is managed in Microsoft Project, your scope is captured in static Word and Excel documents, your source code is stored in an instance of TFS, and your quality assurance team logs defects in HP Quality Center. Great. An ecosystem that provides zero transparency, no integration, and disconnected communication between teams. No wonder projects cost twice as much and deliver half the value. Forget all that.

Take a look at Jira - a tool created by the folks at Atlassian. Jira is the industry gold standard for software product delivery and helps all members of the team stay connected and up to date. There are other tools on the market, but none have the maturity and breadth of functionality offered by the Atlassian family of products (HipChat, Confluence, etc.).

Jira backlog

You can get started by licensing a cloud instance of Jira if you don’t want to get your infrastructure folks involved and that’s a great way to move lean. Once you’re more comfortable and need a more granular, configurable instance you can opt for their enterprise package and set up a dedicated application within your own datacenter. 

But let’s back up for a moment. I don’t care what tools you use. Our objective is to promote individuals and interactions - another cornerstone of the agile manifesto. Jira works because it gets out of its own way and allows teams to have meaningful conversations, because they are not burdened by busywork, documentation, and disconnected sources of truth (we call this excel juggling - it’s quite the art in the enterprise).

Ideally, the team will use Jira, or another tool, to manage its backlog, define user stories, capture acceptance criteria, upload and attach visual design assets, collaborate through comments, and monitor dependencies and blockers. Added benefits include a living audit trail of all activity within the project (e.g. how did the acceptance criteria on this user story change over time?), in-depth reporting on delivery, roadmap planning, and more.

Jira reporting
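For teams that want to script this, Jira also exposes a REST API for creating issues. The sketch below builds a create-issue payload in the documented v2 format; the project key, story text, and acceptance criteria are hypothetical, and you would POST the JSON to your own instance with your own authentication.

```python
import json

def user_story_payload(project_key, summary, acceptance_criteria):
    """Build a create-issue payload in Jira's REST API v2 format,
    embedding acceptance criteria in the description field."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "issuetype": {"name": "Story"},
            "description": "Acceptance criteria:\n"
            + "\n".join(f"* {c}" for c in acceptance_criteria),
        }
    }

# Hypothetical story for a retail banking product
payload = user_story_payload(
    "PAY",
    "As a customer, I can check my account balance",
    ["Balance is shown within 2 seconds",
     "Balance reflects pending transactions"],
)
# POST this JSON to https://<your-jira>/rest/api/2/issue with auth headers
print(json.dumps(payload, indent=2))
```

Custom EDF fields map onto the same structure via Jira custom field IDs, which is exactly the compliance-mapping exercise described below.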

Hold up. “Our Enterprise Delivery Framework team has told us that our Excel documents are necessary to stay compliant with regulations and best practices, and…” Yes. We’re very well aware. Here’s the thing - Jira can be customized to accept all of the custom fields your EDF team wants you to track. In fact, we went through an exercise with a Tier 1 bank mapping their Excel-based user story document to a user story in Jira and had to make absolutely no compromises. And the versioning and tracking in Jira are miles ahead of versioning a static spreadsheet that was meant to be used by accountants to calculate the office paperclip budget.

Suffice it to say, we’ve only skimmed the surface of what a shared toolset offers. There’s reporting, transparency, delivery KPIs, and a ton of other metrics that help both the teams and the business make better, just-in-time decisions when building software.

Step 4: Invest in Continuous Integration and Delivery

More often than not we see very limited automation in how environments are set up, how source code is built, and how deployments are handled from environment to environment.

A sample scenario looks something like this: engineers work on local machines and check code into a source repository (TFS, Git, etc.). Each developer tests their code and builds the solution locally to make sure it runs as expected, ad infinitum, until the project is finished and ready for testing. The SIT/UAT phase kicks off and all hell breaks loose - integrations don’t work, defects come crashing down in endless Excel documents exported from the enterprise quality assurance system, and timelines slip. Worst case, all projections of cost and timeline get thrown out the window because these problems were discovered only after delivery was finished - in other words, at the end of the project, when the majority of the budget has been spent.

Plan vs reality in waterfall

Continuous Integration to the rescue. Let’s start with a short definition. Continuous Integration is a development practice and toolset that has engineers integrate their code into a shared repository at least daily; CI build servers run automated tests and build the product, allowing individual contributors to detect defects early and often (read: as soon as they check in source code).
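The core of that definition is a fail-fast loop, which can be illustrated with a toy sketch. Real servers like Jenkins or TeamCity shell out to compilers and test runners and add agents, queues, and reporting on top, but the logic is the same; the step names and commit hash below are invented.

```python
# Toy model of a CI server's fail-fast loop. Each step is a callable
# returning True on success; a real server would shell out to build
# tools and test runners instead.
def ci_build(commit_sha, steps):
    for name, step in steps:
        if not step():
            print(f"Build of {commit_sha[:7]} FAILED at step: {name}")
            return False  # stop immediately and notify the committer
    print(f"Build of {commit_sha[:7]} passed")
    return True

# Simulated pipeline for an invented commit hash
ci_build("9f2c1ab47d", [
    ("compile",           lambda: True),
    ("unit tests",        lambda: True),
    ("integration tests", lambda: True),
])
```

Because the loop runs on every check-in, the feedback gap between writing a defect and discovering it shrinks from months (SIT/UAT) to minutes.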

For the sake of keeping this white paper shorter than Homer’s Odyssey, I will assume your applications have separate environments for development, staging, quality assurance/UAT, and production. If they do not, invest in infrastructure before standing up a build server and implementing continuous integration. Hardware requirements for dev/staging are likely a fraction of your production instance, so costs should be reasonable.

Word of caution: setting up continuous integration involves infrastructure teams, hardware, and software licensing. It’s important to note, however, that environment setup costs are downright puny compared to the lowered quality assurance costs, the improved quality of the product, and the increased delivery velocity. Let’s look at a specific working example as a guideline for CI costs and configuration.

Two leading platforms are TeamCity (licensed) and Jenkins (open source). In either scenario you will need to set up a build server, as well as agent servers for your endpoints (e.g. web application, native mobile apps, etc.).

Infrastructure needs for agile

Here’s annual pricing for a template environment; prices are based on publicly available Amazon AWS pricing as of May 2016.

Download Environment Pricing Cheat Sheet

So what are the benefits? To start, you avoid a lot of merge conflicts because everyone is using a single repository, committing, and building every day (and often several times a day). Deployment is automated. Single click. You go to the TeamCity control panel, initiate deployment, and watch as your production environment is updated to a copy of the application in your QA/UAT environment. Or isn’t, if there are build errors - but there likely aren’t, because you have been building every day, multiple times a day, across multiple machines, and running automated tests. Your delivery becomes predictable. There are no last-minute hot-fixes. No calls at night or on the weekends, unless you’re gathering up the team to have a beer and celebrate another successful product delivery. Your infrastructure, support, and quality assurance costs go down because you get more done, quicker, with less.

I touched on several important subjects in this section and most - at a very conceptual level. More detail can be found in our article on Continuous Integration, Continuous Deployment, and Continuous Delivery.

Step four complete. Let’s continue and discuss how we can build higher quality software while spending less money on quality assurance.

Step 5: Integrate your QA and delivery teams

Banks use separate teams for delivery and quality assurance. The intent, from my understanding, is to remove the conflict of interest in instances where a delivery team might misrepresent the actual quality of the deliverable or not test the application enough. It also provides an additional layer of testing and assurance for software applications of a sensitive nature.

More often than not, a project plan will have User Acceptance Testing (UAT) and System Integration Testing (SIT) phases somewhere at the end of delivery. During UAT, testers run the product through specific scenarios that mimic production use: a user checks their account balance by tapping “Account Balance”, a user makes a transfer and receives an SMS confirmation, and so on. SIT deals with the technical integrations and looks at the product as a whole - not just the application being launched to market, but also the dependencies, the integrations, authentication handlers, underlying APIs, etc.

QA costs - waterfall

The problem with this approach is that a project manager planning this waterfall exercise in pursuit of better quality has absolutely no control over the outcome. Let’s pretend for a minute that the delivery team did a mediocre job. Quality Assurance spins up their defect Gatling guns, and the delivery team is unable to resolve all of the defects in time for the original launch. Too little, too late to be on time and on budget. Once the issues are resolved, they need to be tested again. Now the SIT cycle gets pushed. The never-ending cycle of crap hitting the fan continues. Also known as the “most stressful part of the year, when we migrate software to production” - typically just before the start of the subsequent fiscal year.

SIT has its own share of issues, and they are predictable. It is likely that environment access is not available while the product is being created, so the team has to rely on service stubs (essentially mock interfaces for the services that will become available once the product goes into the QA environment). That’s about as effective as developing with a blindfold on and, unless you’re the next American Ninja, ends with terrible integration pains once SIT starts.

Let me explain why. Stubs, by definition, imply that requirements for all web service calls are defined at the beginning of the project and locked in with detailed documentation on data types, field lengths, and data formatting. You and I both know that a) detailed requirements are evil, and b) scope and requirements change throughout delivery. By the time we’re halfway through the project, the stubs are about as useful as a spoiler on a Civic. Furthermore, stubs don’t return real data - data that may impact how our application responds - and thus prevent us from building useful integration tests.
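To make the point concrete, here is a minimal sketch of a service stub; the endpoint shape, field names, and canned values are invented for illustration.

```python
# A service stub: a hard-coded stand-in for a dependency that is not
# yet reachable from the dev environment. Fields/values are invented.
def get_account_balance_stub(account_id):
    """Returns the same canned response for every account. Fine on
    day one, but it never drifts with the real contract and never
    returns real data, so tests against it prove very little."""
    return {"account_id": account_id, "balance": 1234.56, "currency": "USD"}

response = get_account_balance_stub("ACC-001")
print(response["balance"])  # the same answer, for every account, forever
```

Any integration test built on this will pass no matter what the real service does - which is exactly why SIT against the live dependency comes as such a shock.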

There are several tiers of integration you can bring to your delivery strategy - pick one that is easiest to pull off within your organization.

Integrate UAT into your delivery sprints. Have the QA team join the sprint-based delivery schedule and execute tests on fully functional software. While this may seem like more work, it actually reduces churn significantly, since issues are resolved quickly and included within the acceptance of the sprint. Using the same toolset (such as Jira) also allows the team to quickly map defects to user stories and validate the acceptance criteria used, avoiding the disconnected practice of tracking defects in an enterprise quality assurance system that is in no way visible to the delivery team.

QA automation costs - agile

Use development and staging environments with access to all dependencies. Similar to the above, there may be an initial cost to set up the infrastructure, but the QA cost savings are massive over the lifecycle of the product. Use stubs only when a service is being created in parallel with the product, and even then make the service available as soon as it has functional calls.

Introduce unit tests and inclusive code coverage. A unit test validates the smallest testable part of an application. It independently scrutinizes small functional blocks of the product for proper operation, reducing the quantity of manual testing long term. An automated testing strategy is another area that introduces up-front costs into the project, but it severely decreases manual quality assurance involvement throughout delivery. Unit tests are written to pass or fail based on the acceptance criteria within the user story in your backlog. Each time code is checked into the centralized repository, the build server (e.g. TeamCity) performs the build and executes the tests. If tests fail, the engineer’s changes are not accepted, automatically preventing defects.

Escaped defect cost
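As a minimal illustration, assume a hypothetical funds-transfer story whose acceptance criteria are “a transfer reduces the balance” and “overdrafts and non-positive amounts are rejected”. Unit tests encoding those criteria might look like this:

```python
import unittest

def transfer(balance, amount):
    """Debit `amount` from an account balance. The rules below encode
    the hypothetical story's acceptance criteria."""
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TransferAcceptanceCriteria(unittest.TestCase):
    def test_transfer_reduces_balance(self):
        self.assertEqual(transfer(100, 30), 70)

    def test_overdraft_is_rejected(self):
        with self.assertRaises(ValueError):
            transfer(100, 500)

    def test_non_positive_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            transfer(100, 0)

# On every check-in the build server would run the suite:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(
    TransferAcceptanceCriteria)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the tests mirror the acceptance criteria in the backlog, a red build is a direct, automated signal that a story no longer meets its own definition of done.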

Next Steps

An agile transformation for a large, heavily regulated institution with an established enterprise delivery framework will take time and grit to execute. You’re turning an aircraft carrier around, after all.

There are several critical topics that I did not cover in this article, such as the effect of culture on team collaboration, the transition of roles from waterfall to agile, product budgeting practices, and delivery metrics. Our experience has shown, however, that you have to start somewhere, and these five mechanical, process-based changes can have a massive impact on both lowering costs and increasing delivery velocity.

Select products that are not part of the industrialized core. Pick initiatives that are customer-facing - products you can iterate on quickly without deep architectural changes to the bank’s core banking platforms. I’ve seen this called the “Agile Edge”: a special group within the organization dedicated to innovation and rapid delivery.

A revolution needs its muscle from the bottom up, yet tactical support from the top down. Consider picking team members who are not politically tied to the “way things used to be” and provide senior executive sponsorship to push the agenda forward on environment setups, tool adoption, and process change. The team will run into a lot of resistance and naysaying, but the results will speak for themselves.

Aurimas Adomavicius
