Step 5: Integrate your QA and delivery teams
Separate teams often handle delivery and quality assurance. The intent, as I understand it, is to remove the conflict of interest that arises when a delivery team misrepresents the actual quality of the deliverable or does not test the application thoroughly enough. It also provides an additional layer of testing and assurance for software applications of a sensitive nature.
More often than not, a project plan will include User Acceptance Testing (UAT) and System Integration Testing (SIT) phases somewhere near the end of delivery. During UAT, testers run the product through specific scenarios that mimic production use (e.g., a user checks their machine status by tapping "Report Status," a user stops a job and receives an SMS confirmation, etc.). SIT deals with the technical integrations and looks at the product as a whole: the dependencies, the integrations, authentication handlers, underlying APIs, etc.
The problem with this approach is that a project manager who plans this waterfall exercise in pursuit of better quality has absolutely no control over the outcome. Let's pretend for a minute that the delivery team did a mediocre job. QA spins up its defect Gatling guns, and the delivery team is unable to resolve all of the defects in time for the original launch. Too little, too late to be on time and on budget. Once the issues are resolved, they need to be tested again. Now the SIT cycle gets pushed. The never-ending cycle continues. This is also known as the "most stressful part of the year, when we migrate software to production," which typically falls just before the start of the subsequent fiscal year.
[Figure: QA cost increases through the product lifecycle; investment in QA automation saves costs.]
SIT has its own share of predictable issues. Environment access is often unavailable while the product is being created, so the team has to rely on service stubs (essentially mock interfaces for the services that will become available once the product reaches the QA environment), which is ineffective.
Stubs, by definition, imply that the requirements for all web service calls are defined at the beginning of the project and locked in with detailed documentation on data types, field lengths, and data formatting. Detailed requirements are evil: scope and requirements change throughout delivery, and by the time you're halfway through the project, the stubs are useless. Furthermore, stubs don't return real data, the kind of data that may affect how the application responds, and thus they prevent you from building useful integration tests.
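To make the stub problem concrete, here is a minimal sketch, assuming a hypothetical machine-status service like the one in the UAT scenarios above (the names and fields are illustrative, not from any real API). The stub is frozen to the contract that was documented at project start and always returns the same canned record:

```python
# A minimal service-stub sketch. MachineStatus and
# MachineStatusServiceStub are hypothetical names for illustration.
from dataclasses import dataclass


@dataclass
class MachineStatus:
    machine_id: str
    state: str          # e.g. "running" or "stopped"
    uptime_hours: float


class MachineStatusServiceStub:
    """Stub locked to the contract documented at project start.

    It always returns the same canned record, so tests written against
    it never see the variety (or the surprises) of production data, and
    it silently drifts out of date as the real contract changes.
    """

    def get_status(self, machine_id: str) -> MachineStatus:
        # Canned response: no real integration is exercised here.
        return MachineStatus(machine_id=machine_id,
                             state="running",
                             uptime_hours=42.0)
```

Every caller of this stub gets a "running" machine with identical uptime, which is exactly why integration tests built on it tell you so little.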
There are several tiers of integration you can bring to your delivery strategy. Pick one that is easiest to pull off within your organization.
Additionally, integrate UAT into your delivery sprints. Have the QA team integrate into the sprint-based delivery schedule and execute tests on fully functional software. While this may seem like more work, it actually reduces churn significantly since issues are resolved quickly and are included within acceptance of the sprint. Using the same toolset (such as Jira) also allows the team to quickly map defects to user stories and validate the acceptance criteria used, avoiding the disconnected practice of tracking defects in an enterprise QA system that is virtually invisible to the delivery team.
Use development and staging environments with access to all dependencies. Similar to the above, there may be an initial cost to set up the infrastructure, but the QA cost savings are massive over the lifecycle of the product. Use stubs only when a service is being created in parallel to the product and even then make the service available as soon as it has functional calls.
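One way to honor the "stubs only while the real service is being built" rule is to make the switch a matter of configuration, so the team flips to the real dependency the moment it has functional calls. A sketch, assuming a hypothetical `STATUS_API_URL` environment variable and the illustrative machine-status service names used above:

```python
# Sketch: select the real client or the stub via configuration.
# All class and variable names here are hypothetical illustrations.
import os


class MachineStatusStub:
    """Canned responses for early development only."""

    def get_status(self, machine_id: str) -> dict:
        return {"machine_id": machine_id, "state": "running"}


class MachineStatusClient:
    """Talks to the real service once it exists in the environment."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_status(self, machine_id: str) -> dict:
        # The real HTTP call would go here (e.g., via urllib.request).
        raise NotImplementedError("wire up once the service is deployed")


def make_status_service():
    """Return the real client when the service URL is configured,
    otherwise fall back to the stub."""
    base_url = os.environ.get("STATUS_API_URL")
    if base_url:
        return MachineStatusClient(base_url)
    return MachineStatusStub()
```

Because both classes expose the same `get_status` call, swapping the stub out requires a configuration change rather than a code change.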
Introduce unit tests and inclusive code coverage. A unit test is a test that validates the smallest testable part of an application. It independently scrutinizes small, functional blocks of the product for proper operation, reducing the quantity of manual testing over the long term. An automated testing strategy is another area that introduces up-front costs into the project, yet it severely decreases manual QA involvement throughout delivery. Unit tests are written to pass or fail based on the acceptance criteria within the user story in your backlog. Each time code is checked into a centralized repository, the build server (e.g., TeamCity) performs the build and executes the tests. If the tests fail, the engineer's changes are rejected, automatically keeping defects out of the shared codebase.
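As a sketch of what "unit tests written against acceptance criteria" can look like, here is a test for the earlier UAT scenario "a user stops a job and receives an SMS confirmation." The `stop_job` function and SMS gateway are hypothetical stand-ins invented for this example:

```python
# Sketch: a unit test derived from an acceptance criterion.
# stop_job and the sms_gateway interface are hypothetical examples.
import unittest
from unittest.mock import MagicMock


def stop_job(job: dict, sms_gateway) -> dict:
    """Stop a running job and send an SMS confirmation to its owner."""
    if job["state"] != "running":
        raise ValueError("only running jobs can be stopped")
    job["state"] = "stopped"
    sms_gateway.send(job["owner_phone"], f"Job {job['id']} stopped")
    return job


class StopJobTest(unittest.TestCase):
    """Acceptance criterion: stopping a job sends an SMS confirmation."""

    def test_stopping_running_job_sends_sms(self):
        sms = MagicMock()
        job = {"id": "J-1", "state": "running", "owner_phone": "+15550100"}
        result = stop_job(job, sms)
        self.assertEqual(result["state"], "stopped")
        sms.send.assert_called_once()

    def test_cannot_stop_idle_job(self):
        with self.assertRaises(ValueError):
            stop_job({"id": "J-2", "state": "idle", "owner_phone": ""},
                     MagicMock())
```

The build server would run a suite like this on every check-in; a failing assertion maps directly back to the user story whose acceptance criterion it encodes.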