The complete guide to test automation

The shortcuts and best practices for building a bulletproof testing automation strategy


Pitfalls of testing after coding

How can test automation be worthwhile and effective? To help answer the question, let’s first examine the commonly used develop-first, test-later approach, in which tests are written after everything else is done.

The process is as follows:

  1. Implement a change, either as a new feature or a fix to an existing feature.

  2. Manually test the change and identify bugs.

  3. Debug and fix.

  4. Repeat as necessary for all features in a build.

  5. Build or extend an automated test suite.

To the uninitiated, the test-late approach seems a natural extension of a conventional development process. Though seemingly easier, the build-then-test approach has significant disadvantages.

The first impact is on new tests, which are run manually at first. As the team looks to automate, updates are made to existing test cases, and little time is spent on additional edge cases. Tests written at the end of the development phase cover the exact functional footprint of the new code, when it would be far better to reassess what the code should or shouldn’t do. The passing results give the team a false sense of confidence that the code base changes work well while inadvertently overlooking the edge cases.
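As a hypothetical sketch of the footprint problem, consider a small pricing helper (the function and its tests below are invented for illustration). The first test mirrors exactly the happy path that was just implemented; a requirements-driven pass would add the boundary cases the code never had to face.

```python
def apply_discount(price, percent):
    """Hypothetical helper: returns price reduced by percent."""
    return price * (1 - percent / 100)


# Test written to match the code's exact functional footprint: it
# mirrors the happy path that was just implemented and passes at once.
def test_apply_discount_happy_path():
    assert apply_discount(100.0, 10) == 90.0


# Edge cases a requirements-driven review would add: 0% and 100%
# discounts. Note what is still missing: the code silently accepts
# negative or >100 percents and produces inflated or negative prices,
# a question only the business requirement can answer.
def test_apply_discount_edges():
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
```

Both tests pass, yet the suite still says nothing about invalid inputs, which is precisely the false confidence described above.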

Something more subtle is also at work. Immediately after the new tests are run manually and pass, confidence increases because the changes appear to work as expected. The inclination at this point is to focus only on functional coverage and to minimize verification against business requirements.

Pausing to think carefully, several risks that jeopardize test automation success become apparent at this point.

  • The implementation is likely to have an undue influence on test construction and maintenance.

  • No other tests are likely to be written beyond what is necessary to reach the coverage goal.

  • There will be a tendency to selectively write tests only for trivial or well-designed parts of the system.

Implementation-driven tests are inadequate

When tests are derived directly from the implementation, the tendency is to focus on each class and function instead of on module boundaries. Such tests rely on implementation details rather than behavior expectations at the module level. The test writer wants to verify correct functionality but often gives little thought to the corresponding business requirement(s), so the tests are driven mainly by the code. The approach results in a test suite tightly coupled to the system.

Problems arise: test design becomes hard to understand, and the tests don't represent an accurate specification of the system. Moreover, tightly coupled tests may fail in the future when the code changes, even if user-facing or integration behavior doesn't change, so false positives multiply. Taken together, these issues inevitably increase maintenance effort and produce a bloated, sluggish test suite. Likewise, postponing test writing decreases confidence in the test suite's efficacy, since there is no assurance that the tests cover all of the critical cases.
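A minimal invented example makes the coupling concrete. The `ShoppingCart` class below is hypothetical; the first test asserts on a private field and breaks under any internal refactoring, while the second asserts only on observable behavior.

```python
class ShoppingCart:
    """Minimal cart used to contrast the two testing styles."""

    def __init__(self):
        self._items = []  # implementation detail: a list of (name, price)

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)


# Tightly coupled: asserts on the private list, so it fails if the
# cart later switches to, say, a dict keyed by item name, even though
# user-visible behavior is unchanged. That failure is a false positive.
def test_add_coupled():
    cart = ShoppingCart()
    cart.add("book", 10.0)
    assert cart._items == [("book", 10.0)]


# Behavior-level: asserts only through the public interface, so it
# survives any internal refactoring that preserves the behavior.
def test_add_behavior():
    cart = ShoppingCart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 12.5
```

Only the second style yields tests that double as an accurate, maintainable specification of the module boundary.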

The result is an incomplete regression test suite and an incomplete test-based specification, which can offer only a partial assessment of the system design. In addition, there is a ubiquitous tendency to do what's easiest, especially for teams with a heavy workload. Frequently, that means testing only the code that requires the least effort, and such code is often relatively simple or soundly designed. Consequently, taking the easy route leaves the most complex and poorly designed modules without viable regression tests. The test-based system specification will be deficient in exactly these areas, and opportunities for design improvements will go unidentified.

Issues arise when adding tests after feature development

There is also a strong tendency to write tests only after the feature is completely developed. If any of those tests then indicate design flaws, some of the issues likely won't be fixed: any non-trivial redesign demands significant effort, including adding more tests, rewriting existing ones, and further testing.

Consequently, teams come to rely on tools that enable the testing of flawed designs; there is already a market for tools that test private methods and mock private fields, for example. Tests written at the end of the development cycle therefore tend to produce no significant improvement in system design.
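A hedged sketch of that pattern, using Python's standard `unittest.mock` (the `ReportGenerator` class is invented): patching a private helper lets the test pass without ever questioning why the class hides a hard-wired dependency in the first place.

```python
from unittest.mock import patch


class ReportGenerator:
    """Hypothetical class with a hidden data dependency."""

    def _fetch_rows(self):
        # Private helper doing the real work; unreachable in this sketch.
        raise RuntimeError("no database available")

    def summary(self):
        rows = self._fetch_rows()
        return f"{len(rows)} rows"


# Tool-assisted test of a flawed design: patch.object replaces the
# private method on this instance, so the test passes while the
# design flaw (an untestable hidden dependency) goes unchallenged.
def test_summary_by_patching_private():
    gen = ReportGenerator()
    with patch.object(gen, "_fetch_rows", return_value=[1, 2, 3]):
        assert gen.summary() == "3 rows"
```

A design-driven alternative would instead pass the row source in through the constructor, making the dependency explicit and the mock unnecessary, which is exactly the kind of improvement test-late development discourages.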

Consider the implications for new feature development, in which test automation begins only once a team has achieved sufficient confidence in system quality, primarily via manual testing. Automating the test suite then does nothing to reduce manual testing effort; it doubles the testing effort while merely preparing the suite as an investment in a future payoff that may never materialize.

With automated testing, the team may receive quality feedback more quickly for changes to existing functionality. However, as already noted, the team likely won't have much confidence in the test suite. Weighing the extra effort against no significant increase in confidence, many teams begin to regard automation as a poor investment.

When a team begins to automate testing, it soon sees how inefficient it is to wait to write tests until after the developers have written and manually tested the code. Both QA testers and developers tend to delay until the last practicable moment; typically, work on the unit and integration tests begins only after development is entirely complete.

Realizing that significant effort will be necessary to automate tests after development is finished, the team perceives a high risk that it won't achieve significant benefits. The hope is that a test suite built (or extended) only after the development phase will provide all the benefits of automated testing; in reality, many teams gain only a few of those benefits while expending an excessive amount of effort.
