Engineering

5 ways to boost UI automation effectiveness

UI automation is rapidly becoming mainstream practice. It’s no longer a mystical tool that no one knows how to implement. However, teams still struggle to make it effective. It doesn’t have to be a negative experience: there are 5 common issues you can address and solve, making UI automation more effective and cheaper to maintain.

Reduce random failures to zero

Test automation is often part of continuous integration (CI), executed after each new build or once every night. When tests fail due to framework issues, application specifics, or animation and page-loading timeouts, the resulting failure emails give UI automation a bad rap. These hiccups can demoralize team members who rely on automation results or are trying to add more coverage. Over time, this fuels even more errors, as people start ignoring failed test cases.

Automation should not produce random failures. In fact, false positives should be reduced to zero.

How can you net zero failures?

Investigate and address each random failure. First, create a specific fix and avoid sleep or any other kind of uncertain fix. Next, rewrite unstable functionality. Even though it's not best practice, you may want to consider a loop that checks whether the element exists for x seconds before giving up. Then, wait for the page to fully load or the animation to finish before moving on to assertions or any other actions. Usually, the hardest part is identifying the root cause of the test failure; fixing it takes less time.
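The "check for x seconds before giving up" idea can be sketched as a small polling helper. This is a minimal, framework-agnostic sketch; the function name and defaults are my own, not from any particular library:

```javascript
// Poll for a condition instead of a fixed sleep: re-check every `interval` ms
// until `timeout` ms have passed, then give up.
async function waitFor(isPresent, { timeout = 5000, interval = 200 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await isPresent()) return true; // condition met; stop waiting early
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  return false; // condition never became true within the timeout
}
```

Compared with a hard-coded sleep, this returns as soon as the element appears, so stable runs stay fast while slow runs still get the full grace period.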

Rewrite the test case. Sometimes, the issue is in the workflow of the test case itself, and rewriting it might help. For example, we once had a test case that searched for customers by accounting ID. The problem was that this field was not displayed in the search screen, so we had to navigate through each result's info view to ascertain the accounting ID. Because the workflow was unstable, it failed once in every 10 runs. Instead, we rewrote it to assert that the list of customers contains 5 predefined customers with specific accounting IDs. This not only made the test case stable but also cut execution time by 80%.
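The rewritten assertion boils down to a pure check over the result list. The data shape and names below are hypothetical, just to illustrate the pattern:

```javascript
// Hypothetical result shape: an array of { name, accountingId }.
// Instead of opening each result's info view, assert directly that the
// returned list contains every expected accounting ID.
function containsExpectedAccounts(results, expectedIds) {
  const ids = new Set(results.map((customer) => customer.accountingId));
  return expectedIds.every((id) => ids.has(id));
}
```

A single assertion over predefined data replaces a long chain of navigation steps, which is exactly where the stability and speed gains came from.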

Inject custom locators to the application

If you’ve ever written automated UI tests, you’re probably familiar with XPath, CSS selectors, element chaining, and selecting the n-th element. I personally dislike them; I find they’re cheap ways to locate elements.

Example

This example uses a very common locator, [class="btn"]. This can match any button on the page, and more buttons might be added in the future. Another option in this example uses the power of XPath and shows how flexible it is. However, a chained element locator featuring 26 other elements can be extremely unstable: if any of those elements change, the chain stops working.

Instead of struggling with these issues, inject custom locators into the application itself. These work independently of code changes, layout changes, and responsive design. There’s no need to rely on other page elements or labels. Plus, custom locators keep test code much cleaner, which is a more sustainable solution for the project.

Adding locators does require some basic HTML knowledge. (However, if you are familiar with writing E2E code, this shouldn’t be a problem.) You do need to agree with developers on where to inject these custom locators. Simply ask developers to help you out. Then, add these locators so they’re available when the feature is deployed.
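For illustration, an injected custom locator might look like this. The attribute name data-test-id is one common convention, not a requirement; agree on yours with the developers:

```html
<!-- Developers add a dedicated, stable test attribute to the element -->
<button class="btn" data-test-id="save-customer">Save</button>
```

The test code then selects [data-test-id="save-customer"], which keeps working through class renames, layout changes, and responsive breakpoints.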

Restore the database for test setup and teardown

In general, there are 5 approaches to managing test data effectively.

  • Don’t use any test data.
  • Use the same E2E steps to prepare data for tests.
  • Use API calls to set up data.
  • Use SQL scripts to set up data.
  • Use a DB backup and restore it before each run.

You should consider the benefits of using database restore to prepare all test data. 

Fast and reliable test data preparation

There’s a very slim chance that a database restore will fail. In one case, we kept our test data backup light to decrease the time it takes to back up and restore. As a result, it now takes around 16 seconds to restore and less than 3 minutes to back up the database.

Easy test data management

If you want to add test data, you only need to:

  • Know how the application works.
  • Create data manually.
  • Click the backup button to save it for your next restore. 

A couple of notes on easy test data management: there’s no need for SQL or API knowledge, or to reuse E2E test steps to create test data. And since you get a fresh start before each run, there’s no need to worry about polluting the database and degrading performance over time.

Less maintenance in the long run

If you apply the same database changes to both the UI test database and the manual testing database, the data will always be up to date with the latest changes. There’s no need to maintain API calls that might be deprecated, changed, or extended during development; applying the changes at the database level side-steps that maintenance entirely. The same principle applies to SQL scripts.

Invest in infrastructure for E2E testing

Are you leveraging DevOps methodology to:

  • Automate boring and repetitive tasks?
  • Reduce human error probability?
  • Speed up workflows?
  • Work toward self-sufficient test automation?

With DevOps, teams can achieve all of the above and more! 

We run restore and backup procedures with one click. E2E tests run automatically on every new deployment to the test environment. We go a step further and have database routing implemented, which allows us to use one environment for multiple purposes by routing users to specific databases.

 

Imagine a test environment used for manual testing, automated testing, and development, all happening at the same time without any overlap. In this example, we redirect some users to specific databases. All common users are redirected to DB 1, but the specific users reserved for automation are redirected to DB 2. Thus, we can restore DB 2 without other users of the same environment ever noticing. Reducing the number of environments also lowers costs.
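The routing idea reduces to a lookup at connection time. A minimal sketch, with hypothetical usernames and database names (not our actual configuration):

```javascript
// Designated automation users are transparently routed to the automation
// database; everyone else shares the main one. Restoring db2_automation
// therefore never disturbs common users of the same environment.
const AUTOMATION_USERS = new Set(['e2e-bot', 'ui-test-runner']); // hypothetical

function resolveDatabase(username) {
  return AUTOMATION_USERS.has(username) ? 'db2_automation' : 'db1_main';
}
```

In practice this lookup would live wherever the application opens its database connection, keyed off the authenticated user.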

When we develop automated UI test cases, we often need to restore the database. As a result, we keep separate databases for UI test development. By using a specific user, you can be redirected to the right database to run tests, restore, and back up with minimal disruption.

Keep in mind, you can replicate this approach in staging (regression, UAT, and performance testing) and production (end user, smoke, and performance testing) environments. All of the above is achieved by leveraging DevOps. It’s the result of collaboration instead of having different disciplines working separately.

Promote clean code practices

Treating UI automated test code as inferior to production code results in technical debt, readability and maintenance problems, and other issues. Too often, people overcomplicate code and overuse advanced functionality until it becomes too hard to read and maintain. To avoid these mishaps, follow clean code practices.

Reading code should be pleasant. The core indicators of readable code are:

  • Self-explanatory variable and function names that make comments unnecessary
  • Functions that are focused, short, and reusable
  • Consistent patterns, coding style, and conventions across the project
  • Regular code reviews
  • The DRY rule
  • The KISS principle

Example 1: Illustrates how overcomplicated code can be simplified. 

Example 2: Features two lodash map functions and multiple transformations on the same line, making it challenging to understand which data transformations are happening and what is returned as a result.
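A hedged illustration of the same pattern, using native Array.prototype.map in place of lodash and made-up order data:

```javascript
// Hard to follow: nested maps and several transformations crammed into one line.
const totalsCramped = (orders) =>
  orders.map((o) => o.items.map((i) => i.price * i.qty).reduce((a, b) => a + b, 0));

// Easier to read: name each transformation so the intent is obvious.
function itemTotal(item) {
  return item.price * item.qty;
}

function orderTotal(order) {
  return order.items.map(itemTotal).reduce((sum, total) => sum + total, 0);
}

function orderTotals(orders) {
  return orders.map(orderTotal);
}
```

Both versions return the same result; the second one just makes each step reviewable and reusable on its own.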

These instances are classic sources of system bugs and complicated maintenance. Complexity should be avoided at both the UI automation and application levels. There are many ways to write code, but in most cases, the best option is simplicity.

In summary

By following these 5 suggestions, you should end up with stable and robust UI automation. Not every suggestion may apply to your specific project, but they should give you more options for your next implementation.