An alternative approach to conducting a heuristic evaluation

In a prior post, I detailed how to leverage user feedback to build the right software products while making sure you’re also building them the right way. But when we’re looking to specifically improve on an existing digital product or application design, one of the first things we do before involving users is to conduct a heuristic evaluation.

A heuristic evaluation is a great way to uncover existing issues and start fixing problems before you test your product with users. In this post, I outline our approach for conducting a heuristic evaluation and how we recently applied it to a client’s specific circumstance.

A heuristic evaluation is a practical way to solve a problem

People use heuristics all the time in their everyday lives (often unknowingly) to accelerate the process of finding solutions. Generally, people are looking for solutions that are "just good enough," making use of mental shortcuts to minimize their decision-making efforts. We commonly refer to these as “rules of thumb.”

Some of the most practical heuristics are strategies derived from previous experience. In the world of design, we refer to these types of heuristics as patterns: reusable solutions that have been identified to address commonly occurring problems. Like a seamstress who sews from an established pattern rather than drafting each garment from scratch, the idea is that we can leverage cumulative industry knowledge to apply solutions that are known to work so we don’t always reinvent the wheel.

This was the basis for what we now know as “usability”: a branch of software engineering that emerged in the early 90s thanks to Jakob Nielsen, a software consultant and user advocate. Nielsen identified a number of commonly occurring problems in how we use software interfaces and showed how fixing them would significantly improve the extent to which a product is usable.

Evaluating a product against industry standards

Nielsen’s Heuristics (or the “10 Usability Principles,” as they’ve come to be known) gave us the first measuring stick for objectively evaluating a user interface. Nielsen’s idea was that by evaluating a product against these heuristics early in the design process, we could reduce the number and severity of issues that users might find later on during testing.

At Devbridge, we’ve taken Nielsen’s idea a step further and developed a framework, grounded in design patterns and heuristics, that allows us to quickly and easily rate a product against a much more comprehensive set of criteria. Our framework includes about 50 dimensions broken out across the following categories:

[Figure: Heuristic evaluation chart]
  • Information architecture: How information is organized and presented to the user. This includes dimensions related to structure, organization, language, and help.

  • Interaction design: How interactivity is built into and represented by the product. These dimensions relate to navigation, functionality, interactive touchpoints, and error handling.

  • Visual design: How visual design reinforces architecture and interaction. Dimensions here include visual language, representation of interactivity, layout, color, typography, iconography, and graphics.

Designers can use our framework to objectively evaluate each dimension on a scale of 1-5, with 1 being very unsatisfactory and 5 being very satisfactory.

By evaluating products granularly, we’re able to pinpoint the source of each issue as well as gauge how much it affects the user experience. Small issues tend to impact one or two dimensions (e.g., input labels not being clear), while big issues often affect multiple dimensions across multiple categories (e.g., the product is difficult to navigate).
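To make the rating mechanics concrete, here’s a minimal sketch of how a granular evaluation like this might be recorded and summarized. The three categories and the 1–5 scale come from the framework described above; the specific dimension names, the DimensionScore structure, and the threshold for flagging a dimension are illustrative assumptions rather than our actual tooling.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean


@dataclass
class DimensionScore:
    category: str    # e.g., "Information architecture"
    dimension: str   # hypothetical dimension name
    score: int       # 1 (very unsatisfactory) to 5 (very satisfactory)


def summarize(scores):
    """Group scores by category, report the average, and flag low-scoring dimensions."""
    by_category = defaultdict(list)
    for s in scores:
        by_category[s.category].append(s)

    for category, items in by_category.items():
        avg = mean(item.score for item in items)
        print(f"{category}: average {avg:.1f}")
        for item in items:
            if item.score <= 2:  # assumed threshold for "needs attention"
                print(f"  needs attention: {item.dimension}")


# Hypothetical scores for a handful of dimensions
summarize([
    DimensionScore("Information architecture", "Clarity of labels", 2),
    DimensionScore("Interaction design", "Discoverability of expandable rows", 1),
    DimensionScore("Interaction design", "Error handling", 4),
    DimensionScore("Visual design", "Typography", 4),
])
```

Summarizing by category makes it easy to see whether an issue is isolated (one low-scoring dimension) or systemic (low averages across an entire category).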

We recently used this framework with a client that asked us to evaluate one of its products. The product was in production and had been developed with almost no design oversight. Although a heuristic evaluation would ordinarily help inform the starting point for a new version of a product, here we used it to inform and prioritize progressive refinements.

In this product, the team had made extensive use of accordions for a user to navigate through data. The idea was that a user would be able to drill down to see more information by progressively opening accordions. However, our heuristic evaluation uncovered several issues:

  • It wasn’t evident that rows could expand

  • Row labels were ambiguous

  • The interaction to get to meaningful data was convoluted and difficult to navigate

  • It was difficult for users to compare information

In the end, the purpose of the page had been obscured, and the value the product could provide was significantly diminished. The heuristic evaluation helped call out specific issues that, if fixed, could alleviate the problem. The analysis also uncovered a larger issue: the specific interaction mechanism may not have been the most appropriate choice in the first place.

The results of the analysis gave us a pretty good idea of what changes needed to be added to the backlog and how we should prioritize them.

Deciding when to do a heuristic evaluation

I mentioned earlier that we’d ordinarily start with an analysis of an old product when we’re working on a new version. However, in the example above we were able to leverage our heuristic evaluation framework while the product was already in production. Because a heuristic evaluation is so quick and inexpensive, there’s no reason for a product team not to use it to make sure they’re on the right track.

It’s important to make a distinction, however: how usable a product is and how useful it is are two completely different things. In other posts about user research, we’ve talked about leveraging user research to uncover unmet needs and to identify opportunities for new features. A heuristic analysis can’t produce insights of that sort, since it primarily evaluates what’s there, not what isn’t (or what could be) there.

A heuristic evaluation is no substitute for usability testing

In the case of the product with the accordions, we used usability testing to validate some of our assumptions about the issues and how to solve them. Testing with users confirmed that we had a usable, launchable product, and that some of the other changes we identified could be pushed further down the product’s backlog.

Providing a great user experience is no longer considered a competitive advantage in product development—it’s considered table stakes. At the end of the day, we’re experts in designing products, but not experts at being users. A heuristic evaluation is a great step towards improving a product, but it is no substitute for going out and evaluating your product directly with users.
