The importance of usability testing on products
Whether you are a product manager, product designer, or engineer, getting to conduct usability testing is something every product team member looks forward to. It provides authentic, unfiltered feedback in real time from first-time users, and it offers the opportunity to validate that the product or feature satisfies a user's needs, or to gain insight into the product's shortcomings. Usability testing is rooted in observation. Product teams are challenged to make sense of the observations and then identify a value proposition they can take to stakeholders to validate or justify product enhancements.
Our team recently had the opportunity to conduct usability testing on a soon-to-launch rebuild of a legacy application. Armed with the test environment, scenarios, and test script, we set out to do our research. After two successful days of testing, we excitedly dove into the notes and observations and quickly began to feel overwhelmed. Initially, we struggled to turn our notes into quantifiable information to present to stakeholders. How would we take commentary like “the user got confused by an icon” or “the user was delighted with this feature” and use it to prioritize recommendations? Here's what we found.
The 4-step usability testing action plan
The next time you are digging through user comments, feedback, and actions, try replicating the following four steps to cut through the noise and get to the actionable feedback.
1. Identify critical tasks and assign each a value.
Usability testing starts with a script: an interviewer from the product team asks the interviewee to perform various tasks while interacting with the prototype. As the interviewer, observe and note the user’s actions, movements, and thought process during each task. To begin compiling meaningful data on these observations, determine how critical each task is to the business or the user.
For example, viewing a dashboard might be assigned a lower value of 1 or 2 (lower criticality), while hitting ‘submit’ on a form might be assigned a value of 4 or 5 (higher criticality).
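As a sketch of this step, the criticality values can be recorded in a simple lookup. The task names and scores below are illustrative assumptions, not data from our study:

```python
# Criticality on a 1-5 scale (5 = most critical to the business or user).
# Task names and values are hypothetical examples.
criticality = {
    "view dashboard": 2,  # informational; a miss here is recoverable
    "submit form": 5,     # core task; failure blocks the user's goal
}

print(criticality["submit form"])  # -> 5
```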
2. Establish typical patterns (actions) the user takes to accomplish the task. Then weight each action by its effect on the user's ability to complete the task.
Call this the “action effect” for short: how does the user’s action impact their ability to accomplish the given task? Building on the criticality example above, record an action effect alongside each observed pattern.
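Continuing the sketch, the action effect can sit next to each (task, pattern) pair. The patterns and weights here are again illustrative assumptions:

```python
# Action effect on a 1-5 scale:
#   1 = the action let the user complete the task effortlessly,
#   5 = the action prevented the user from completing the task.
# Patterns and weights are hypothetical examples.
action_effect = {
    ("submit form", "completed without hesitation"): 1,
    ("submit form", "hesitated at the unclear icon, then completed"): 3,
    ("submit form", "abandoned the form"): 5,
}

print(action_effect[("submit form", "abandoned the form")])  # -> 5
```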
Slowly but surely, adding a quantitative value to the observed qualitative data (user actions) leads to strategic insights into the product and quickly identifies areas for improvement.
3. Input the frequency of users that display similar actions.
Frequency connects each observation to the broader result: count how many users displayed the same action for a given task.
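Tallying the frequency can be as simple as counting which pattern each participant displayed. The observations below are made up for illustration:

```python
from collections import Counter

# One entry per test participant: the action pattern they displayed
# on the "submit form" task (hypothetical data).
observed = [
    "completed without hesitation",
    "completed without hesitation",
    "hesitated at the unclear icon, then completed",
    "hesitated at the unclear icon, then completed",
    "hesitated at the unclear icon, then completed",
    "abandoned the form",
]
frequency = Counter(observed)
print(frequency["hesitated at the unclear icon, then completed"])  # -> 3
```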
4. Calculate the severity for each task users complete.
The final step in turning qualitative observation into quantitative data is to take the product of the three data points. Calculate severity by multiplying Criticality x Action effect x Frequency.
The range of possible severity scores depends on how each data point is scaled. Generally, I recommend a 1-to-5 scale for both criticality and action effect. Lower severity scores are better: in an ideal world, even a highly important task is accomplished by all users (frequency) with no issues (a low action effect), which keeps severity down.
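Putting the three data points together, a minimal end-to-end sketch might look like this. All task names, patterns, and scores are illustrative; the 1-to-5 scales follow the recommendation above:

```python
# Severity = Criticality x Action effect x Frequency, computed per
# observed action pattern so problem areas rank themselves.
criticality = {"submit form": 5, "view dashboard": 2}

# (task, pattern) -> (action effect 1-5, number of users who showed it)
observations = {
    ("submit form", "completed without hesitation"): (1, 2),
    ("submit form", "hesitated at the unclear icon"): (3, 3),
    ("submit form", "abandoned the form"): (5, 1),
    ("view dashboard", "found the metric immediately"): (1, 6),
}

severity = {
    (task, pattern): criticality[task] * effect * freq
    for (task, pattern), (effect, freq) in observations.items()
}

# Highest severity first = first candidates for improvement.
for (task, pattern), score in sorted(severity.items(), key=lambda kv: -kv[1]):
    print(f"{score:3d}  {task}: {pattern}")
```

With this made-up data, the icon hesitation on the form ranks first (5 x 3 x 3 = 45), ahead of the single abandonment (5 x 5 x 1 = 25), which matches the intuition that a widespread stumble on a critical task deserves attention.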
Within product teams, it is easy to become overwhelmed by the number of comments and suggestions about the entire application(s) or particular feature sets.
Take a step back, identify common trends, and assign a quantitative value to the patterns and actions you observe. The data becomes much easier to decipher: it’s easier to spot where users are struggling and where they are succeeding. With the four steps outlined above, prioritization becomes a breeze, setting the product team on the right track to roll out the next feature or improvement that delights the end user.