A Software Tester's Guide To Usability

Published in The Testing Planet

By Claire Reckless

A Holistic Look At Usability Testing 

Usability is an important quality to consider when building software for your users. An application built using cutting-edge tech, offering the latest features, generally won’t be well received if the usability is poor. We’ve all been annoyed by updates to apps we rely on, where the functionality we use all the time is now buried in a rabbit hole of buttons and links.

Think about your favourite website or app. Why is it your favourite? Because it performs a useful function? How about because of its usability?

Let’s take a look at what usability testing is, what it is not, and how it can be performed.

Defining Usability

Usability can be defined as:

The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
- ISO 9241-11  

In practical terms, usability is about how effectively the user can perform the necessary tasks within the software, and whether any unnecessary obstacles prevent them from doing so.

It can be thought of as comprising five quality components:

  • Learnability: The design should be intuitive. How easy is it for users to accomplish
    basic tasks the first time they use the product? Do they make any errors? Do they get frustrated with the application?

  • Efficiency: Once users have learned the design, how quickly can they perform tasks? If they struggle with what should be a straightforward task, why is this? Can the number of clicks a user has to make to complete a task be reduced?

  • Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?

  • Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?

  • Satisfaction: How pleasant is it to use the design? Does the user like using the application?

It’s important to distinguish usability testing from other similar-sounding tests and terms. For example, User Acceptance Testing (UAT) and User Experience (UX) sound similar but have different goals and meanings. Let’s look at the distinctions between some of the terms that are often confused with usability testing.

User Experience (UX)

Usability does not equal UX. One cannot assume that, because something is intuitive and easy to use, it offers a good UX. UX is a more holistic view of whether something is useful, usable, and desirable, creating an emotional connection and making a product pleasurable to use. Usability can be thought of as a subset of the whole UX.

User Acceptance Testing (UAT)

The difference between usability testing and UAT can seem a little less clear. Usability testing generally involves the end user and assesses whether there are any obstacles which could be removed to make the product easier to use. An example could be reducing the number of clicks to get to an often-used screen. UAT involves a client stakeholder and serves as more of a gateway to determine if the product meets the overall needs of the client. A product could meet all the predefined Acceptance Criteria set by the business, yet when the end users get their hands on it, they find the product experience painful to use.

A/B Testing

A/B testing is a way of telling which version of a variable on a website performs better by comparing two versions of that variable, hence the name A/B. It’s carried out by presenting different versions of the site to different users. An A/B test might want to answer the question: Does this layout generate more sales than the other?

Usability testing is not the same as A/B testing, though they are similar. A/B testing evaluates the results of alternative versions of something, for example a button or a page layout. It tends to yield entirely quantitative information, often in "live" situations. Usability testing, by contrast, observes behaviour, usually in controlled situations, and generally yields much richer qualitative information from much smaller samples.

Usability testing involving the end user will provide much deeper insights into the reasons behind the results of A/B testing. A/B testing will tell us that a version of a button was used more, but not if it is more usable than the other.
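To make the quantitative side of A/B testing concrete, a common way to judge whether one variant really outperforms the other is a two-proportion z-test on conversion counts. The sketch below uses invented numbers (2,000 visitors per layout) purely for illustration; real A/B tooling would handle this for you.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert at a different rate to A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: 120/2000 sales with layout A vs 156/2000 with layout B
z, p = two_proportion_z(120, 2000, 156, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be chance, but as the article notes, it still won't tell you *why* layout B did better.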


Accessibility

Accessibility is a subset of usability and is a massive part of what makes software usable for more than the average computer user. Not every user can or will use a mouse, and not every user takes in information in the same way. It’s important that what you build is as inclusive as possible and caters for users with a whole range of disabilities. If a large percentage of users cannot physically use your website or application, that’s a huge restriction. A number of software companies follow the Web Content Accessibility Guidelines (WCAG), or complete a Voluntary Product Accessibility Template (VPAT) to evidence how they conform with Section 508 of the U.S. Rehabilitation Act of 1973.

Performing Usability Testing

Who should actually be performing usability testing of an application and how should they be doing it? Testers can play a vital part in testing for usability by using some of the concepts and heuristics employed by usability experts.

The Development Team 

Usability should be considered from the moment work starts on the product, from design to implementation. Testers can assess a design, proof of concept (POC), or prototype and start to think about the issues a user could encounter.

Anyone on the development team can test for usability without conducting a full-blown usability test. A developer or tester can judge that a piece of functionality will be confusing or complicated for a user to understand and log a bug. This can prompt discussions with the wider team about the design. As testers, we try to be advocates for the user, so we should always have them in mind when we are exploring a software product.

Try making use of Tours to explore the product as different kinds of users, determining if you can complete the types of tasks each user would need to be able to complete, and note any potential problems along the way. Personas can really help with this. A persona is a fictional character created to represent a user type that might use a site, brand, or product in a similar way. Consider the goals, behaviour, and skills for the different types of users for your product when testing. Each type might uncover different usability problems. 
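A persona doesn't need to be anything elaborate; even a simple record of goals, behaviours, and skills is enough to drive a tour. The sketch below is one possible shape for this (the personas and their traits are invented for illustration).

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A fictional user type used to guide an exploratory tour of the product."""
    name: str
    goals: list = field(default_factory=list)
    behaviours: list = field(default_factory=list)
    skills: list = field(default_factory=list)

# Invented example personas for a hypothetical ordering system
personas = [
    Persona("New starter", goals=["complete a first order"],
            behaviours=["reads every label"], skills=["basic computer use"]),
    Persona("Power user", goals=["bulk-edit orders quickly"],
            behaviours=["relies on keyboard shortcuts"], skills=["expert"]),
]

for p in personas:
    print(f"Tour the product as '{p.name}', aiming to: {', '.join(p.goals)}")
```

Touring the same feature as the "New starter" and again as the "Power user" will often surface different usability problems, which is exactly the point.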

Think about the accessibility needs of the different types of users who might use your website or application.

  • Will they need to use screen readers such as NVDA if they have a visual impairment?

  • Are all the controls keyboard accessible? Try completely unplugging your mouse and see if you can still use your product!

  • If you use video content, are there captions or sign language?

  • Will the colour scheme work for those who have difficulty distinguishing between certain colours?

  • Do you have any flashing content that might cause seizures?

There are a number of tools available which can help with accessibility testing: you enter a URL or upload a page of HTML, and they provide feedback on violations of different standards. These tools also come as add-ons for browsers like Chrome and Firefox, and will output a list of potential accessibility problems for you to evaluate and act on.
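To illustrate the kind of check such tools perform, here is a minimal sketch (using only Python's standard library) that flags one common WCAG violation: images with no alt text. Real accessibility tools check hundreds of rules; this is just one.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack an alt attribute (WCAG text-alternatives rule)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # Record (line, column) of the offending tag
            self.violations.append(self.getpos())

checker = AltTextChecker()
checker.feed('<p>Hi</p><img src="logo.png"><img src="x.png" alt="X logo">')
print(checker.violations)  # one violation: the first image has no alt text
```

As with the real tools, the output is a list of *potential* problems; a human still has to evaluate each one (for instance, purely decorative images should have an empty alt attribute rather than a descriptive one).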

The drawback of relying solely on developers and testers to assess usability is that there’s likely to be a certain degree of bias within the development team. It’s far harder to think like a real user than you would imagine. In assessing usability, the development team brings a level of technical competence and other knowledge that the user base cannot be guaranteed to have. It’s extremely difficult for a tester to evaluate usability from the same perspective as a user.

User Experience Or Usability Specialist

In my current employment, I’m fortunate to work with a team of really talented User Experience specialists. They work within our agile teams and during the course of development, they will perform their own testing on the features we are developing. Even if you don’t work with someone with specialist skills in usability, you can still apply some of the techniques they use.

A cognitive walkthrough is a formalised way of imagining people’s thoughts and actions when they use an interface for the first time. The usability specialist will ‘walk’ through a task step by step, asking four questions as they go:

1. Will the customer realistically be trying to do this action?

Think about the expectations, knowledge, and experience of the user. Be realistic about what the user can handle. Defining user roles can clarify a user's ability and knowledge.

2. Is the control for the action visible?

This question helps to identify hidden controls and determine if it’s obvious to the user what they need to do.

3. Is there a strong link between the control and the action?

This highlights problems with ambiguous or jargon terms, or with other controls that look like they might be a better choice. It should be clear that a specific control or combination of controls is needed to perform an action. 

4. Is feedback appropriate?

This question helps to uncover problems where feedback when performing actions might be missing, easy to miss, poorly worded, inappropriate, or ambiguous.
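The four questions above lend themselves to a simple recording structure: for each step of the task, the reviewer answers each question and notes any concerns. The sketch below is a hypothetical way to capture that, with an invented "email a purchase order" task as the example.

```python
# The four cognitive walkthrough questions, asked at every step of the task
QUESTIONS = [
    "Will the customer realistically be trying to do this action?",
    "Is the control for the action visible?",
    "Is there a strong link between the control and the action?",
    "Is feedback appropriate?",
]

def walkthrough(task, steps):
    """steps: list of (step description, [yes/no per question], note).
    Returns a finding for every 'no' answer."""
    findings = []
    for step, answers, note in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                findings.append(f"{task} / {step}: FAIL - {question} ({note})")
    return findings

# Invented example: walking through emailing a purchase order
issues = walkthrough("Email purchase order", [
    ("Open the order", [True, True, True, True], ""),
    ("Find the Email button", [True, False, True, True],
     "button hidden in an overflow menu"),
])
print(issues)
```

Each "no" answer becomes a concrete, discussable finding, which is what makes the walkthrough more useful than a general impression that the screen "feels awkward".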

Usability Heuristics

A number of Usability Heuristics exist and you can apply these to determine if the product adheres to accepted usability principles. The following are taken from an example of these, Jakob Nielsen’s 10 Usability Heuristics for Interaction Design:

1. Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time.

2. Match between system and the real world:

The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

3. User control and freedom:

Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

4. Consistency and standards:

Users should not have to wonder whether different words, situations, or actions mean the same thing.

5. Error prevention:

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

6. Recognition rather than recall:

Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

7. Flexibility and efficiency of use:

Accelerators — unseen by the novice user — may often speed up the interaction for the expert user, meaning the system can cater for both inexperienced and experienced users. Allow users to tailor frequent actions.

8. Aesthetic and minimalist design:

Dialogues should not contain information which is irrelevant or rarely needed. Every irrelevant unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. 

9. Help users recognize, diagnose, and recover from errors:

Error messages should be expressed in plain language (no codes), precisely indicating the problem, and constructively suggesting a solution.

10. Help and documentation:

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and should not be too large.

End Users

Whilst feedback from the development team and usability specialists can prove invaluable when building a software product, the ultimate test of usability comes from the end user. You could use colleagues from outside the development team as a substitute, but ideally you’d use real customers, real users, and get real feedback. Getting your product in front of the people who will be using it day to day and finding out if they can use it for its intended purpose, in a way that suits their needs, is ultimately the best way of getting this feedback.

There are different methods of usability testing with end users:

  • Moderated in person - Where the facilitator is located in the same place as the participant(s).

  • Moderated remotely - Where the facilitator uses software such as Webex, GoTo Meeting, or Skype, to monitor the user or users’ actions via screen sharing.

  • Unmoderated remotely - Where software is used to monitor user actions and feedback data such as screen recordings. There are various usability testing services which provide access to features such as:

    • Heatmaps - A visual representation of where users click on a page, and also where they look (eye tracking), hover, or scroll

    • Uploading a mockup of a design for the user to provide feedback

    • Exit surveys - a brief questionnaire to find out why the user is leaving the site

    • Audio and Video recordings

Setting The End User Testing Mission

Like other types of testing, it’s important to understand why you are performing usability testing, and what information you hope to gain from engaging with end users to get their feedback. One way to do this when setting up your usability test is by framing your goals in the form of questions you want to answer.

There are different ways of approaching this. You can set open-ended tasks, for example asking your user to spend a few minutes exploring the application. Alternatively, you can give them more focused tasks or ask more specific questions, such as, can you:

  • Find a specific item in the product catalogue?

  • Make a purchase using express delivery?

  • Create a purchase order and email it to a supplier?

  • Amend an existing order?

It’s important when using specific tasks and questions, to avoid leading the user too much and creating bias. A good example might be the contrast between a test charter and a test case. The charter provides focus but gives room for exploration, whereas the test case provides a set sequence of steps. Your end usability testers need guidance, rather than exact steps. You are testing to see if they can reach the expected goal, not if the functionality works.

Wording The Question Correctly

How do you know whether to use open-ended or specific questions and tasks? Specific questions are useful when you wish to concentrate testing on a certain feature within the product, or when your application is particularly complex. If you want to assess the usability of the general workflow, an open-ended approach may be more suitable.

Also, consider whether you are asking open or closed questions and make sure you are giving your users enough room to give important feedback. If you ask a question starting ‘Can you…?’, be prepared for the answer to be a very short ‘Yes’ or ‘No’!

Measuring Feedback

In order to use the results of a usability test to make decisions, we must be able to record and measure the results. Usability metrics can help with this.

Examples of metrics you might collect information on are:

  • Success rate (whether users can perform the task at all)

  • The time a task requires

  • The error rate

  • Users' subjective satisfaction
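Once sessions have been recorded, these metrics are straightforward to compute. The sketch below uses invented session data (completed?, seconds taken, errors made, satisfaction on a 1-5 scale) to show the arithmetic.

```python
from statistics import mean

# Hypothetical session records: (task completed?, seconds taken, errors, satisfaction 1-5)
sessions = [
    (True, 42.0, 0, 4),
    (True, 65.5, 2, 3),
    (False, 120.0, 5, 2),
    (True, 38.2, 1, 5),
]

success_rate = sum(done for done, *_ in sessions) / len(sessions)
avg_time = mean(t for _, t, _, _ in sessions)
error_rate = sum(e for _, _, e, _ in sessions) / len(sessions)
satisfaction = mean(s for *_, s in sessions)

print(f"Success rate: {success_rate:.0%}")      # 75%
print(f"Mean task time: {avg_time:.1f}s")
print(f"Errors per session: {error_rate:.1f}")
print(f"Mean satisfaction: {satisfaction:.1f}")
```

With only four sessions these numbers are anecdotes rather than statistics, which reinforces the point below: the figures alone won't tell you what decision to make.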

You could collect information for different types of users, for example, expert or beginner. This might be particularly important when testing something which is used frequently by its user base, as opposed to a website or application which may only be used occasionally.

As with all metrics related to testing, the numbers in isolation won’t necessarily tell you what decisions to make. The results will require further analysis and discussion to help you decide the next steps.


References And Further Reading

Nielsen Norman Group - Usability 101 - Introduction to Usability

Nielsen Norman Group - 10 Usability Heuristics

Nielsen Norman Group - Open Ended Questions

A/B Testing or Usability Testing - Which one is better?

U.S. Department of State - VPAT form

Web Content Accessibility Guidelines

Testing Tours

Personas (User Experience)

Heuristic Evaluation - Wikipedia

5 Types of Usability Tests

Open Ended vs Specific Tasks and Questions

Cognitive Walkthroughs - User Focus

Cognitive Walkthroughs - Usability First

ISO.org - Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts

Usability Geek - Accessibility Evaluation Tools

About Claire Reckless

Claire Reckless is a tester at Avecto, working on endpoint security software. She enjoys finding new ways of working and helping people learn how to become better testers. Her domain expertise also includes financial and ERP software. Claire lives in Manchester with her husband Rob, their cat, and their dog. She also enjoys running as time allows. You can find Claire on Twitter.
