
Accessibility Testing Of A Government Website: Experience And Recommendations

Learn about Aaron Flynn's experience of carrying out accessibility testing for a government website on the Testing Planet


By Aaron Flynn

In this article I relate my experience working on a development team to create a government website. The site allows people to search for and view information. It requires users to use maps, fill out forms, and provide payment details. Users can choose the free results data or buy an “official search,” which is certified and provides indemnity protection.

I’ll share our accessibility testing approach, what we learned from an audit, experiences with users, and I’ll provide additional resources for learning. If you don’t know much about accessibility testing, or are looking for new ideas, this article is for you.

 

What Is Accessibility?

Accessibility is a broad area which aims to “make things available to as many people as possible.” I will focus on web accessibility in this article. But accessibility measures aren’t limited to the web or software. Buildings with hearing loops and access ramps are examples of accessibility features in the physical world.

The Web Content Accessibility Guidelines (WCAG) aim to make web content more accessible. The W3C has created standards for meeting this goal, known as WCAG 2.0 and WCAG 2.1. Work on WCAG 2.2 and 3.0 is underway, so it is worth being aware that these standards evolve over time.

The version 2 standards are based on four key principles.

  1. Perceivable – we know there’s something there

  2. Operable – we can interact with the thing

  3. Understandable – we can understand the information

  4. Robust – we can use the thing in different ways

 

Legal Requirements For Accessibility

Government departments have a responsibility to make their services available to as much of the public as possible. This is the moral responsibility of development teams as well. The people I’ve worked with across government are the most user-centred practitioners I’ve ever met. They advocate for accessibility because they believe it’s important.

We do have legislation to follow. Digital services must be “WCAG 2.1 AA” compliant to meet government accessibility requirements. These requirements are underpinned by The Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018 and the Equality Act 2010.
 

Our Accessibility Testing Approach

How do we make sure that our service meets or exceeds accessibility standards? Software checks can verify some things easily, while they fall short in other areas. For example, a program can easily check HTML for an H1 element. But it can’t tell if the content makes sense in the wider context.
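As an illustration, here is a minimal sketch of that kind of structural check, assuming Playwright and a hypothetical service URL:

    import { test, expect } from '@playwright/test';

    // A machine can count headings, but it can't judge whether the
    // heading text makes sense in the wider context. That needs a human.
    test('page has exactly one top-level heading', async ({ page }) => {
      await page.goto('https://service.example.gov.uk/search'); // hypothetical URL
      await expect(page.locator('h1')).toHaveCount(1);
    });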

Context and constraints are key. Having the right people testing things at the right time worked well for us. Our approach included three broad groups of people, each performing some level of testing against the service.

I use “the team” in the broadest sense: it should include everyone involved in developing the service. That means product owners, design, development, etc.  As testers we cannot “test accessibility into” our services. But we can work with people in other roles who put accessibility at the forefront of what we are building.

 

Who Should Test The Service?

  1. The team

Build something accessible, and test throughout.

  2. Users with accessibility needs

Real users who use our service and provide feedback.

  3. Independent accessibility team (Bonus)

A small independent group of people outside the development team to provide audits.

 

Automated Tools: A Word Of Caution

I’ve already mentioned that automated tools can’t test for accessibility on their own, so why should we use them? My honest answer is that they give quick feedback. Passing automated checks doesn’t mean a page is accessible. But if they fail, then we can be pretty confident the manual checks will fail too.

Tools like axe and WAVE integrate with browsers and continuous integration (CI) pipelines. While they won’t check everything, they will quickly check some things. Even the best automated tools only catch about 40 percent of accessibility issues. Each tool checks for different things, and does so in different ways. So you could see different errors depending on the tool used.
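For example, here is a minimal sketch of an axe check wired into a Playwright test, assuming the @axe-core/playwright package and a hypothetical service URL:

    import { test, expect } from '@playwright/test';
    import AxeBuilder from '@axe-core/playwright';

    test('search page has no violations detectable by axe', async ({ page }) => {
      await page.goto('https://service.example.gov.uk/search'); // hypothetical URL
      const results = await new AxeBuilder({ page }).analyze();
      // A failure here is a strong signal; a pass is not proof of accessibility.
      expect(results.violations).toEqual([]);
    });

Run in a CI pipeline, a check like this gives the quick feedback described above on every change.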

Despite these limitations, automated tools are better than nothing. Use them to help reduce your accessibility risks. They are a valuable part of a reliable accessibility approach.

 

Manual Checks: The Good And The Mediocre

I’ve used manual scripted tests and heuristics when testing for accessibility.

The main constraints I have seen with manual testing are:

  • Gathering information

  • Time spent

  • Evidence for compliance

Understanding the accessibility of a changing service can be difficult: a rapidly changing service is a struggle to audit, while a more stable one is not.

Scripted tests generally offer good testing consistency. This could be ideal when you need a lot of repeatable information, such as for an audit. But this thoroughness comes at the expense of time spent. A robust suite of tests will provide a lot of information but could take a long time to gather.

Using heuristics can quickly provide us with information. For example, checking that instructions are clear, that error messages help a user correct a mistake, and that the site can be used with only a keyboard. The downside is that we generally don’t have a detailed record of testing for compliance purposes.
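Most keyboard checks are best done by hand, but the crudest part can be sketched in code. A minimal example, again assuming Playwright and a hypothetical service URL:

    import { test, expect } from '@playwright/test';

    test('first Tab press moves focus off the page body', async ({ page }) => {
      await page.goto('https://service.example.gov.uk/search'); // hypothetical URL
      await page.keyboard.press('Tab');
      // If focus never leaves <body>, keyboard users can't get anywhere.
      const tag = await page.evaluate(() => document.activeElement?.tagName ?? 'BODY');
      expect(tag).not.toBe('BODY');
    });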

I’ve used heuristics as part of “accessibility exploratory sessions,” in which we used session-based exploratory testing (SBET) to try to find a sweet spot between the three constraints already mentioned. Timeboxing the session helps us manage the time spent testing. Heuristics help us understand the accessibility of the service. Using charters, we can record our findings to share with the team. For example, a charter might read: “Explore the search results using only a keyboard, to discover focus and navigation issues.”

You should try things out and find what works best for you. I’ve used both approaches depending on what I needed to achieve at that time.

 

Testing By Real-World Users

Until we get feedback from our users, we don’t know if we’ve actually built something they can use. If the team have all done their best to create a WCAG compliant service, then it should work for all our users. But our users are the best judges of the quality of our service.

Our users can provide feedback on how well we’ve met their needs. For example, we could have built a WCAG compliant service that our users struggle to use. We should find that out as soon as possible in the development cycle.

How can we get our product tested by people with accessibility needs?

External Accessibility Audits

I highly recommend you get an external accessibility audit on your service. This allows expert users to use your service, find issues and suggest improvements. You should do everything you can to make your service accessible before an audit to get the best results.

I experienced an accessibility audit run by the Digital Accessibility Centre (DAC) in Wales. I worked with their team as they tested key flows of the service. The team gave feedback on challenges, and suggestions on how we could improve.

In general, we did a good job, which validated some of our testing approaches. Observing the sessions and speaking with the team gave us more ideas we could add to our testing. Working with them was enlightening.


Testing By Users Within Your Organisation

You may already have internal users who are willing to help test your services. Many organisations have employee networks of people with different needs.

In my organisation, there is a disabled employees network (DEN). The people in the network have been really open to working with us. They have tested many services and provided invaluable input.

This doesn’t mean they do all our accessibility testing. The team should already be confident they’ve done everything they can to make the service accessible.

A checklist or process outline can be put in place to engage with the group, such as the sketch below. Respect their time and contributions.
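As an illustration only, such an outline might cover:

  • What the team has already tested, and with which tools

  • Which user journeys need feedback, and by when

  • How much of a participant’s time is being asked for

  • How findings will be recorded and fed back to the team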

 

Put People Over Technology

Accessibility is about people and not technology. That sounds obvious when written down.

After the audit, I reflected on our approach. The WCAG standard is a great resource and I agree with its mission, but I concluded that the standard, or our understanding of it, was too technology focussed. We were so focussed on “being accessible” that we forgot what the point was. This constrained our thinking. However, this changed after we worked with real users.

Recently I saw a comment on social media that linked to the WCAG 3.0 draft standard and mentioned that “the 2.0 and 2.1 standards were too technology focussed.” This reinforces the conclusion I had reached years before.

 

Increased Accessibility Focus In My Organisation

In the years since the accessibility audit, I’ve seen or been involved in a number of developments:

  • Our accessibility community of interest has grown in numbers and types of roles.

  • There is a well established process to work with DEN users.

  • We’ve shared knowledge and testing artefacts with other government departments. 

  • Our testing community has shared training with the wider organisation.

 

Recommendations For Your Accessibility Testing Practice

Start small, and build an approach that works for you.

Come up with something that covers a few areas and build on it.

Get your service to real users when the team has done their part. Do it often. Their feedback is crucial.

Use a mix of automated tools, manual testing with heuristics, and audits. Create an approach to accessibility that works for your context.

 

For Further Reading

Many of the links I’ve included reference government work in the United Kingdom (UK hereafter). They will still be relevant to people outside of government.

A fundamental principle of the GDS Service Standard is to “Make new source code open.” If you find something you like, reach out to the creators, star and fork the repos, and raise pull requests. Everything we build is there for people to use and extend.

 

Author Bio

Aaron Flynn works with delivery teams at HMLR in Plymouth, UK, where he’s lived for the last five years. Originally from Dublin, he’s worked across the UK and Ireland. He’s passionate about communities, accessibility, and technical testing, to name a few things.

He’s a community lead for the HMLR test community, works with other communities at HMLR, has spoken at cross-government meetups, and (of course) is active on MoT. He loves collaborating with people to share ideas and experiences.

You can find him on the Ministry of Testing club and Twitter. He periodically writes on his blog, thanks to the MoT Bloggers club.

 

Disclaimer

The views and opinions expressed in this article are those of the author. They do not necessarily reflect the official policy or position of HM Land Registry or that of the wider UK civil service.
