The UI Test That Cried Wolf

23rd February 2021
Alyssa Burlton

Head of Engineering Effectiveness

Talk Description

Automated UI tests give you confidence that whole areas of your system are playing nicely together. But they also come at a price, and it's not just that they're slower than other types of tests. Worse than that, they can also be unreliable - I'm talking about "that test," which occasionally falls over when nothing is wrong.

This flakey behaviour is by no means unique to UI tests, but they are much more susceptible to it. By definition, they cover many areas of your system at once, meaning there's a bigger set of moving parts in which something can go wrong. Misfired requests, elements stealing focus at the wrong moment, variable loading times - it can be a minefield. And when a test does start acting up, diagnosing and fixing it can be even more awkward.

So what do we do? The first important step is to acknowledge that these flakes are inevitable, just as bugs in production are. By embracing failure and investing in observability, we can ensure they are as easy to diagnose and quick to fix as possible. In this talk, I will demonstrate some strategies for achieving this, as well as outline why it is crucial to do so for your team's productivity.

This Masterclass was kindly sponsored by TestRail. TestRail is a test case management platform that helps engineering teams plan, organize, execute, and track their testing more efficiently. More than 10,000 teams at organizations like NASA, Atlassian, Apple, Microsoft, and AutoDesk use TestRail to manage their testing and QA at scale. The platform is well-loved by testers and developers alike because it is fast, flexible, and easy to use. With TestRail, you can integrate with Jira (or 20+ other tools), track both manual and automated test results, and get real-time visibility into the progress of your testing. Try TestRail for free.

What you’ll learn

By the end of this masterclass, you'll be able to:

  • Avoid common anti-patterns that fail to fix flakey tests effectively.
  • Increase observability to identify the cause of flakey tests.
  • Recognise the benefits of having a process the whole team can use to diagnose and fix flakey tests.
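To make the contrast concrete: a classic anti-pattern is papering over variable loading times with a fixed sleep, which still flakes whenever the page is slower than the guessed delay. The sketch below (a hypothetical helper, not taken from the talk - real UI frameworks such as Selenium or Playwright ship their own explicit-wait mechanisms) shows the alternative: polling a condition until it becomes true or a timeout elapses.

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` elapses.

    Unlike a fixed time.sleep(), this adapts to variable loading
    times instead of guessing one "safe" delay for every run.
    Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate an element that only becomes visible after a variable delay.
ready_at = time.monotonic() + 0.3

def element_is_visible():
    return time.monotonic() >= ready_at

# Anti-pattern: time.sleep(0.2) followed by an assertion would flake here.
# Explicit wait: succeeds as soon as the element appears, fails loudly
# (with a clear timeout) when it genuinely never does.
print(wait_until(element_is_visible, timeout=2.0))  # prints True
```

The timeout also aids observability: a test that fails with "condition X not met within 2s" is far easier to diagnose than one that intermittently asserts against a half-loaded page.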

Alyssa Burlton

Head of Engineering Effectiveness

Alyssa is an Engineering Effectiveness Lead from Leeds in the UK, working at Glean. She has worked in development for 8 years, and over that time has focused on writing easily deployable, testable software. An infrastructure-as-code fan, she writes repeatable architecture scripts and loves teaching others. Outside of work, she's a Taskmaster fan, and has even made it into the official #HomeTasking compilations! She also enjoys sailing, trampolining and a good ale.


  • ui-automation