The UI Test That Cried Wolf

23rd February 2021
Alyssa Burlton

Head of Engineering Effectiveness

Talk Description

Automated UI tests give you confidence that whole areas of your system are playing nicely together. But they also come at a price, and it's not just that they're slower than other types of tests. Worse than that, they can also be unreliable - I'm talking about "that test," which occasionally falls over when nothing is wrong.

This flakey behaviour is by no means unique to UI tests, but they are much more susceptible to it. By definition, they are covering many areas of your system at once, meaning there's a bigger set of moving parts in which something can go wrong. Misfired requests, elements stealing focus at the wrong moment, variable loading times - it can be a minefield. And when a test does start acting up, diagnosing and fixing it can be even more awkward.
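To make one of those races concrete, here is a minimal sketch (assuming Playwright and TypeScript, with a hypothetical /api/search endpoint - the talk itself doesn't prescribe a tool) of a test that only passes when the data happens to arrive in time, next to a sturdier version that waits for the response the UI actually depends on:

import { test, expect } from '@playwright/test';

test('search shows results', async ({ page }) => {
  await page.goto('https://example.com/search');

  // Register the wait before clicking, so a fast response can't slip past.
  const resultsLoaded = page.waitForResponse('**/api/search*');
  await page.getByRole('button', { name: 'Search' }).click();

  // Flakey version: a one-shot read of the DOM races the in-flight request,
  // passing or failing depending on how quickly /api/search responds.
  // expect(await page.locator('table.results tr').count()).toBeGreaterThan(0);

  // Sturdier version: wait for the response the UI depends on, then assert
  // against the rendered rows.
  await resultsLoaded;
  await expect(page.locator('table.results tr')).not.toHaveCount(0);
});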

So what do we do? The first important step is to acknowledge that these flakes are inevitable, just as bugs in production are. By embracing failure and investing in observability, we can ensure they are as easy to diagnose and quick to fix as possible. In this talk, I will demonstrate some strategies for achieving this, as well as outline why it is crucial to do so for your team's productivity.
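One concrete way to invest in that observability (a sketch only, assuming Playwright as the runner; the same idea applies in other frameworks) is to have the test runner capture diagnostic artefacts automatically whenever a test fails, so a one-off flake can be examined after the fact rather than having to be reproduced locally:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Re-run a failing test once, giving a pass and a fail to compare.
  retries: 1,
  use: {
    trace: 'on-first-retry',       // DOM snapshots + network log of the retried run
    screenshot: 'only-on-failure', // the page's visual state at the moment of failure
    video: 'retain-on-failure',    // a recording of the failing run
  },
});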

This Masterclass was kindly sponsored by TestRail. TestRail is a test case management platform that helps engineering teams plan, organize, execute, and track their testing more efficiently. More than 10,000 teams at organizations like NASA, Atlassian, Apple, Microsoft, and AutoDesk use TestRail to manage their testing and QA at scale. Our platform is well-loved by testers and developers alike because it is fast, flexible, and easy to use. With TestRail, you can integrate with Jira (or 20+ other tools), track both manual and automated test results, and get real-time visibility into the progress of your testing. Try TestRail for free.

What you’ll learn

By the end of this masterclass, you'll be able to:

  • Avoid the common anti-patterns that mask flakey tests rather than fix them (one such anti-pattern is sketched after this list).
  • Increase observability to identify the cause of flakey tests.
  • Recognise the benefits of having a process the whole team can use to diagnose and fix flakey tests.
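As a taste of that first point, one widespread anti-pattern is "fixing" a flake with a hard-coded sleep. A minimal sketch (again assuming Playwright; the page and selectors are hypothetical):

import { test, expect } from '@playwright/test';

test('saving the profile shows confirmation', async ({ page }) => {
  await page.goto('https://example.com/profile');
  await page.getByRole('button', { name: 'Save' }).click();

  // Anti-pattern: a fixed sleep hides the flake until the environment slows
  // down, then it returns - and every run now pays the full five seconds.
  // await page.waitForTimeout(5000);

  // Better: wait on the condition the test actually cares about. This waits
  // only as long as needed and fails with a clear timeout if it never holds.
  await expect(page.getByText('Profile saved')).toBeVisible();
});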

Alyssa Burlton

Head of Engineering Effectiveness

Alyssa is an Engineering Effectiveness Lead from Leeds in the UK, working at Glean. She has worked in development for 8 years, and over that time has focused on writing easily deployable, testable software. An infrastructure-as-code fan, she writes repeatable architecture scripts and loves teaching others. Outside of work, she's a Taskmaster fan and has even made it into the official #HomeTasking compilations! She also enjoys sailing, trampolining, and a good ale.

Tags

  • ui-automation