
A Balancing Act: Finding A Place For Exploratory Testing

Learn how Elizabeth Zagroba balanced exploratory testing with other testing activities

We had automated “all” our tests

For the first year building our product, we went from zero customers to a handful of beta customers. The developers and testers on the teams built up a suite of hundreds of automated tests by writing new tests for each story. Testing stories involved limited exploration, because the roadmap for the product wasn’t clear. Any bugs or edge cases we found by exploring would just be tossed in the backlog to languish as we practiced demo-driven-development. Our release testing strategy was: run all the automated tests, work through the failures until they all turned green. We believed this proved we hadn’t broken anything. 

We got new & more stakeholders

People around the company started to have a stake in what we built. We worked on a search feature that integrated with the desktop team and the web team. We spent a few weeks hashing out the API specifications with the desktop team, then didn’t show our work for months while we built and tested. 

We released the search feature. That’s when sh*t hit the fan

It turned out the web team had different requirements from the desktop team. They expected an empty search to return the most popular items instead of an empty list. They expected results if they searched in the middle of words, while we were only indexing results from the beginning of words. The desktop team had a filter that defaulted to not showing any search results. Automation could have caught this behavior if a human had told it to expect something different.
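Once those expectations were on the table, a couple of small checks could have pinned them down. Here's a minimal, self-contained sketch of the two behaviours the web team expected; the catalogue, the search function, and the test names are hypothetical, not our real code:

```python
# Hedged sketch, not the product's real search: a tiny in-memory catalogue
# plus checks for the two expectations the web team had.

CATALOGUE = ["keyboard", "key ring", "mouse", "monitor"]
MOST_POPULAR = ["mouse", "keyboard"]

def search(term):
    """Hypothetical search: substring match, popular items for an empty query."""
    if not term:
        return MOST_POPULAR  # web team: empty search -> most popular items
    return [item for item in CATALOGUE if term in item]  # match mid-word too

def test_empty_search_returns_most_popular_items():
    assert search("") == MOST_POPULAR

def test_search_matches_middle_of_words():
    assert "keyboard" in search("board")  # "board" appears mid-word
```

Writing the checks was never the hard part; knowing to expect popular items instead of an empty list was.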

Having a strong focus on automated tests didn’t work for this point in our product lifecycle, when more teams were integrating with us and we were headed out of beta and towards general availability. Building and debugging automated tests took time away from thinking about how our products worked together.

We had a reckoning

As Winston Churchill said, “Never let a good crisis go to waste.” The web team realized they needed to state their expectations, and started doing so. The desktop team asked for more updates while we were building.

But what should the developers and testers on our team do differently? The developers were afraid to stop writing tests and start thinking like testers, because they knew they weren’t good at that. The developers felt more effective when they wrote code, but they struggled to decide what to automate and to debug the existing tests. Another tester on the team worried about what their daily work might look like if we “forced” developers to do exploratory testing.

We removed some automated testing, and we’re better for it

We’re posting a high-level view of our testing in a shared Slack channel as we work on stories that affect the other teams. We were expecting additions or suggestions; instead, the response has been an increase in trust and excitement about our team’s work.

We’re releasing more frequently in smaller chunks. We’re addressing the bugs (features are in the eye of the beholder) that the other teams found in our first big release. But we’re not fixing all of them. We’re setting expectations around what is possible for our team to accomplish. (Did you know there’s still a global pandemic?)

We revisited the test strategy for our team and consciously decided to automate less. The time saved in writing and maintaining the tests is used to keep finding answers to the question: what’s the risk here?

Many of our stories still have some automated component. In feature testing, we look deeply into a feature and ask the tricky questions. In release testing, we only want to make sure that the branch we already tested got merged (a sketch of that check follows the charters below). We started making a checklist of stories that didn’t have an automated component, and wrote charters that described where we should start investigating:

See what happens if the description on the detail page is empty.

See if a regular user without admin rights can search for {publicly available data}. 

See if you get a pop-up error message when you search for weird characters.
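That release-testing check can stay lightweight. A minimal sketch, assuming git is available and you know the commit you tested and the commit you’re about to release (the SHAs below are placeholders, and this isn’t our actual release script), is to ask git whether the tested commit is an ancestor of the release commit:

```python
# Hedged sketch: ask git whether the commit we tested is contained in the
# history of the commit we're about to release.
import subprocess

def tested_commit_is_merged(tested_sha: str, release_sha: str) -> bool:
    """Return True if tested_sha is an ancestor of release_sha."""
    result = subprocess.run(
        ["git", "merge-base", "--is-ancestor", tested_sha, release_sha],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Placeholder refs: use the branch tip you tested and the release candidate.
    print(tested_commit_is_merged("abc1234", "def5678"))
```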

Where you can try it

Find a test that always passes. Ask if it would tell you important information if it failed, or if you’d work to fix it. Watch João’s talk about deleting tests.

Consider your next user story. There are things you could automate, but do they need to do the setup part, the execution part, and the assertion part? Could you write something to set yourself up for more thorough exploratory testing? Watch Richard’s talk about redefining test automation.
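One way to read that question: let the script do the boring setup and then stop, leaving you in a state that’s ready to explore. A minimal sketch, assuming a hypothetical REST API with /users and /items endpoints (the URL and fields are placeholders, not our real product):

```python
# Hedged sketch: automate only the setup, then hand over to a human.
# BASE_URL and the /users and /items endpoints are hypothetical placeholders.
import requests

BASE_URL = "https://test-env.example.com/api"

def set_up_for_exploration():
    # Create a throwaway user to explore with.
    user = requests.post(f"{BASE_URL}/users", json={"name": "explorer"}).json()

    # Seed a few items worth poking at by hand, including the awkward ones
    # from our charters: an empty description and some "weird" characters.
    for description in ["", "plain description", "emoji 🎉 and ümläuts"]:
        requests.post(
            f"{BASE_URL}/items",
            json={"owner": user["id"], "description": description},
        )

    # No asserts here on purpose: print where to pick up the exploration.
    print(f"Setup done. Explore as user {user['id']} against {BASE_URL}")

if __name__ == "__main__":
    set_up_for_exploration()
```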

Don’t force your developers to do exploratory testing. Start with the feedback you’re already providing on each story. Get on a call to see which items matter and which don’t. Get on a call when you have a quick question for them too. Get on a call to make sure you understood one of the requirements correctly. Get on a call so they can help you set up your configuration, figure out what that log message means, or help you debug. When they see enough examples of how you think and make decisions, it will start to rub off on them.

Elizabeth Zagroba

Quality Lead

Elizabeth is Quality Lead at Mendix in Rotterdam. She reviews and contributes code to a Python test automation repository for 15+ teams building Mendix apps across three units. She builds exploratory testing skills by asking pointed questions throughout the unit, facilitating workshops, and coordinating an ensemble (mob) testing practice. She injects what she learns from conferences, books, and meetups into her daily work, and spreads her knowledge through the company-wide Agile guild she facilitates. She's presented at conferences throughout North America and Europe, and co-organizes the Friends of Good Software conference (FroGS conf http://frogsconf.nl/). She coaches people to success when possible, but isn't afraid to direct when necessary. She's the go-to person for things like supporting new presenters, reviewing documentation, navigating tricky organizational questions, and thinking critically about what we're building. Her goal is to guide enough testers, leaders, etc. to make herself redundant so she can take on new and bigger challenges. You can find Elizabeth's big thoughts on her blog (https://elizabethzagroba.com/) and little thoughts on Twitter @ezagroba.


