Cultivate Your Credibility With Oracles And Heuristics
By Lee Hawkins
The conflict of sharing exploratory testing problems
As testers, we want to find and describe what we believe are important problems. These are problems that may threaten the value of the products we test. Stakeholders might be reluctant to hear about problems because surfacing them might put an existing project schedule under pressure and lead to difficult conversations for a project manager. Some developers might feel personally attacked, especially if we report problems in terms of which area of the code appears to be at fault rather than how these problems threaten value.
There’s a conflict between our desire to share useful information and the need to capture the attention of those we share it with. It’s much easier for stakeholders to discount problems if we rely solely on personal opinion as the basis for reporting them. This often leads to an anti-pattern in which testers no longer report important problems and instead report only those they know stakeholders are more receptive to.
How might you resolve such a conflict? Build credibility!
What is “credibility” and why does it matter?
“Credibility” is the quality that somebody or something has that makes people believe or trust them. “Credible” is an adjective that comes to us from the Latin credibilis, meaning “worthy to be believed.” Credibility matters because you have less influence when people are less likely to believe you.
You need credibility if you want people to take the information you provide seriously. Credibility helps establish your authority, especially if you back it up with solid evidence rather than just personal opinion. By doing so, you demonstrate why someone should trust and believe in you and your ability.
One way to enhance your credibility as a tester is to become skilled in applying oracles and heuristics.
Detach observation from opinion with oracles & heuristics
I’d been a tester for about eight years before I heard of an “oracle” – while attending the Rapid Software Testing course with Michael Bolton back in 2007 – and I’d honestly not really given much thought to how I found problems in the products I’d tested before then. When asked how I spotted potential problems, I’d often say things like “based on my experience of the product” or “by comparing the product against the specification”.
An oracle is a way to recognize what might be a problem
Whenever we spot something we believe to be amiss, we are – consciously or not – referencing some oracle to allow us to make that observation. As an example, we can detect a bug when the product behaves in a way that is inconsistent with its specification or user story – the oracle here is the specification or user story and the problem is that the state of the product is not consistent with that oracle. The problem might be caused by an out-of-date specification, a genuine bug in the product, or something else – or there might actually be no problem at all.
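To make this concrete, here is a minimal sketch of a specification oracle. Everything in it is invented for illustration – the spec value, the product behaviour, and the function names are not from any real product – but it shows the shape of the idea: the oracle flags an inconsistency as a *possible* problem, not a proven bug, because the spec itself may be the thing that’s wrong.

```python
# Illustrative sketch of a specification oracle (all names and values
# are hypothetical). The spec records the behaviour we expect; the
# oracle flags any inconsistency as a *possible* problem, not a proven
# bug -- the spec itself may be out of date.

SPEC_MAX_USERNAME_LENGTH = 20  # invented value, stands in for a real spec


def create_username(name: str) -> str:
    """Hypothetical product behaviour under test: silently truncates to 25."""
    return name[:25]


def check_against_spec_oracle(name: str) -> list[str]:
    """Return observations where the product is inconsistent with the spec."""
    observations = []
    result = create_username(name)
    if len(result) > SPEC_MAX_USERNAME_LENGTH:
        observations.append(
            f"username {result!r} has {len(result)} characters; the spec "
            f"allows at most {SPEC_MAX_USERNAME_LENGTH}"
        )
    return observations


for observation in check_against_spec_oracle("a" * 30):
    print(observation)
```

Note that the check reports an inconsistency with the oracle rather than declaring a defect – deciding whether the product or the spec should change remains a human judgement.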
I found myself deliberately using oracles soon after I encountered them, but my journey towards understanding heuristics – and helping others learn about them – was a longer one. I started off favouring the “heuristic is a rule of thumb” definition, but it didn’t lead me to genuine understanding or good application of heuristics during my testing. The formal definition commonly used by other testing practitioners seemed to be “a fallible method for solving a problem or making a decision”, and this was more helpful – especially in highlighting that heuristics aren’t guaranteed to work. But I’ve eventually arrived at my own definition of what a heuristic means in the context of software testing.
A heuristic is a way to help me come up with test ideas
When tasked with testing something new, I don’t necessarily know how to unearth interesting test ideas and so following rules probably doesn’t help me. Under such conditions of uncertainty (which are normal in software development), I look for methods or ways of coming up with test ideas that might work, while acknowledging that they might not – these are heuristics.
An oracle is a special kind of heuristic: one that helps you identify and describe a problem. Yet no oracle is definitive, and an oracle should only ever be thought of as providing you with a pointer to a possible problem.
Returning to the previous example of using a user story as an oracle, an inconsistency between the product and the user story might not actually be a problem in the product – the user story may have fallen out of date, or a deliberate pivot may have changed the preferred functionality of the product.
You’ll likely build up your own toolbox of heuristics to draw from as you become more familiar with them and realise their power. For example, I often use a consistency heuristic where I expect each element of a product to be consistent with comparable elements of the same product – one such approach from Michael Bolton’s FEW HICCUPPS heuristic.
Using Trello recently to manage a couple of side projects, I noticed an inconsistency in the way fields on cards can be edited. Clicking in the Description or Tasks fields puts the field straight into Edit mode, whereas editing a Comment requires an explicit click on the “Edit” action. This is a good example of the Trello product being inconsistent with itself in terms of editing fields on a card – and note that the oracle in this case is the product itself.
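As a small sketch of how this heuristic reasons (my own invention, not a real Trello test – the recorded behaviours simply restate the observation above), comparable elements are expected to behave the same way, and any divergence is flagged for a human to judge:

```python
# Sketch of a "product consistent with itself" check (hypothetical data).
# Each card field maps to how edit mode is entered; comparable elements
# are expected to behave the same way. A divergence is only a *pointer*
# to a possible problem -- the difference might be deliberate.
from collections import Counter

observed_edit_behaviour = {
    "Description": "click in field",
    "Tasks": "click in field",
    "Comment": "explicit Edit action",  # the inconsistency noticed in Trello
}


def consistency_observations(behaviour: dict[str, str]) -> list[str]:
    """Flag elements whose behaviour differs from the majority of their peers."""
    common, _ = Counter(behaviour.values()).most_common(1)[0]
    return [
        f"{field}: enters edit mode via {how!r}, inconsistent with "
        f"comparable fields ({common!r})"
        for field, how in behaviour.items()
        if how != common
    ]


for obs in consistency_observations(observed_edit_behaviour):
    print(obs)
```

The check deliberately stops at “inconsistent with comparable elements” – exactly the kind of oracle-referenced observation, rather than verdict, that the next section argues for.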
Why using oracles and heuristics helps build credibility
Imagine you were reporting the observation about Trello and why you believe it’s a problem. You could just say the following:
“Comments should enter edit mode when clicking in the field”
But this type of report sounds like a personal opinion to a stakeholder. Instead, take the conscious step of identifying which oracle you are using to spot a potential problem. In doing so, you move from personal opinion towards evidence. This is a key step to enhancing your credibility in the eyes of your stakeholders.
Your reports are more credible when stakeholders clearly understand why you are claiming your observation to be a potential problem. So, in this case, you could say something like:
“Entering edit mode for Comment fields is inconsistent with that for the Description and Tasks fields on a card.”
You should still feel free to express personal opinion too, but make it clear when you do so:
“I enjoy the convenience of entering edit mode by simply clicking in the Description and Tasks fields, and find it confusing that Comments work differently.”
We shouldn’t hold back from sharing our ideas and feedback, but it’s important to do so by detaching observation from opinion.
You’ve learned how using oracles and heuristics can increase your credibility. Here are some suggestions for putting these learnings into practice:
Try to use the Product consistency heuristic (per the example I gave in Trello) during your testing of a product to identify potential problems. Remember to report the problems you find with reference to the oracle you’ve used.
Try using another heuristic to help you come up with different test ideas - take a look at FEW HICCUPPS (from Michael Bolton) or the Test Heuristics Cheat Sheet (from Elisabeth Hendrickson, James Lyndsay, and Dale Emery).
See if you notice any difference in the way your stakeholders react to the problems you report when you leverage oracles and heuristics to separate observation from personal opinion.
James Bach and Michael Bolton have both written extensively on the topics of oracles & heuristics and I highly recommend their work. Here are a few places to start in exploring their work in these areas:
Expected Results blog post by Michael Bolton
About the Author
Lee Hawkins has been in the IT industry since 1996 in both development and testing roles. He has spent most of his career helping Quest Software teams across the world to improve the way they build, test and deliver software. Lee considers that his testing career really started in 2007, after attending Rapid Software Testing with Michael Bolton.
Lee was the co-founder of the TEAM meetup group in Melbourne and co-organized the Australian Testing Days 2016 conference. He was the Program Chair for the CASTx18 testing conference in Melbourne and also co-organized Testing in Context Conference Australia 2019. He is a co-founder of the EPIC TestAbility Academy, a software testing training programme for young adults on the autism spectrum.
He is a frequent speaker at international testing conferences and blogs on testing at Rockin' And Testing All Over The World. When not testing, Lee is an avid follower of the UK rock band, Status Quo; hence his Twitter handle @therockertester.
The Principal Test Architect for Quest Software, based in Melbourne, Australia, Lee is responsible for testing direction and strategy across their Information Management business.