The Value Of Pessimism In Software Testing

Insights From "Lessons Learned in Software Testing" 

by Barry Ehigiator

Pessimism is the tendency to emphasise the bad side of things rather than the good, or to believe that bad things are more likely to happen than good ones. In a world where many psychological and self-improvement advocates constantly stress the value of positive thinking (for good reasons), a pessimistic attitude is often considered a negative trait. This should not be the case in software testing, where research has shown that a particular type of pessimism, known as defensive pessimism, can actually be helpful. 

 

Defensive Pessimism Defined

Defensive pessimism is a strategy for harnessing doubts and anxieties as motivation for better performance. A crucial aspect of defensive pessimism, as Norem and Cantor (1986) noted in their pioneering study, involves setting low expectations for the outcome of a particular situation. This can help individuals avoid risky situations, and improve their performance when such situations are encountered. For example, you might expect that your favourite device will not work under certain conditions, and then imagine the factors that would bring those conditions about. Thinking through those factors helps you guard against the expected mishap. Software testing is indeed one of the areas where a defensive pessimist can be of value. 

 

The Defensive Pessimist Mindset In Software Testing

A mindset is the internal lens through which we see and navigate life (Reed & Stoltz 2011). It influences our attitude: the way we understand what is going on, and how we respond to it. Mindset matters because it helps us spot opportunities, but it can also entangle us in self-defeating cycles. 

In software testing, practicing defensive pessimism helps you gain vital knowledge about your product, contribute to quality improvement during development, and ultimately provide value to your customers. A pessimistic attitude towards the quality of your product pushes you to question every aspect of its development (from concept inception to delivery), the released product, and the way customers use it. It also pushes you to search diligently for underlying issues in the application under test (AUT), so that you can advocate for getting those issues fixed early in the development process. Furthermore, a pessimistic attitude towards your product's quality, as Kaner et al. (2002) extensively detailed in "Lessons Learned in Software Testing", can help ensure that, in your role as a software tester, you put the lessons that follow into practice. 

 

Question Everything, But Not Necessarily Out Loud (Lesson 7)

To carry out effective and worthwhile testing, you should become skilled at questioning assumptions. You can, of course, test without questioning assumptions: when executing pre-documented test cases, running automated regression tests, testing specified happy paths, and so forth. In those situations, however, you test more or less without the cognitive, interactive, and intellectual aspects of the test process. Without proper cognitive engagement with the product, it is difficult to fully understand it, the logic behind its functionalities, its integration with other systems, how your clients will use it, or its edge cases. Thus, without questioning, it is not possible to test well. 

Conversely, when you entertain doubts about the quality of your product, you ask questions: of yourself, of the people building the product, and of those interested in its quality and performance. Asking questions in this manner stimulates your thinking in directions that yield more information about the AUT, and you can then harness that knowledge to focus your testing effort. It is well documented that efficient testing processes contribute tremendously to the quality of released products. Thus, when you approach your product with a pessimistic mindset, you position yourself well to contribute to your product's quality improvement and your company's success. 

 

Focus On Failure So Your Clients Can Focus On Success (Lesson 8)

In any industry, launching a new product is challenging, because many things can go wrong. Among the most common are software failures: a product fails to perform its intended functions within specified requirements, or it exposes users to security vulnerabilities. 

In recent history, the list is endless. Software failures have wreaked havoc at banks, airlines, and many big tech companies. These failures usually occur when a software defect that should have been caught during development reaches the end user. As a tester, developing a mindset that focuses on failure during product development typically improves your chances of finding an expected or related failure in your product. 

Since defects tend to cluster, it is a good idea to look twice at features related to those in which you have already found defects. Insights from this kind of scrutiny can help focus your test effort, improve your testing, and enhance your product's risk coverage during development. Finding key problems and product risks in the first place will, without a doubt, depend on your creativity, skill, and experience. However, understanding failures and focusing on them early is crucial, since your customers' reviews of your product will reflect the number of failures they encounter. Finding defects, and the failures they could cause, during development helps ensure they get fixed, and fixing them ensures your users do not have to find them for you.
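As a minimal sketch of this clustering heuristic (the feature areas and defect counts below are hypothetical, invented for illustration, not taken from the book), you could rank the candidates for your next test pass by how many defects each area has already produced:

```python
# Hypothetical defect log: feature area -> defects found there so far.
defect_counts = {"checkout": 7, "search": 1, "profile": 0, "payments": 4}

def next_test_pass(counts):
    """Since defects cluster, revisit the buggiest areas first."""
    return sorted(counts, key=counts.get, reverse=True)

order = next_test_pass(defect_counts)
# "checkout" and "payments" lead the list; defect-free "profile" comes last.
```

In practice such a ranking is only one input to risk analysis, but it captures the lesson: past defects are evidence about where future ones are likely to hide.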

 

Acknowledge You Will Not Find All The Bugs (Lessons 9 and 10)

Many software systems today resemble living systems. Once developed, they become self-organizing, with their own internal logic (you could say their own cell structure). Any such system carries inherent vulnerabilities, both implicit and explicit. Some might have been introduced during development; others surface only under specific scenarios, such as particular inputs or data exchanges with other systems. Thus, no matter how hard you try, you will never find everything that could go wrong with a software product. 

As someone responsible for testing in your organization, you should be aware that you will not find all the bugs in your product. Similarly, running your test execution scripts to completion is not the same as testing your product completely. You are better off acknowledging these truths, and you should note worthwhile scenarios that, for one reason or another, you could not cover in your testing. When you convince yourself that there may be undiscovered defects in your application, more often than not you will find them. From a psychological standpoint, our minds are more drawn to the things we do not know than to the things we already know. If you regard defects as unknowns, acknowledging their existence can drive your curiosity in ways that lead you to find them. 

Harnessing the defensive pessimist approach early in your development lifecycle encourages you to use risk analysis, prioritization, and well-chosen test design techniques to focus your test effort.

 

Understand That "It Works" Really Means It Appears To Meet Some Requirement To Some Degree (Lesson 34)

In reporting the "quality status" of an AUT, it is common for a tester to say, "I have tested it, and it works." This can be a misleading report to your team or stakeholders in a variety of ways, because "it works" on its own is a vague statement. You may be implying that the AUT works as designed, but that tells your audience nothing about the conditions under which it works, nor does it mean it will perform well in the hands of your users. 

A pessimistic mindset, by contrast, drives you to question what it truly means to say "it works". It pushes you to ask, and attempt to answer, questions such as: What is it that works? What part of the product was observed? What functionality or test type was exercised? How well, and to what degree, did the test pass? Under what conditions or circumstances does the AUT work? Your answers to such questions can help you translate the statement "it works" into "it appears to meet some requirement to some degree". 

Highlighting the requirements the AUT fulfils, and perhaps detailing the scenarios you observed, provides a richer assessment of your product's quality when you communicate your test report. This matters because, on the one hand, your team and management rely on accurate status information to make development and business decisions that ensure your organization ships quality products. On the other hand, it helps management understand the purpose and importance of QA and testing within the business.

 

Remember You Are Harder To Fool If You Know You Are A Fool (Lesson 40)

In the words of renowned psychologist Dr. J.B. Peterson, "the willingness to be a fool in the land of the stranger is an act of courage and the precursor to transformation." Every learning process is propelled by courage. You do not develop unless you start something new, and you start nothing new unless you are courageous and accept the reality of that which you do not know. 

Broadly speaking, no one likes to be fooled, nor to think of themselves as a fool. However, as a tester, when your senses are awakened to the things you do not know about your product, or to the idea that a product or a buggy feature can easily fool you, you become a little more alert. You grow more attentive and put your mind to work harder on the details of your test strategy. For example, you might test a feature that randomly passes or fails depending on the input data used. You immediately start paying attention to the data, and you spend a little more time on that feature each time you test it, because you want to discover the root cause of any defect you find. Such attention to a feature will often lead you to cover test scenarios you could otherwise have missed through complacency. 
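As a minimal sketch of such a data-dependent failure (the `apply_discount` feature below is hypothetical, invented for illustration), note how a single happy-path check would report "it works", while probing varied inputs exposes the buggy case:

```python
def apply_discount(price: float, percent: int) -> float:
    """Hypothetical buggy feature: the bug only surfaces when percent == 100."""
    if percent == 100:
        return price  # bug: a 100% discount should return 0.0
    return price * (1 - percent / 100)

def probe(inputs):
    """Run the same check across varied input data; return the inputs that fail."""
    failures = []
    for price, percent in inputs:
        expected = price * (1 - percent / 100)
        if apply_discount(price, percent) != expected:
            failures.append((price, percent))
    return failures

print(probe([(100.0, 10)]))                # the lone happy-path check finds nothing
print(probe([(100.0, 10), (100.0, 100)]))  # varying the data reveals the defect
```

A tester who knows they can be fooled by one passing input is the one who thinks to vary the data in the first place.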

By approaching your product in the manner described above, you increase the possibility of finding existing defects and ensuring they get fixed early in the development process.

 

The Software Tester As Defensive Pessimist

When it comes to software testing, as Kaner et al (2002) noted, "the way you think, make test design choices, your attentiveness to details, ability to interpret what you observe and constructively communicate your observation to your team and stakeholders makes the difference between excellent testing and mediocre testing." This is not trivial, as it is this mindset and cognitive aspect of testing that distinguishes you, a sapient living being, from any machine or automated testing tool.

In my experience as a software test professional, one of the ways to earn respect and demonstrate value to your team and management is to keep yourself "in form" and informed about the products you work with. You cannot achieve this without adequate knowledge of those products, which is why "information gathering" is key to effective testing. You gather information through an interactive, cognitive, and intellectual engagement with the products you test, with respect to their fulfilment of specified business and customer requirements. It is hard to achieve this without constant questioning and critical assessment of the product: the specifications, designs, implementation choices, development processes, and assumptions made about it throughout the development lifecycle.

In a nutshell, when you are pessimistic and expect that something is likely to go wrong with your product, you are inclined to focus your test effort on the areas most vulnerable to failure. Through this critical evaluation, you learn more about your product and become better equipped to provide meaningful, substantial information about it. This contributes tremendously to efforts at improving your product's quality and your team's deliveries in general. None of this is to say you should never have faith in your product; rather, it emphasises the need to be critical about its quality. After all, what you and your team truly want is to ship products that bring optimal value to your customers. 

In the end, being a pessimist, or having a pessimistic attitude to quality, is not necessarily a bad thing, least of all in software testing. Admittedly, you may irritate developers and stakeholders alike if you are always the bearer of bad news about the perceived quality of your products (the "pessimist", so to say). Nevertheless, what matters in the final analysis is what you do with your pessimism and how you communicate its outcome to your team, because the value of knowing a product's shortcomings, and getting them addressed before release to your customers, cannot be overemphasised. 

Disclaimer: This article is not an attempt at a comprehensive analysis of the topic or the concepts used; it simply highlights some of the ways in which defensive pessimism on the part of a tester can improve their testing. Good software testing is a challenging intellectual process, complex and multifaceted across industries. It is therefore imperative to note that the value of any test practice or concept, such as those mentioned in this article, depends on its context. Context matters: the value you get from your pessimism will be determined by yours. 

 


Author Bio

Barry Ehigiator is a QA Engineer and Test Lead at TrackMan, working on products designed for golf data tracking and visualization. He is passionate about helping people develop a tester mindset and become better testers, and he helps teams build a quality culture based on team responsibility, collaboration, and well-managed, context-driven testing.

Barry is an avid learner, driven by engaging conversations and learning opportunities. When he is not advancing quality testing and systems, he feeds his curiosity about international affairs, society, people, and culture. He also enjoys soccer, video games, and exercising. 

Linkedin | Barry Ehigiator
Twitter | @lord_beh