Ten Reasons The Whole Team Owns Defects

Read "Ten Reasons The Whole Team Owns Defects" by Kate Paulk from Ministry of Testing's Testing Planet

Content in review

We're All Responsible For The Product

By Kate Paulk

Nobody Can Stop All The Bugs

In a perfect world, everyone on the team, regardless of role, works together to produce the best possible product they can build and test in the time available. This is the ideal we all strive for.

Unfortunately, the world isn't like that. People make mistakes, or have bad days when they struggle just to get through. Or maybe there was a misunderstanding somewhere along the way. Whatever the reason, defects slip through to customers.

This doesn't mean that anyone should be blaming anyone else. Almost every developer, tester, and project manager (or whatever other role you call it) I've met has been a mature, responsible person who cares about the product being built. It's well known that blame leads to poorer outcomes, because the focus shifts to whose fault a problem is rather than to dealing with it. Things can go wrong even in the best possible situation.

The simple fact is that any sufficiently complex software will have technical problems. Similarly, the users of any sufficiently complex software will find things they don't like - which may or may not be technical problems.

1: Design Is Often A Best Estimate

The average tester or developer, or even the superlative one, may not be involved in design discussions for new features. I've been in this situation many times, and the only consistent solution I've found is to ask a lot of questions when I start reviewing a new feature. Sooner or later the people making the design decisions conclude that it costs more in time and effort when they don't include the people who are most familiar with the existing software early on.

Personal Experience: 

A new application portal my employer was developing had been in prototype, design, and development for some months when I was assigned to test the new portal. I uncovered serious security flaws due to the team not being aware of the different classes of users that needed to be supported by the portal and application suite, and the differing security access rules relating to each class. The release of the new portal was delayed by 12 months due to the extensive re-engineering needed to ensure that there was no way a user could accidentally access another person's data. 

Since that incident, I have been asked to contribute to early design discussions more often, leading to fewer instances of design oversights causing major rework.

If the team isn't aware of what is being designed until tasks land in their "do it now" queue, they have no way to ask about potentially problematic interactions between different modules before the software is developed. That means there's a greater chance of design-related defects slipping through. As I said above, the only real solution I've found is to ask questions that prove you, as a subject-matter expert, are a valuable resource at design time.

2: Coding Is A Wicked Problem

While many testers are extremely good at finding problems with how software is coded, that doesn't mean a given tester will find all the problems, or even that they can find all the problems. 

Similarly, the complexities of typical modern software mean it can be next to impossible for developers to avoid creating defects. It might be that a new feature isn't compatible with an existing feature, that everyone missed something in design, or that what the customer expected doesn't match what they actually received, despite collaboration.

If that isn't challenging enough, many testers don't have the opportunity to work closely with the developers. It's not rare for the test team to be an outsourced group that doesn't see the software until the developers have finished with it, and even an in-house test team may have no chance to see the software until then.

Even with agile practices, defects can't be avoided entirely. Problems can, however, be reduced by using one or more of a number of well-known quality-boosting techniques. Some examples:

  • Pairing, which helps to prevent problems before the code is integrated,
  • Code reviews to help catch problems before they reach a manual tester, 
  • Writing unit tests to cover the business logic and help protect against regression issues, 
  • Using stubs and mocks to isolate targeted functionality by replacing dependencies (see the sketch after this list),
  • Partial feature releases, whether in beta form or not.
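
To make the unit test and stub/mock bullets above more concrete, here is a minimal sketch in Python. The function and class names are hypothetical, not taken from any particular project: a small piece of business logic is tested in isolation by replacing its external dependency with a stub.

```python
# Illustrative sketch only: a unit test that replaces an external dependency
# with a stub so that only the business logic is exercised.
# The names (calculate_total, tax_client) are hypothetical.
import unittest
from unittest.mock import Mock


def calculate_total(net_amount, tax_client):
    """Business logic under test: add tax using a rate fetched from a dependency."""
    rate = tax_client.get_rate()  # external call we want to isolate
    return round(net_amount * (1 + rate), 2)


class CalculateTotalTest(unittest.TestCase):
    def test_adds_tax_from_rate_service(self):
        stub_client = Mock()
        stub_client.get_rate.return_value = 0.10  # stubbed dependency
        self.assertEqual(calculate_total(100.00, stub_client), 110.00)


if __name__ == "__main__":
    unittest.main()
```

Because the dependency is replaced, the test exercises only the business logic and stays fast and deterministic - exactly the kind of safety net that catches regressions before a manual tester ever sees the build.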

Choosing a mix of strategies and techniques to improve the code quality across the whole team helps to reduce potential defects. It can minimize problems caused by communication issues, even in split teams.

3: Fixing Defects Is Harder Than It Looks

Testers rarely, if ever, have responsibility for fixing software, or even for ensuring that software gets fixed. Despite this, there is a tendency for people to blame testers for buggy software. I'm not sure why it happens, but I suspect it has something to do with the impression many people have that writing software is like a manufacturing process. This analogy puts testers in the quality control position of inspecting the final product and preventing bad parts from being released. Software development doesn't work that way, and "replacing bad components" is a lot more complex than changing out a malfunctioning part in a car.

While defect remediation processes vary across different organizations and sometimes within the same organization, there is a general process that usually happens when a tester finds a defect. The process is typically something along the lines of:

  1. Report the defect, whether in a bug tracking tool or by notifying a developer or project manager.

  2. Describe or explain the defect. This is typically a detailed description in a tracking tool, and includes information such as the likely consequences of not fixing the defect, the tester's estimated risk of the defect occurring in production (which can be anything from a reasonable estimate based on known facts to a SWAG, a "scientific wild-a$$-guess"), reproduction information, and any other evidence the tester feels is needed.

  3. Sometimes developers will do extra research and add their estimation of consequences and cost to fix.

  4. Someone, often a triage team or a product or project manager, reviews the defect and decides how important it is to fix. Depending on the organization this may or may not involve input from a senior developer or a tester.

  5. If the defect is considered important enough, it gets fixed. How soon it gets fixed will depend on how important it is to the organization. If other programming tasks have a higher priority, it could be years before the defect is scheduled for correction.

Personal experience:

In my current workplace, non-critical bugs often take five to six months to be scheduled. I've lost count of how many low-priority bugs I've seen scheduled two or more years after they were reported. I am currently testing a partial fix for a bug I reported two and a half years ago - and that is a fix for a security vulnerability. It's taken this long because the application I work with is massive and there are only four programmers plus me to maintain it. It's understandable that bugs can take a long time to fix in this situation, but it's still frustrating.

Testers, like developers, typically don't decide how important a fix is to the organization. We can say how important we believe a fix should be, but the decision could be made further up the management chain.

Enough situations where a customer re-reports a bug the team already knew about can help testers and developers get bug advocacy taken more seriously. One practice I've found helpful is to mention which customers would be affected by a particular bug, starting with the largest and/or most influential.

Personal experience:

The desire to perform some version of the "neener neener told you so" dance can be very strong when a defect that was dismissed as unimportant or "the customer will never do that" causes havoc because the customer did "that". 

On one occasion I was testing a massive new project for which the primary customer was being provided regular beta builds. I'd found a problem where a specific set of actions could cause a data pump to fail with a referential integrity error. This problem was dismissed with the words "the customer will never do that".

Two days later, during their beta testing, the customer managed to cause a data pump failure. I talked them through fixing their data, described what had happened and why, and let them know we had observed the problem but the team hadn't had time to schedule a correction.

While my work did not fix the issue, it did allow the customer to continue with their beta testing and gave them a workaround they could use to prevent the problem recurring. The event also taught me to mention which high-profile customers could potentially be affected by bugs I found when I created my reports. 

As a general rule, the most successful bug advocacy I've found uses a combination of techniques, including:

  • Stating which high-profile customers (if any) are likely to be affected by the bug,
  • What workaround(s) I've found, if any, 
  • How much extra effort the workaround is, 
  • Whether and how much the bug impacts customer data, 
  • How easy/difficult it is to induce the bug, 
  • If necessary, demonstrating the bug in action whether personally or by way of screen capture videos. 

All of this information can be included in a bug report.

If there is still disagreement over whether the bug should be fixed, or over whether it should be prioritized as "must fix immediately", the next step is to involve the team leads or managers, using the information in the bug report. The key thing to remember when escalating bug advocacy is that the leads or managers have a different perspective than the team, and may decide that the bug is not one that needs to be fixed immediately. When this happens, remember the reasons they mention, and use that information as the criteria for escalating the next time there is an important-to-you defect.

4: Teams Don't Decide What Gets Fixed

Whether a bug gets fixed can depend on a lot of decisions. Many testers can recognize when a bug they report is not going to be fixed any time in the near future. When this happens, testers make the report to ensure the bug is part of the team knowledge base about the software. 

Some of the factors affecting whether a bug is fixed include:

  • Whether it can be reproduced consistently. Intermittent bugs can be very challenging to track down. If they don't happen often, or if the impact isn't all that significant, an intermittent bug could well be prioritized so low that it never gets fixed.

  • Whether the time that would be needed to fix the problem is worth taking, or worth the expense. This is particularly common with intermittent edge cases where there is a relatively easy workaround. I've also seen bugs that crash the system go unaddressed because they would be too hard to fix and they happen so rarely that it's unlikely customers will encounter them.

  • Other priorities and how important the bug is by comparison. I've seen high priority bugs left unfixed for months because there were too many critical priority tasks which needed to be done before anything high priority could be considered.

In my workplace, after a defect is reported it is investigated by the team. The decision of whether or not to fix is usually made by a manager after consulting with stakeholders and the team. It's not rare for the fix/do not fix decision to have nothing to do with whether the reported defect is actually caused by flawed code. 

Personal experience:

In my career, I have seen customers report bugs for cases such as:

  • Known issues with operating systems or browsers,
  • Problems caused by running the software in an unsupported environment,
  • Running evaluation-only betas in a live environment - despite warnings that the software was not ready for production,
  • Features the customers knew the software did not have,
  • And many other issues that were not in any way a problem with the software.

While the team can advocate for a problem to be fixed, as I mentioned in point 3, the decision on whether a problem will be fixed is usually out of a tester’s hands.

5: Testers Can't Test Everything

In any non-trivial software, there are effectively infinite possible paths through the system. I think of pathways through software as more or less equivalent to a street map. It's possible to know every street in the town, but there's no way to map every single way to get from one address to another. There's really nothing to stop someone looping around a block any number of times, or from taking strange detours. The same could be said of most modern software packages. 

Testers typically try to decide what the most critical paths are going to be and test those as much as possible. There are always time constraints, so the less critical paths are not going to be as thoroughly covered as the most critical paths.

In most applications, there are too many potential paths to test all of them. In addition, testers need to consider all of the following possible combinations:

  • The number of configuration options and how they combine
  • The number of potential combinations of
    • Hardware
    • Operating system
    • Other software in use
    • Resource constraints

Because there are so many potential ways for the software in test to interact with the user and its environment, most testers will use some form of risk-based evaluation to cover the areas with the highest risk first.

Some of the tools and techniques that can be used to deal with this and gain the most effective coverage in the time available include risk analysis, to focus on the most likely problems in the software; exploratory testing, using a combination of charters to determine what aspects of the software are being used; personas, to mimic expected user behavior; and combinatorial test tools, which generate a set of configurations that cover most of the different configuration settings without having to test every possible combination (a rough sketch of the combinatorial idea follows below).
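
As an illustration of that combinatorial idea, here is a minimal sketch in Python. The configuration dimensions are made up, and real combinatorial test tools use far more sophisticated algorithms: a naive greedy pass picks a small set of configurations that still covers every pair of option values.

```python
# Illustrative sketch only: a naive greedy "all-pairs" reduction over made-up
# configuration dimensions. Real combinatorial test tools do this more efficiently.
from itertools import combinations, product

options = {
    "os": ["Windows", "macOS", "Linux"],
    "browser": ["Chrome", "Firefox", "Edge"],
    "database": ["PostgreSQL", "SQL Server"],
    "locale": ["en-US", "de-DE", "ja-JP"],
}

names = list(options)
all_configs = [dict(zip(names, values)) for values in product(*options.values())]

# Every (parameter, value) pair across two different parameters that must be covered.
uncovered = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va, vb in product(options[a], options[b])}

suite = []
while uncovered:
    # Pick the configuration that covers the most still-uncovered pairs.
    def gain(cfg):
        return sum(1 for (a, va), (b, vb) in uncovered
                   if cfg[a] == va and cfg[b] == vb)
    best = max(all_configs, key=gain)
    suite.append(best)
    uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                 if not (best[a] == va and best[b] == vb)}

print(f"Exhaustive configurations: {len(all_configs)}")  # 3 * 3 * 2 * 3 = 54
print(f"Pairwise-covering suite:   {len(suite)}")        # typically around a dozen
```

Covering every pair of settings rather than every combination is usually a reasonable trade-off, because a large share of configuration-related defects are triggered by the interaction of just two settings.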

Using one or more of these tools or techniques will help to find areas of the software which need the most attention, improve test coverage, and reduce customer reported defects. More importantly, they will help to reduce the number of highly visible, critical bugs which could reach the customer. 

6: Nobody Can Predict What Customers Expect

Even though everyone on the team tries to act as a proxy for customers in terms of the user experience, unless the team is using what they are building, they won't be able to accurately predict customer expectations. Customers are going to have diverse needs and use the software in different ways. As a result, defects are likely to fall into three general classes:

  • Stealth feature requests/enhancement requests. The usual cause of this kind of defect report is miscommunication. Somewhere along the communications chain, customers and users built a different idea of what the application was meant to do than the development team did.

  • Defects as feedback. This kind of defect is likely to happen when customers and end users have different ideas of what the software should be doing. I've also seen it happen when a design decision is forced by a high-ranking person outside the usual team.

Personal experience:

A new C-level executive at my employer noticed a lot of inefficiencies in the way internal users handled our software suite. He decided that the system needed to change to create a smoother experience for the people who needed to maintain multiple accounts on the system (perhaps 5% of our user base). When the changes finally rolled out, after a 12-month delay and a great many problems, most of our users loathed the change. Over two years later we're still dealing with some of the fallout.

  • Actual defects. 

A good team will understand most customer expectations, especially the most common ones. When customers and users outnumber team members, it's simply not reasonable for any tester or test team to cover everything. A risk-based approach and knowing the demands of the largest, or most important, customers can help to minimize customer disappointment.

When it comes to features requested by customers, whether filtered through project managers or not, testers and developers also have to contend with the problem of unknown unknowns. Sometimes a problem that's reported isn't really a defect at all. Even with the best communication and requirements gathering, it's not uncommon for a customer to find that the problem (bug) they thought they had is a different problem entirely. A tale I was told during my software engineering degree comes to mind:

A high-rise building management group complained to their elevator contractors that the lifts were too slow and their tenants were complaining to them about having to wait too long for the next lift. The contractors sent a team to review the problem and come back with some solutions. After spending a day watching people waiting for the elevators and monitoring the elevator response times, they recommended a simple solution. 

The solution? Putting mirrors in the elevator lobbies. Complaints stopped because the people waiting used the mirrors to check and fix their appearance. The wait times weren't excessive but because the lift users had nothing else to do, they perceived the wait times to be longer than they actually were.

This story isn't just a prime example of a customer's needs not being precisely what the customer believes them to be; it's also a pretty good example of a defect report that really isn't a defect.

Personal experience:

One customer was notoriously change-averse, to the extent that they would raise defect reports when we changed our default font decoration from regular to bold so it would be more visible. When they started user acceptance testing for a major version upgrade, they learned that we had corrected a defect where, in certain circumstances, a value would be recorded as a debit instead of a credit in the general ledger. We were required to provide them with a freshly "broken" version of the software that maintained the incorrect behavior, because they did not have the extra budget to reprogram their custom data pump from our system to their accounting system.

The simple fact is that we aren't necessarily aware of what customers are doing with the information and software we provide them, so when we change and add to our software's feature set, we may not know what the downstream effects will be to our customers. All we can say is that there's a non-zero chance our customers will report those downstream effects to us as defects.

Collaborating with customers can help to minimize issues like this. Asking customers if we can cooperate with them for their user experience or user acceptance testing, or building personas around customers' needs, can help reduce communications misfires where changes can cause problems for customers.

7: Teams Don't Decide What To Release

The decision on what features or bug fixes go into a particular deployment is typically made at the business level. Developers and testers may have some input into whether lower-priority changes are ready for release, but they are rarely the people who make the final call. In an environment with continuous integration/continuous deployment, the release decision can be replaced by the decision of when to enable the feature in question and which, or how many, users get to see it.
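
As a sketch of how that "which or how many users" decision is often implemented (a hypothetical example: the feature name and percentage are invented, and real feature-flag services layer targeting rules on top of this), a deterministic hash can bucket users into a gradual rollout:

```python
# Illustrative sketch only: a deterministic percentage rollout check, so the same
# user consistently sees (or doesn't see) a feature while it is gradually enabled.
# The feature name and rollout percentage below are hypothetical.
import hashlib


def feature_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Bucket the user into 0-99 based on a stable hash of feature + user."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent


# Example: show the new portal to roughly 10% of users.
print(feature_enabled("new-portal", "user-12345", rollout_percent=10))
```

The business side can then dial the percentage up or down without another deployment, which is exactly why the release decision shifts from "when do we ship?" to "when do we enable it, and for whom?".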

The features and bug fixes going into a particular release can depend on factors such as:

  • Whether the feature/fix has a contracted delivery date. I have seen more than one occasion where an evaluation beta has been released to a customer because of a contracted delivery date. The beta was just barely enough to meet contract requirements but was far from ready for the customer to use in production.

  • Other business priorities. Sometimes a business need elsewhere can force feature release at an inconvenient time for the development and test team. This can include top level management promising a feature to one or more customers on a specific date without consulting the team. I have seen this happen more than once and it isn't a pleasant experience. 

    • One such manager learned rather quickly that when the team gave a time estimate we were pretty accurate. It wasn't the most pleasant learning experience, but the team learned to be more assertive with our managers and negotiate for a minimum viable product in situations where something had to be released quickly.

  • Other contracted dates which would be harmed by not releasing the features now. This doesn't happen nearly as often, but it can occur where team resources are tightly constrained, or where the current feature set is a prerequisite to contracted items. 

  • Which team members are available when. In a small team or one without much cross-training, one person's illness can cause software to fall behind planned release dates. I've seen this happen all too often, particularly with legacy software.

    • My current team has an active program to cross-train everyone in the team so everyone has a backup for every task. Our goal is that every task has at least two designated subject matter experts who consider themselves at least 90% capable of handling the task. We schedule knowledge transfer activities each sprint and allocate up to 5 hours per sprint for it. As a result, while we might be slowed down when the primary expert is out, our team can keep moving forward.

What is being released at any given time is usually decided by some combination of:

  • High level managers wanting to meet business goals. They may not be aware of the constraints development and test teams are under. Or they may not realize that in many cases, software development can have hidden complexities causing changes to be far more time-consuming than might be expected.

  • Product managers, who often work with customers and want to please those customers. I've seen a product manager's eagerness to please customers lead to partially engineering a time-consuming solution where a simpler change could have met the need behind the original request.

  • Other managers who may require changes to meet their own deadlines. This can occur when working with other departments in the company. I've seen many situations where another department using the software needs a specific change by a certain date due to their own requirements. 

    • I work with payroll software, so it's very common for me to work with deadlines driven by Government due dates. We often need to make changes for our Tax Filing department so they can ensure our customers using their services have the correct information. 

In my experience, it's very rare for testers or developers to make the call on what should be made available to customers, and when, even in continuous integration and continuous deployment environments. With good communication between team members and decision-makers, though, the risk of deploying something that isn't ready for customers is much smaller.

8: Teams Don't Decide When To Release

Much like the decision of what changes, features, or bug fixes are being released, team members rarely, if ever, control the decision of when changes are released. The role of the tester is not to stand at a mythical quality gate proclaiming "You shall not pass!".

It's much better to inform our team leaders of the problems we know about and any risks we're aware of - particularly the risks of known problems and our estimated risk of not testing a change the team is concerned about. The team is responsible for providing information that the people who make the decisions can use.

The release decisions are made for reasons that include but are not limited to the state of the feature as testers and developers have assessed it. Development team members typically don't have enough insight into business decision-making processes to be able to decide whether or not a change can, or must, be released. In short, it's all about influence.

9: Quality Is Everyone's Job

Despite testers often being labeled as quality assurance or quality analysts (I personally prefer quality advocate), software quality isn't solely the tester's responsibility. Everyone on the team, including project and product managers, carries the responsibility to do their best to produce high-quality software.

What constitutes high quality for a particular organization and product will not be the same as what is considered high quality for other products. Since each application someone uses has a particular purpose, customers can reasonably expect the attributes of each application, including its quality, to be appropriate to its purpose. 

With everyone on the team working together to make the software as good as it can be within the constraints the team is under, fewer obvious bugs will reach the testers. As a result, testers will have more time to investigate the software for less obvious bugs. That, in turn, means that the software will be likely to have fewer and less severe bugs than software developed in an environment where testers are believed to have sole responsibility for quality.

10: Team Members Are Human

Everyone in a software project works hard to help create software that is as good as it can be when it reaches customers. Nobody can make the software perfect: there will be code errors, logic errors, and things that simply don't do what the customer expected.

The fact is that everyone involved in creating software is human. Everyone involved in using software is human. Humans make mistakes. This is something that's so obvious people tend to forget it, particularly when they have high expectations of something.

There's an old joke that, as far as I can tell, originated in a comment by the architect Frank Lloyd Wright: "A doctor can bury his mistakes, but an architect can only advise his client to plant vines." Testers don't have that comfort: as with the rest of a software team, our mistakes have a tendency to be in customers' faces.

A culture of forgiveness and reflection, where the whole team works to determine what caused the problem and builds ways of preventing similar issues in the future, goes a long way towards improving the software, team morale, and, if the customer is included in that culture as they should be, the customer's perception of the company and the quality of the software.

So Why Are There Complaints About Testers?

In my experience, if the software works well (or well enough to satisfy customers), testers are rarely thanked for their efforts in finding problems and advocating for the problems to be fixed. But when problems arise, testers are often blamed for not finding them or even for not fixing them. I sincerely hope that I am in a small minority here. 

Personal experience:

After one particularly unfortunate release, a long-term customer who had paid for a great deal of customization work over the years wrote to the company president offering to send the test team to get a remedial high school certificate because obviously we couldn't read. The team started investigating, and quickly found that the actual reason for the bugs we missed was that we had no idea how that customer configured a specific feature and their configuration was unique to that customer. As a result, we had never seen the problems the customer found so infuriating and obvious.

We arranged for the customer to send us their configuration data so we could have a permanent setup that matched theirs, which prevented future problems like this. We could only do this for that customer because they were both one of our largest customers and one of our oldest. The extra time and effort to check new features against this customer's dataset would not have been worthwhile if the customer had not been so large. 

The only reason for people to complain about testers that I can see is that testers are usually the last part of the software "supply chain" in the sense that testers interact with the software after it has been coded. For many people, the last to interact is the first to be blamed when something goes wrong, no matter how irrational that reaction is. Many people still believe that testing is something that happens at the very end of software development, that testing is something performed by following detailed scripts, or that testing can somehow inject quality into software. 

Similarly, many people think that QA is all about not letting software out of the gate unless it's "good enough", a perception that's unfortunately built into the very idea of quality assurance in software.

It's human nature to react this way to unexpected problems, so the best option for the organization is to communicate with customers as much as possible so that they are aware of potential issues as early as possible. For the team, the best option is to communicate with each other and their management, to engage in programming and testing practices that help to improve the overall quality of the software, and to learn to advocate for issues to be fixed as soon as is reasonably possible. 

We testers need to continue to advocate for the truth of our profession. The more we can do to build a quality culture and encourage whole-team testing, the more our team-mates will understand what we do best - and the more the team will be able to prevent defects.

We may still get blamed for defects, but knowing that we've done the best we can to minimize any problems, and to inform customers of anything that couldn't get fixed in the time frame we had, allows us to accept that sometimes things won't go as well as we want.


Author Bio

Kate Paulk refers to herself as a chaos magnet, because if software is going to go wrong, it will go wrong for her. She stumbles over edge cases without trying, accidentally summons demonic entities, and is a shameless geek girl of the science fiction and fantasy variety, with a strange sense of humor.

In what she ironically refers to as her free time, she writes. Novels, short stories, and articles for the Dojo.  

Twitter: katepaulk (although I almost never use it)
