When Testers Deal With Process Debt: Ideas to Manage It And Get Back To Testing Faster

by Melissa Eaden

Technical debt is anything related, directly or indirectly, to the software under development that is no longer useful to the development process or to the maintenance of the application.

Process debt is a kind of technical debt generated when processes are poor, or lacking altogether, for handling things like defects, documentation, or even test cases.

Technical debt of some kind will always exist. Some organizations fight the good fight and try to keep it at bay; others have "I.O.U." plastered in various places as a reminder to "clean up failed unit tests", "redo documentation", "update dependencies for the application" or "refactor old code." These things continue to be pushed down the backlog while more pressing business stories are addressed.

As testers, we should sympathize with developers' struggles to pay down tech debt, even minor tech debt. Testers, along with their software development partners, tend to advocate for cleaning up technical debt out of concern for quality. One area testers usually end up dealing with themselves is process debt. It eats away at time better spent testing the software, much as technical debt eats away at development time.

Warning Signs Of Process Debt

  • Is a significant amount of time spent managing documents? 
  • Does it feel like the defect backlog is a time sink for you and everyone else on the team? 
  • Do you find that your test case management system is full of test cases that have little value? 
  • Are the specifications or acceptance criteria (and the location of them) for what the team is developing unclear? Do they change depending on the person or group you ask?

This is a small fraction of non-code technical debt, or process debt. Process debt can slowly creep up on any organization. The goal of this article is to give testers tips and ideas they can use, or collaborate on with their development groups, to gain back valuable software development time for testing.

"This behaviour is often likened to the sign that says "everyone does their own dishes" at shared kitchens, but usually there's a pile of dishes piling up anyway because people always find a reason not to do them right now. Until there's this one person that actually does it for everyone. Very often, on a development team, this person is a tester."  - Beren Van Daele

Making Defects Work For Software Development

Defects can be useful: they point to places in the code where improvements can be made. Take ants as an example. They are amazing builders; however, they can also become very destructive when agitated. Without a good plan in place to handle defects, an organization can be overwhelmed by them, like a swarm of angry ants after a nest has been disturbed.

Implementing a Defect Management Plan should be a priority for any software development organization. While most organizations do have a DMP, many don't specify what to do with aged defects.

Aged Defects are any defects which are not resolved in an appropriate amount of time. 

Defects do not get better with age, especially those related to code or to user workflows. They are a kind of code rot, which can cause even more problems over time. Making sure a DMP has a defined way of handling aged defects keeps everyone focused on the more important issues and gives other departments a way to gracefully explain to stakeholders or customers why a defect has been closed.

If there isn't a plan currently in place, or the plan lacks key details, working in changes over a period of time can help turn a defect backlog around. An organization might be tempted to drop one defect management tool and switch to another to accomplish this. Be wary of anything that drastic: it's rarely the tool causing the issues the organization is encountering. If you replace the tool without a good plan for handling defects, the problem is likely to come back once folks in the organization have adapted to the new tool.

Example of a Good Defect Management Plan

This is a list of ideas which could help manage a defect backlog which has spiraled out of control. Getting your department or organization to opt in to a defect plan might be the hardest part. If you already have a plan in place but problems still pop up, see if there is room for improvement. Share what you have learned about managing defects with others outside your organization, either by writing about it or by leaving a comment on this article. Here are a few good places to start with a proper defect management plan.

  • A triage team, consisting of (but not limited to) a tester, a developer, and a project manager or business analyst. Ideally this team meets monthly or quarterly to review defects. If defects aren't already divided up by department or business segment, make sure stakeholders from those areas are invited when defects which concern them come in for analysis. Getting an opinion from a DevOps or platform person on issues in their domain is a good idea.
  • Have a good pipeline for getting defects from customers and/or production into the development backlog. Whether they are enhancements or actual issues, addressing them in a timely manner helps both the application's reputation and customer support's. Someone could be designated to understand customer priorities and help prioritize customer service defects. This person could be a BA, though I've often seen a QA manager or QA lead take on the role and act as the customer service representative in the triage meeting. However it happens, make sure there is a good process in place to keep communication open.
  • Have a defined set of criteria for any defect the triage team handles for analysis. When an organization hasn't looked at low priority defects for months or maybe even years, defining an SLA (Service Level Agreement) for defect management is a very good idea. 

Examples Of Possible Service Level Agreement Criteria:

  • Any defect without complete information: close it, or return it to the originator with a note to that effect.
  • Any defect which hasn't been looked at or commented on in over six months: close it with a comment that it won't be fixed.
  • Any defect which has been assigned to more than three people without resolution: escalate it immediately to the project owner for resolution or backlog status.
  • Duplicate defects: close them immediately and refer to the parent defect.
  • Any defect related to a deprecated part of the application: close it.
  • Any pre-release defect related to a release pushed to production six months ago or more: close it.
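As a rough illustration, rules like these can be encoded as simple filters over a defect export and run before each triage meeting. Here is a minimal Python sketch; the field names (`last_activity`, `assignee_count`, and so on) are hypothetical stand-ins for whatever your tracker actually exports:

```python
from datetime import datetime, timedelta

SIX_MONTHS = timedelta(days=183)

def triage(defect, today):
    """Suggest an (action, reason) pair for one defect record.

    Field names are hypothetical; map them to your own tracker's
    export before using anything like this for real.
    """
    if not defect.get("description") or not defect.get("steps"):
        return ("return_to_originator", "incomplete information")
    if defect.get("duplicate_of"):
        return ("close", "duplicate of " + defect["duplicate_of"])
    if defect.get("deprecated_area"):
        return ("close", "relates to a deprecated part of the application")
    if today - defect["last_activity"] > SIX_MONTHS:
        return ("close", "no activity in over six months - will not fix")
    if defect.get("assignee_count", 0) > 3:
        return ("escalate", "assigned more than three times without resolution")
    return ("keep", "still active and relevant")
```

Run over an exported backlog, a filter like this produces a worklist for the triage team to confirm or override. The point is that the SLA is written down and applied consistently, not that a script decides anything on its own.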

Sometimes, much like with emails lingering in an inbox, filtering out the oldest and least important items makes the inbox relevant again. An organization might stipulate a certain time period in which this can happen, but often closing an old defect with "Will Not Fix - Over Six Months Old" is either a welcome sight for some or an alarm bell for those who weren't paying attention.

No Defect Policy

Some organizations have taken an extreme view: eliminating defects, and the backlog associated with them, altogether. This no-defect policy can certainly be used while software is in development. If the application is in production, working, and has customers, the policy is tougher to implement, but it can be done.

Here are a few guidelines for implementing a no-defect-policy:

  • A defect: resolve it ASAP.
  • An oversight: create a new story and put it in the backlog to be prioritized.
  • A thing of no importance: remove it from existence.

The mission of any triage team should be to see that any defect remains relevant to the process. If it is irrelevant, update it. If it is broken, fix it. If it is useless, close it and comment why. Defects should not linger in a backlog for more than a few days with this process.

Bonus Tip: If defect management is handled by project management, but it's obvious there are problems, share these ideas with that part of your organization. It might make things easier for everyone, and they might thank you in the process.

Tips For Cleaning Up Test Cases 

Test case management systems can become a dumping ground for useless test cases which look nice for metrics. Testers, though, immediately know how useful a test case is by how often it's actually used.

Hoarding test cases only causes more confusion in the long run. Eventually someone will look at the problem and wonder why the test case management tool isn't more useful. Avoiding that scenario and keeping the test case strategy relevant and valuable should be the priority of the tester(s) and management.

Research and Analysis Of Test Cases

Use this list to help you identify tests which need to be updated, archived or deleted:

  • The date of the last time a suite or individual test case was run
    • Use the date to determine whether the test case needs updating or archiving
  • The date of the last time the test case was run manually
    • If the test case is automated, make sure it's marked to note that
    • If the automation is no longer running, or no longer covers the test case, archive the test case
    • If the automation was changed but the test case wasn't, update the test case to match the automation
  • The average number of steps per test case
    • Many test cases with a lot of steps and multiple runs within a cycle could mean testers are skipping irrelevant steps, and could signal that the test case(s) are useless
  • The number of prioritized test cases per category, like Critical, High, Low, and Rarely
    • This can help prioritize which test cases run during a release, and can identify test cases which are no longer needed in the test case management system
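The criteria above lend themselves to a quick scripted audit over an export from the test case management tool. A sketch, again with made-up field names (`last_run`, `automated`, `automation_active`, `steps`):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=365)  # tune to your release cadence

def audit(case, today):
    """Suggest what to do with one test-case record.

    Field names are illustrative, not from any particular tool.
    """
    if case.get("automated") and not case.get("automation_active"):
        return "archive"  # the automation no longer runs or covers it
    if today - case["last_run"] > STALE_AFTER:
        return "update-or-archive"  # hasn't been run in a long time
    if len(case.get("steps", [])) > 20:
        return "simplify"  # long cases invite skipped steps
    return "keep"
```

A pass like this doesn't replace the team review below; it just produces the shortlist of cases worth discussing.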

Sharing The Upkeep Pain

Maintaining test cases can be a pain. Here are a few tips which could help a team maintain test cases and show value.

  • Work with team members to review test cases by section or by feature possibly monthly or quarterly. 
  • Eliminate duplicates.
  • Prioritize test cases to be automated.
  • Simplify any which seem overly complicated for what they are covering. 
  • Experiment with different test case templates to help simplify test cases.
  • Encourage anyone to delete or archive test cases which no longer work or make sense.
  • If maintaining a record of executed test cases is what keeps a test case from being updated or deleted, export that information after a period of time. This preserves the record while allowing changes to the test cases without loss of information.

For A Team Of One

If you are the only tester on your team, make it easier on yourself to maintain your testing documentation. If you need to show a pass/fail status or keep documentation, advocate for switching to a note-taking tool which lets you write notes as you are testing, like Rapid Reporter.

Simple checklists, mind maps, or a creative visual method outlining the business acceptance criteria or functionality can keep things lightweight enough for one tester to maintain, while still communicating testing efforts to the business, or even to the next tester who takes your place when you move on.

Alternative To Test Cases

Unless there are regulatory reasons to keep test cases, moving away from them to a less prescriptive style of testing might work well for your team. Exploratory testing, session-based testing, scenario testing, or charters could be better options for maintaining testing information and visibility into the testing process. How flexible you can be in your testing style will depend on the quality of the code produced and the maturity of the organization. These styles advocate lightweight testing documentation, using or reusing what's already available from the product owners, such as acceptance criteria or business rules.

Consolidating And Maintaining Documentation

A source-of-truth is one centralized location or repository for all up-to-date information available on an application. Versions could exist in other locations, but the one valid, timely version should always be located in a centralized, accessible, organizational shared source.

Having too many tools to reference for workflows, wireframes, business acceptance criteria, etc. can be a nightmare for anyone trying to test an application with what they think is valid information. If an application is large and handled by several departments, out-of-date information can quickly cause havoc. Advocate for keeping any relevant documentation in a designated source-of-truth everyone can agree on. Also advocate for keeping those documents up-to-date when stories come through the pipeline. The following examples are some ways to maintain good practices with various source-of-truth methods.

  • Create a tagging plan for documentation which can be used as keywords for search purposes. 
  • Create and organize documentation with organizational structure of the business in mind. When departments change or get renamed, have an idea or plan in place to update, move or archive the documents associated with that department.
  • Create meaningful documentation. Empty documents, or documents with just headers, don't serve a useful purpose for anyone. Eliminate anything which isn't useful.
  • Designate a person on each team, department, or organization to maintain and update the source-of-truth. This assignment can stay with one person or be passed from person to person in a designated time period like a sprint, or a release, or every quarter. Even better, if a document needs to be updated with a story, make it a ToDo for someone in that story. Better to update documentation when it's relevant than later when someone discovers the documentation isn't up-to-date.
  • Don't be afraid to archive any documentation which no longer serves a purpose. Be polite: give the folks who maintain their pages the option of keeping a page around or putting an "evergreen" status on it.
  • Transparent and accessible documentation is the key purpose of the source-of-truth. Make sure it's available to everyone regardless of position in the company. If security around information is an issue, perhaps it can be handled via an NDA. Restricting access to the source-of-truth serves little purpose. Documents which are unfinished or located somewhere other than the designated source-of-truth should not be considered valid.
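The tagging idea in the first bullet above can start as simply as a keyword index kept alongside the documents. A toy Python sketch; the document names and tags are invented for illustration:

```python
# A toy keyword index for a documentation source-of-truth.
# Document names and tags are invented for illustration.
index = {
    "checkout-flow.md": {"payments", "workflow", "acceptance-criteria"},
    "signup-wireframes.md": {"onboarding", "wireframe"},
    "payments-api.md": {"payments", "api"},
}

def find(tag):
    """Return the documents carrying a given tag, in a stable order."""
    return sorted(name for name, tags in index.items() if tag in tags)
```

For example, `find("payments")` returns both payments-related documents. Most wiki and documentation tools offer labels or tags natively; the value is in agreeing on the vocabulary, not in the tooling.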

Gist/Git And README Files

A new trend based on an old concept is using README files, or other documentation which travels with and lives alongside the code. This makes the whole organization responsible for updating and maintaining directions, requirements, and even feature details along with the code base.

It works well for smaller applications, open source code, and start-up companies. Companies adopting this style of documentation tend to have organizations which are highly technical and have good training tools in place to use this style of maintaining documents. 

Implementing this style on a team level could also be ideal for maintaining application requirements, setup, best coding practices, and testing documentation. 

"My favorite was when I did an audit of our nightly unit test run and pointed out the exact group of tests that were taking the most time and asking the dev team why and what we can do about it. Afaik that report is still not being looked at and that question hasn't been answered." - (Contributor from Slack)

The Not So Quick Fix

If a company is suggesting a tool change, the helpful alternative to this kind of disruptive change is to figure out what's wrong with the current tool and the process around it. If changes can make the current tool more useful for everyone, that is the better option. However, sometimes management changes, or issues outside a team's control, can trigger a tool change anyway.

If an organization decides to change tools to better centralize documentation, then notifying and getting buy-in from all the parties using the original tool should be the priority along with planning a migration to the new tool. 

If you have to maintain multiple tools for documentation due to security concerns, business concerns, or just good old office politics, then cross-referencing between tools where necessary should be a requirement for any documentation created. Make sure to share the pain of cross-referencing loud and often. When there are too many tools to reference, team members become overwhelmed and simply ignore the tools altogether. That restarts the cycle which probably triggered the desire to change tools in the first place.

The Devil Is In the Details: Automation

If your organization isn't maintaining or evaluating your automation code the way it does the application code, you already have a problem that needs to be resolved. If you are already maintaining it with, and like, the application code base, then keeping it updated and maintained is the next hardest part of keeping automated testing relevant. Here are a few good ideas to keep automation test cases relevant.


Let everyone maintain automation 

Developers, testers, and project managers: basically, anyone who has access to the code base and the automation, and who knows how to correctly update a failing test, should do so. Or pair with someone who does.

Having an automation lead might be a good idea

Leads can pair with anyone to fix a test case and they can help maintain the framework and do code reviews for those adding tests to the automation suite.

Make sure your automation has clear naming conventions and failure messages

Naming conventions should make it easy to match test names with the functions being called and with the name of the test case in the automation stack. If the automation mirrors a manual test case, that relationship should be easy to see as well.

Failure messages should say exactly which test case failed and at what line. If you are the only one who can understand where it failed, you've already created technical debt in your automation.
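To make the point concrete, here is a small self-contained sketch in the style of a Python test. The test name states the behaviour being checked, and the assertion message says exactly what went wrong; the `login` stub is a hypothetical stand-in for a real system under test.

```python
from collections import namedtuple

LoginResult = namedtuple("LoginResult", "status reason")

def login(user, password):
    """Stand-in for the system under test (illustrative only)."""
    if password.startswith("expired"):
        return LoginResult("rejected", "password expired")
    return LoginResult("accepted", None)

def test_login_rejects_expired_password():
    # The name ties the check to a behaviour (and to any mirrored manual
    # test case); the message means a red build needs no detective work.
    result = login(user="alice", password="expired-secret")
    assert result.status == "rejected", (
        "expected expired-password login to be rejected, got %r" % result.status
    )

test_login_rejects_expired_password()
```

Compare a failure report of `test_login_rejects_expired_password: expected expired-password login to be rejected, got 'accepted'` with one that just says `test_42 failed`; only the first can be triaged without opening the code.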

Maintain and update supporting services

Example: if you have a test database the application uses for testing purposes, make sure someone has a plan to update and maintain it. Any test database should mirror production.

Micro-services: Any service production uses should have a test-side version. Help keep those up-to-date along with the automation. Automation should cover these services' interactions with the application, whether through unit tests or integration tests; anything not covered is a weak spot in your automation strategy and automatic technical debt.

Functional Libraries: These don't maintain themselves unless the framework is designed to keep them on the latest versions. Often libraries are updated with the OS or browsers they support. Automation leads should help keep the supporting function libraries up-to-date. If the automation is tied into a code base, half the work is already done.

Equipment, Licenses, and Tools

All development organizations have a kind of technical debt which is hidden in plain sight. Equipment, licenses for software and unused software or hardware tools should be maintained in partnership with DevOps, the internal IT organization, Customer Service and the Software Development team(s). Creating a strike team to review equipment, software needs, requirements, support and licenses every quarter is one of the best ways to handle this kind of technical debt. 


Keeping track of devices, researching when devices fall out of common use, and having a way of recycling or selling them can help save the organization a lot of money in the long run and keep development teams happy. A cloud service might be the best option for access to a lot of devices without the need to maintain them. Using emulators for the majority of development and testing work is also a way of keeping down the cost and maintenance of physical devices. Implementing a solution which uses a combination of these methods could be the best option.


Maintaining a list of licenses, with expiration dates, for software critical to the production environment is a great idea, but if you never look at the list, you may never know when those dates come up. Create a calendar with reminders for when to start the renewal process for a software license and for when that license actually expires. It only takes one expired critical license for management to start wondering why production went offline.
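Even a few lines of script can stand in for that calendar until a real one exists. A sketch, with an invented license registry and an assumed two-month renewal lead time:

```python
from datetime import date, timedelta

LEAD_TIME = timedelta(days=60)  # start the renewal process two months out

# Invented registry; replace with your organization's real inventory.
LICENSES = {
    "CI build server": date(2018, 3, 1),
    "Load-testing tool": date(2018, 9, 15),
}

def renewals_due(today, registry=LICENSES):
    """Names of licenses whose renewal window has opened, or that have lapsed."""
    return sorted(name for name, expires in registry.items()
                  if today >= expires - LEAD_TIME)
```

Run on a schedule and wired to email or chat, even something this small keeps the expiry dates from living only in one person's head.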


Keep your software support documentation up-to-date for development, support, and the customers. When new browsers, devices, plug-ins, third-party applications, and OSes are released, it's important to decide in a timely manner what will be supported and when. Making it the mission of a strike team to coordinate evaluations and updates can help an organization adapt faster when the technology ecosystem changes.

Other Things Which Add To Non-code Technical Debt

Here is a quick list of things which should also have some kind of management plan, or at least a way for a software development group to handle them effectively.

  • How to deprecate or retire any third party tools which no longer have support.
    • These could be tools inside the application or supporting the application development process.
  • Delaying or Skipping Non-Functional Testing 
    • Ex: Load, Performance, Stress, Security, Cross-browser

Sunken Cost Fallacy

The Sunken Cost Fallacy is being afraid to get rid of, or change, anything that doesn't really work. This can fall under the "What if we need it again!?" (in some obscure, far-flung scenario) or the "It took so long to build and maintain, we can't throw it away!" mentality, which doesn't let a testing organization move forward or innovate with better ways of doing things. Possibly the worst one is "We spent a lot of money on X. You have to use it even if it doesn't work." Often when this happens, it was management choosing tools without input from the people who would be using them.

Working through these issues can turn into a full time job all by itself. There are specific roles in companies that deal with issues of this nature around how the whole organization works. Change is not easy. However, sometimes it's necessary to reach a goal. 

Changing anything that doesn't work can take time, patience, and a lot of diplomacy to make the change effective and beneficial. Being an advocate for process changes can feel like a lonely and very vulnerable spot. Often, roadblocks to a better system, or to any change, can be overcome by listening to those in opposition and discovering their concerns. Addressing those concerns can get an advocate more buy-in for a process improvement. Rushing any change through will only invite backlash and hardship. Getting everyone to see the value in a new process and adopt it can take a lot of time, but that time will be worth it to achieve a better process which helps your department, and others, reach their goals.

"Million plus line web application in classic ASP - no automation, and no consistent testing. Also no consistent anything else. I'm the one who got the team onto a single tracking tool to manage the workflow. And had to "train" them to have me involved in design from the start - by asking the awkward questions when I got the work, and setting off a bunch of rework." - Kate Paulk

Ask Questions

The best way to deal with any ideology mired in the sunken cost fallacy is to ask questions like why a particular process was chosen or if there will be a time when an experiment could be done to help change the process. Keep asking different questions until you can gather all the concerns. Address those concerns before suggesting any process change. If the change itself can address concerns directly, even better. Be careful about disrespecting the current process. Someone put time and effort into creating a process in the first place. Approach that person with your concerns in a respectful manner. Blunt conversations could happen, but try to maintain a respectful tone when they do. 

Demo, Demo, Demo

Never be afraid of demoing something, whether that's a process, a piece of software, or a concept. Showing someone how something works communicates its value pretty effectively. Doing a demo is a lot easier than trying to move an organization to wholesale acceptance of a tool or process before they have seen it in action. It's also a chance to address immediately any concerns management or the development organization has.

Sometimes The Answer Is No

Be prepared to hear no. Even if you've done everything you could within your power to get buy-in for a tool or process, or keep an unproductive change from happening, sometimes you will lose the argument. This happens for various reasons and it might not mean the door is closed forever. It might be that you could revisit the issue in a year when everyone is feeling the same pain points. 

List Of Pain Points

The sunken cost fallacy can creep up anywhere and be a legitimate concern in dealing with process debt. Maintaining a list of pros and cons for any tool/process/concept over a period of time, gathered from emails or surveys, could help sway the decision makers toward a better process, or toward accepting that the tool/process/concept they wanted doesn't work as well as they thought.

All of the strategies and ideas listed above can give you numbers to back your opinions and observations. Don't discount cost either. Business-minded folks will absorb numbers and cost ratios better than opinions. Metrics around time with utilization and maintenance along with cost can be clear motivators for change. Having metrics that specifically address pain points is always a good idea whether those pain points are yours or the decision maker's. 

Debt Happens

Even a good organization can incur process debt along with technical debt. With vigilance and acknowledgement of issues around debt by the development team or the organization, debt can be kept low and maintained with relatively painless results. The point is to keep from getting into a debt situation in the first place.

Sometimes the only way out is to declare bankruptcy on the debt and start over. That's not the most desirable option and it can be more destructive than fixing the process, but it's one option to keep in mind. It has been successful on occasion when a tool or process becomes particularly useless and very few people reference it. 

A QA manager found herself dealing with defects which were three or four years old. Thousands of defects had never been looked at, or even referenced again, after their initial creation. There were so many in that state that cleaning everything up would have been a very long task. Instead, she decided to have the tool's database wiped and started over. This was planned with customer service so that they could save off any information on active tickets and recreate bugs. After the clean slate, a defect-handling process was put into place, and the number of active bugs was tracked, with a threshold limit to keep them from getting out of control.

Whether you are aggressively dealing with your debt or watching it spiral out of control, don't stress out about it too much. At some point, it will be dealt with whether folks in the organization want to or not. As a tester, we can always advocate for better and easier ways of maintaining process and technical debt. Calling it out in standups and meetings is the first step towards improvement. Enacting an improvement rather than asking to do it could work as long as you are prepared for the possible fallout it could have. If that doesn't work, then it's possible your last step might be looking for other opportunities with organizations who have a healthier technical process in place. 



Tweets and Slack snippets published with permission.

About Melissa Eaden

One fateful day in November of 2015, Melissa Eaden attended her first Ministry of Testing, Test Bash in New York. She won a conference ticket by submitting an essay to Richard Bradshaw. She didn't know then that she would later become Rosie's assistant minion, in the cause, which forever grows, taking software quality to whole new levels! By day, a mild mannered dog owner living in Austin, Texas. By night, she happily waits for the next mission of quality excellence for her to accomplish.
