How To Do Good Regression Testing
By Mark Winteringham
The Software Testing Clinic is a safe environment for those who are interested in software testing to learn and enhance their testing skills. It also enables more experienced testers to learn and enhance their mentoring skills. At Software Testing Clinic events, we get asked a lot of good questions, not all of which we can answer in detail. This series explores some of the more common questions attendees ask.
The question of ‘How do I do good regression testing?’ came up in the Software Testing Clinic section of The Club, and it’s a common question that testers struggle with. There is a trade-off at the heart of regression testing: too little and you risk bugs slipping through to production; too much and you and your team become swamped. Finding the right balance of test coverage is an important part of regression testing.
To find the right balance, you’ll have to learn how to manage other key factors around your testing, such as time, deadlines, and risk. Addressing those factors can give you a better approach to regression testing and help manage the workload.
Let’s break down some concepts around regression testing, and try to understand the why, the when, and the how of a regression testing approach.
As with all testing activities, regression testing is about discovering information about risks that threaten our product. The risk we are specifically focused on when we execute regression testing is the risk of the quality of our product regressing or becoming worse. (Michael Bolton talks about this at some length in his excellent webinar on regression testing.)
Depending on the design, size, and skill of your team, a regression has the potential to rear its head in any part of the product, any time new code is introduced. Developing a plan and carrying out regular regression testing can reveal whether anything has changed in known areas of the application or software, and whether those changes have impacted the product’s quality.
As our product grows and becomes more complex, the coverage of our regression testing increases. This can be problematic if we’re not careful: the suite can become so large that running it to a satisfactory level has a negative impact on delivery times. So how do we avoid this but still keep on top of potential regression issues?
How & When To Do Regression Testing
Software Development Lifecycles Impact Regression Testing
Regression testing typically consists of executing, in some form, a suite of tests or checks by a human, a machine, or both during a regression testing session or phase. How you create your suite of tests depends on how you test as a team.
Regression Testing Approach 1
If you’re working on a test team that favours test cases over exploratory testing, it is likely that you store your scripted tests in a tool, and that the suite continuously grows as you create more test scripts for new features.
The explicitness of test scripts makes it easy to pick out which ones to run during a regression testing session, but they require constant review and maintenance to ensure they remain relevant and reflect the current behaviour of your product. This means removing test scripts that no longer match the product and updating others so they stay relevant.
Regression Testing Approach 2
If your team favours exploratory testing, then you may have a stored list of charters, test sessions and notes that you refer to during regression testing.
An exploratory testing session guided by previous session notes gives testers enough direction while still freeing them to catch bugs that test scripts might miss. However, this approach requires more skill, and a deeper understanding of what has been tested so far, to determine which session to execute.
Team Structure Impacts Regression Testing
The structure of your team and how they develop software impacts when you carry out regression testing.
Working With A Waterfall Approach
If you are working in a phased waterfall approach, it’s likely you will be given distinct releases to test. As each of these releases comes through, you will have new features to test and older features to regression test. Because there is typically little communication between developers and testers in a waterfall approach, testers are compelled to run full regression suites because they don’t know what has and hasn’t changed since the previous release.
Working With An Agile Approach
If your team takes an iterative agile approach, it will be looking to get something released and in front of users as soon as possible in order to get feedback and adapt. This means smaller, more regular releases or changes to run regression testing against. For example, if the team practises continuous integration, then regression testing will be carried out each time a developer pushes new code and a new build is created. The collaborative nature of an agile team means testers should be better informed about what has changed and able to determine what regression testing to carry out. However, this requires the whole team to take responsibility for regression testing, not only the testers.
Working As A Team To Deal With Regressions
The whole team should take ownership and work together to create an effective regression testing strategy. How we structure our project (waterfall or agile), how we design our system, and how often developers push code to an environment where everyone’s code is integrated can all affect regression testing. It’s also important to look at a team’s maturity, culture, and development approach, and assess how these might impact your regression testing approach.
A good regression testing strategy encourages targeting your testing towards areas of the product that are more at risk of changing than others, instead of exhaustively testing every part of the product again and again. When you release a new version of a well-designed product, your team can assess which areas of the product are more at risk of change than others, as some parts may have been untouched by the new code. Karen Johnson has created a heuristic, in the form of the mnemonic RCRCRC, to help teams analyse candidates for regression testing. It stands for:
Recent: new features, new areas of code are more vulnerable.
Core: essential functions must continue to work.
Risk: some areas of an application pose more risk.
Configuration sensitive: code that’s dependent on environment settings can be vulnerable.
Repaired: bug fixes can introduce new issues.
Chronic: some areas in an application may be perpetually sensitive to breaking.
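As a rough illustration of how a team might apply the mnemonic (this sketch is not part of Karen Johnson’s heuristic itself, and all names and weights here are hypothetical), each candidate area can be flagged against the RCRCRC factors and ranked so the most regression-prone areas are tested first:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: rank candidate areas for regression testing
# by how many RCRCRC factors apply to them. Flag names, example
# areas, and the equal weighting are all illustrative assumptions.

RCRCRC_FACTORS = {"recent", "core", "risk", "config_sensitive", "repaired", "chronic"}

@dataclass
class Candidate:
    name: str
    flags: set = field(default_factory=set)  # which RCRCRC factors apply

    def score(self) -> int:
        # One point per applicable factor; a real team might weight
        # "core" or "repaired" more heavily than the others.
        return len(self.flags & RCRCRC_FACTORS)

def prioritise(candidates):
    """Order candidate areas so the most regression-prone come first."""
    return sorted(candidates, key=lambda c: c.score(), reverse=True)

# Hypothetical product areas.
checkout = Candidate("checkout", {"recent", "core", "repaired"})
reporting = Candidate("reporting", {"config_sensitive"})
search = Candidate("search", {"chronic", "risk"})

for candidate in prioritise([checkout, reporting, search]):
    print(candidate.name, candidate.score())
```

A simple model like this won’t replace team discussion, but it can make the reasoning behind “test this area first” visible to the whole team.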
Ways To Help Targeted Regression Testing
To ensure that targeting our regression testing (instead of exhaustively running all our tests) works successfully, there are small things that you and your team can do to help:
Release small and often - If regressions come from changes, then the more changes you make in one release, the more risks you have to test for, which means more regression testing. Encourage your developers to release code to your test environments often, or to deploy sections of an uncompleted feature, such as the backend or frontend of the system.
Model the system - The more you understand about your product, the easier it is to determine what might be affected by changes. Dan Ashby shared this novel way of modelling the system that works effectively:
"You get the whole team to build objects with LEGO blocks that represent functions, features or modules of the software and then connect the models using string, where the string represents any connection or integration between the features/modules/components/etc…
The idea being that if you know one area is going to implement code changes, then if you move that physical object, you can see the layers of integration and regression risks, with direct connections followed by secondary and third connections from those other objects.
If you don’t have LEGO blocks, you can do the same with story cards, where you write the feature/component/etc. on the card and use normal strings of equal length to connect the cards. Cards can actually work better too, as you can pick them up and physically see the layers, but cards are less fun than LEGO toys."
Get involved in code reviews - If your developers carry out code reviews before pushing their code to a shared repository, then try to get involved in those reviews. You don’t need to know how to code or be able to read the code; listening and asking questions about what might be affected will help target your testing.
Pairing - Alternatively, you could pair with your developers as they are programming and take notes. This is an excellent way to learn about changes to help you target your regression testing. To get started with pairing, Lisa Crispin has written a comprehensive article on the how-tos of pairing.
Automation And Regression Testing
Whilst a targeted approach towards regression testing is a practical way of dealing with regression risks, it doesn’t guarantee success. Sometimes bugs can still fall through the gaps. That is why some teams choose to adopt automated tools to assist in their regression testing. If automated tools are used correctly then the whole team can benefit:
Fast-to-run - Automated tools can do certain activities faster than humans can, so choosing the right problems to solve with an automated tool can increase the speed at which those activities are carried out. Anything that is very repetitive and easy to automate should be among the first things considered when implementing an automated tool or framework.
Parallel activities - Running automated tools on machines other than the team’s own systems means they can run whilst other activities are carried out. This doesn’t necessarily mean your team can fit more regression testing into the same duration, though, so plan accordingly and factor automation into your regression testing approach.
Targeted feedback - You can use automated tools to work with parts of the system that might be trickier to get to when testing normally. This allows automated tools to potentially give feedback on very specific parts of the product which will help you target additional regression testing with greater accuracy.
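As a minimal sketch of what such an automated regression check might look like (the product code and function names here are hypothetical, standing in for a real system), each check encodes an expectation about known behaviour and re-verifies it on every run:

```python
# Hypothetical sketch of a tiny automated regression suite.
# calculate_discount stands in for real product code under test.

def calculate_discount(basket_total: float) -> float:
    """Product code under test: 10% off orders of 100 or more."""
    return basket_total * 0.9 if basket_total >= 100 else basket_total

def test_discount_applied_at_threshold():
    # A past bug fix around this boundary would make it a prime
    # regression candidate (the "Repaired" factor in RCRCRC).
    assert calculate_discount(100) == 90.0

def test_no_discount_below_threshold():
    assert calculate_discount(99.0) == 99.0

if __name__ == "__main__":
    test_discount_applied_at_threshold()
    test_no_discount_below_threshold()
    print("regression checks passed")
```

Because checks like these run identically every time, they are well suited to the repetitive, machine-friendly end of regression testing described above.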
This approach can be very useful, but again up-front thinking and careful planning is required otherwise you can run into issues. Some teams become overly reliant on automated tools and this can cause issues such as:
Incorrect assumptions - It’s dangerous to assume that automated tools can cover every detail and give feedback on every potential risk to a product. Automated tools are designed to give feedback on very specific things, so you will still need to carry out other testing activities. Ignoring a balanced regression testing approach can lead to overconfidence in the quality of your product. Richard Bradshaw has an informative video that discusses this in more detail.
Maintenance overhead - There is a cost in learning, creating, and maintaining automated tools. Too much reliance on automated tools, mixed with bad practices when using them, can lead to a maintenance headache. You could easily spend as much time fixing automated tools as you would spend running regression testing yourself!
Too much trust in automated reports - Automated tools can only tell you how the current system differs from the expectations you put into the tool. It is up to you and the team to review these results, and to use other testing techniques around a suspect area of the software to determine whether there is a regression in quality or not.
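To illustrate why reviewing automated results matters, here is a hypothetical check (all names invented for this sketch) that stays green even though quality has regressed in a way the check was never told to look for:

```python
# Hypothetical: the check only encodes one expectation (the returned
# total), so it passes even though a regression slipped in elsewhere.

def checkout(items):
    """Product code: returns (total, receipt_lines)."""
    total = sum(price for _, price in items)
    # Regression: receipt lines were accidentally dropped in a recent
    # change, but the automated check below never asserts on them.
    receipt_lines = []
    return total, receipt_lines

def check_checkout_total():
    total, _ = checkout([("tea", 2.5), ("cake", 3.0)])
    assert total == 5.5  # the only expectation the tool was given

check_checkout_total()
print("check passed - but did anyone review the receipt?")
```

A green report here tells you the total still matches; only a human reviewing the results (or testing around the area) would notice the missing receipt.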
Careful Consideration Is Key
Regression testing is something that requires careful consideration. Up-front planning between you and the rest of your team can give clarity and visibility to your regression testing approach. Ask questions about the product and project, keep searching for ways to improve yourself and your team, and you will be successful. A team that collaborates and communicates well about regression risks can make regression testing smoother. It doesn’t need to be a drain on your resources, but rather another awesome tool in your testing toolkit.
Mark Winteringham is a tester, coach, mentor, teacher, and international speaker, presenting workshops and talks on technical testing techniques. He has worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial, and public sector, working with various web, mobile, and desktop technologies. Mark is an expert in technical testing and test automation, and a passionate advocate of risk-based automation and automation in testing practices, which he regularly blogs about at mwtestconsultancy.co.uk. He is also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. Mark also has a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with Mark on Twitter: @2bittester