Adding Value Outside Automation
By Steven Burton
"In a team of developers who write quality automation, how, as a tester, can you add value?"
This is a question I now ask in every interview I do with testers. Many of the testers I see have good automation skills and work on a team where this is the biggest area of value they add. However, when everyone is not only capable of writing the automation but does a very good job of it too, you need to be able to add value in other ways.
Fortunately, there are plenty of ways to do this!
This is a situation I experienced myself recently. I'll explain what it's like to be a tester working in one of these teams and many of the ways you can add value.
I'll also be providing some links for further reading where you can delve deeper into some of the techniques.
A Team With No Testers
Recently I changed jobs, and the team I have joined is the most highly skilled team I've ever been on when it comes to testing. This was a surprise to me because there are no actual testers on the team! So why is this? What does the team do that makes them so strong?
- Everyone is a tester: The team all take responsibility for the testing and for the quality of the product
- Automation: All features are developed with automation checks at all levels by every team member
- Pipelines: There are pipelines for every environment which incorporate the automated checks (see Michael Bolton's blog for an explanation of why I use the word checks here)
I am technically quite strong, but not as strong as engineers who have been developing their whole careers. So given all the above, my automation skills alone were not going to add much value to the team, and I had to look to add value in other ways. Let's examine some of those ways.
Process
One of the main areas where value can be added is in the processes the team use. Whether the team is using Scrum, Kanban, or any other methodology, there are always improvements that can be made. I'm going to give a few important examples in detail, but there are lots more places a tester can look to make process improvements as well. So always be on the lookout for where you can add value in this area.
Test First
This is something I passionately advocate for in any team: creating the tests before creating the production code. This can take many forms though. For example:
- Test Driven Development (TDD) is a strict coding method: write a failing unit test, write just enough code to make it pass, and then repeat with the next test.
- Behavioural Driven Development (BDD) involves creating behavioural scenarios that describe desired functionality and then satisfying these with the code. These scenarios often take the form of automated checks but they don't have to.
- Acceptance Test Driven Development (ATDD) takes TDD and applies behavioural language to the tests, but without the wider scope of BDD.
A lot of teams will not be using the above methods and a tester can look to add a lot of value to a team by helping them to adopt one of the above processes or a similar one.
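As a minimal sketch of one TDD cycle described above (the `add_vat` function and its VAT rate are illustrative assumptions, not from the article), the test is written first and only then satisfied:

```python
# A minimal sketch of one TDD cycle. The test below is "written first" and
# would fail until the production code underneath it exists.

def test_add_vat_applies_standard_rate():
    # Step 1: express the desired behaviour as a check before the code exists
    assert add_vat(10.00) == 12.00

def add_vat(price, rate=0.2):
    # Step 2: write just enough production code to make the test pass
    return round(price * (1 + rate), 2)

test_add_vat_applies_standard_rate()  # Step 3: green; repeat with the next test
print("test passed")
```

The point is the rhythm, not the example: each new behaviour starts life as a failing check.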
Pairing
Pairing is one of the most powerful things a team can do and has so many benefits including:
- Knowledge sharing: pairing doubles the number of people who understand the feature that's being worked on
- Code quality: multiple people means that code can be reviewed as it goes
- Feature quality: multiple eyes means it's more likely that issues in the code will be spotted
- Test first: pairing with a tester (or two developers) will enable test first to happen more easily
Despite there being so many benefits, many teams do not pair. Generally, this is because pairing doesn't come naturally to a lot of people, and team members may prefer working alone.
As a tester, you can look to encourage pairing in your team. You could consider some of the following ideas to help:
- Practice what you preach: when you take a ticket you can suggest pairing and when others take tickets suggest you can pair with them
- Pairing board: sometimes gamifying things can encourage people to take part so you could suggest a way of tracking who has paired with a prize at the end of a set amount of time!
- Retrospectives: you could try raising pairing as a practice that you want to try for the team. Suggest trying it for a sprint and seeing how it goes.
Definitions
A lot of teams have issues when it comes to deciding how and when to start and stop user stories, handling retrospective outcomes, and dealing with code reviews, among other things.
I've long been a proponent of using various definitions to help with these decisions. The most well-known one is the Definition of Done for a user story, but there can be many more! These are some of the most useful:
- Definition of Ready (user story): Teams often have issues deciding when a story is ready to be taken into a sprint. This definition helps define that by stating what is needed for any team member to be happy to pick up the story.
- Definition of Done (retrospective): if you have trouble deciding what to do with retro outcomes then why not decide it as a team and write it down? That's what this definition is for.
- Definition of Start (user story): This definition is most useful when the team has multiple work streams, or even multiple backlogs, as it can lay out the rules for how a team member picks and starts the next piece of work.
Any artifact the team has can have a definition of start, ready, or done. As a tester, you can add a lot of value to the team by influencing them to adopt various definitions. Obviously, you don't want to go overboard and lose the team but a few definitions here and there, focused on the main pain points for the team, are a definite way of adding value.
Test Strategy
A team may be very good individually at creating the automation when they create features, but often that is with a narrow scope. Where a tester can add value is in thinking about the bigger picture and how the testing of a single feature fits into the strategy for the overall product. Ask the following big-picture questions:
- What will be tested? Will you base this decision on risk, on usage or something else?
- How will it be tested? Although the team may be creating automated checks, what else will happen? For instance, will there be exploratory testing taking place?
- How are the environments used? How many environments are there and what is the usage for each one?
- Is the automation at the right level? Although the team may be writing automated checks, are they basing them as close to the code as possible, do they know the boundaries between the tests at different levels, and are they running them regularly?
Backlog Refinement
Testers tend to be very good at staying close to the customer, and they often understand the customer's requirements, and the requirements of the product, better than other members of the team. A tester can use this knowledge to add value by helping ensure the backlog is refined and in the best shape possible, as this is something that is often neglected on a team.
Some common areas that can use improvement are:
- Prioritisation: Is the backlog prioritised at both the sprint level (if working in sprints) and the main backlog? If not, you could ask the PO/BA to set up some refinement sessions where you can define and prioritise the full backlog or you can suggest a quick check during or after every stand up to ensure the top of the sprint backlog is prioritised.
- Refinement: Are all the stories refined to a level that any member of the team can pick them up? Do they have all the risks and acceptance criteria detailed in the tickets? A definition of ready for a story can help a lot here by defining the level of detail you and the team expect from a story before you work on it.
- Behavioural: Are the user stories based on behavioural aspects of the product and real-life scenarios or are they based on implementation aspects instead? Suggesting that the stories are written in a behavioural language like “Given/When/Then” or “As a/I want/So that” will help to force the stories away from implementation and towards behaviour.
- Detailed: Are the user stories detailed enough to work on but concise enough to be easily understood? A lack of detail is usually a sign of a lack of refinement sessions (https://www.scruminc.com/product-backlog-refinement/), so suggest to the PO/BA that weekly refinement sessions take place and that you attend them. If the stories are too detailed, try to pull out the key information in the ticket and make this the Acceptance Criteria. Ensuring you have these before any ticket is taken into a sprint will help the team understand what is required from the ticket.
- Three amigos: Does the team regularly hold three amigos sessions for each ticket and are they working as desired? You can suggest a three amigos step on your Kanban board to ensure every ticket must go through a three amigos.
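One way behavioural story language carries through to the code is a check structured in the same "Given/When/Then" terms. A hedged sketch, where `basket_total`, the item names, and the prices (in pence) are all illustrative assumptions:

```python
# A behaviourally named check mirroring a "Given/When/Then" user story:
# "As a shopper, I want to see my basket total, so that I know what I'll pay."

def basket_total(items):
    """Hypothetical production code: sum the price of each basket item."""
    return sum(price for _, price in items)

def test_shopper_sees_total_for_their_basket():
    # Given a basket containing two items
    basket = [("apple", 50), ("bread", 120)]
    # When the shopper views the basket total
    total = basket_total(basket)
    # Then the total is the sum of the item prices
    assert total == 170

test_shopper_sees_total_for_their_basket()
print("scenario passed")
```

Keeping the test name and structure in the story's language helps steer both the ticket and the check away from implementation detail.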
Performance & The 'ilities'
Non-functional testing is another area that many teams neglect and could do more of. Consequently, it's an area where a tester can add a lot of value.
If you haven't heard the term "ilities" before, it refers to the many non-functional requirements ending in "-ility", such as:
- Maintainability: How easy it is to change and update the system
- Portability: How easy it is to lift the product and move it between environments
- Stability: How much downtime the system experiences
- Scalability: How the system copes with increases in load
These are some examples of non-functional requirements but there are many more. A tester can look at the product they are working on and examine how good the product is at the "ilities" by performing different types of non-functional testing.
Security Testing
Security testing is a huge area within software development, especially with the growth of web applications. However, security testing is often either criminally under-utilised or outsourced to an external company.
Security testing is a very specialised area so hiring a third party to perform the complicated tests on a system makes sense, but there are a number of security areas where value can be added by the team themselves. Take a look at the fantastic OWASP site where they have lots of details on security testing and how to get started.
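One example of a security check a team can run themselves, without a third party, is verifying that database lookups use parameterised queries so crafted input cannot inject SQL (a theme covered in depth on the OWASP site). A small sketch, where the schema and `find_user` function are illustrative assumptions:

```python
# Verify a lookup binds its input instead of interpolating it into SQL,
# so a classic injection payload is treated as a literal string.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # Parameterised query: the input is bound, never concatenated into the SQL
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# The payload returns no rows instead of dumping the whole table
assert find_user("' OR '1'='1") == []
assert find_user("alice") == [("alice",)]
print("injection payload neutralised")
```

Checks like this are cheap to automate in the pipeline, leaving the deep, specialised testing to the experts.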
Load & Performance Testing
Load and performance testing often take place together but they measure different characteristics of the system under test.
Load testing is the process of putting the system under a heavy load and seeing how it performs. It's especially important in systems that will have lots of spikes in traffic. The type of load you put the system under depends on the purpose of the system and its profile within the live environment. For instance, if the system is a media storage system it's likely to have a high amount of uploads and downloads, whereas if the system is a website shop then it's more likely to have a high amount of unique users. You need to find the metrics that are relevant to your system and use these to increase the system load.
In order to perform load testing, you need a baseline of how much load your system is expected to receive in live. To get this baseline, it's important to ensure that good monitoring is built into every new feature. This is something to consider when you are performing refinement on features: you may need to bring your Ops department into the refinement if they are the ones that perform the monitoring.
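The mechanics can be sketched very simply: fire a burst of concurrent requests and record latencies. This is a toy illustration rather than a real load-testing tool, and `handle_request` is a stand-in assumption (in practice it would call your service):

```python
# A toy load-test sketch: 200 "requests" through 50 concurrent workers,
# recording how long each one takes, then reporting a p95 latency.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work, e.g. an HTTP call to the system
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(handle_request, range(200)))

latencies.sort()
p95 = latencies[int(len(latencies) * 0.95)]
print(f"requests: {len(latencies)}, p95 latency: {p95:.3f}s")
```

Real tools add ramp-up profiles, spikes, and realistic traffic mixes, but the baseline-then-increase idea is the same.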
Performance testing is testing how fast and responsive the system is in different scenarios. It's another area which is often forgotten or left until the end of the product cycle. One place a tester can add a lot of value is by trying to move the team towards performance testing as early as possible. For instance:
- Perform performance tests on individual components
- Consider the performance of the system during 3 Amigos
- Ensure that your performance tests are part of the pipeline
- Measure performance using monitoring and make these metrics visible to all
- Ensure that the components themselves have a high level of testability by exposing their metrics and ensuring they have triggers for any scheduled actions which automation can use
These are a few of the practices I've used before to help teams start performance testing as early as possible.
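The first two list items above, performance tests on individual components wired into the pipeline, can be as simple as a timed check with a budget that fails the build. A hedged sketch, where `sort_orders` and its 0.5-second budget are illustrative assumptions:

```python
# A component-level performance check suitable for a pipeline: time one
# function and fail the build if it exceeds an agreed budget.

import random
import time

def sort_orders(orders):
    """Stand-in for the component under test."""
    return sorted(orders)

def test_sort_orders_within_budget():
    orders = [random.random() for _ in range(100_000)]
    start = time.perf_counter()
    sort_orders(orders)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"sort_orders took {elapsed:.3f}s, budget is 0.5s"

test_sort_orders_within_budget()
print("performance budget met")
```

A crude budget like this won't replace proper load testing, but it catches regressions at the component level long before the end of the product cycle.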
Adding Long Term Value
There are so many areas where a tester can look to add value to a team.
The ability to write automated scripts is a great tool for a tester to have, but it is only one tool within a large toolkit.
If you are new to testing, look to focus on multiple areas, such as the ones I've described here: test first approaches, pairing, performance, security, and backlog/process refinement. Don't think that you cannot add value to a team just because you have never done any automation.
If you are an experienced automation expert, consider the areas above and think about your skills and knowledge in these areas as it may give you additional ways you can add value.
Steven is an experienced tester with over a decade's experience in the industry. After studying Computer Science at University, he landed a testing job for a medical company and has been learning and loving testing ever since.
Steven is a big advocate of CD pipelines, Agile (and Scrum in particular) and a team first attitude and has performed a variety of roles within and leading teams across many projects.
Steven has a focus on quality no matter what role he performs and enjoys participating in and learning from the wider testing and software development community both at home in Leeds and elsewhere.