What does TDD mean for me, the Tester?
By Duncan Nisbet
I have some experience of working as a Tester in development teams practising TDD, and I hadn’t really thought about the impact TDD had on me until someone actually asked me:
Test Driven Development (TDD) is a predominantly Programmer-driven practice, so what do you as a Tester think about it?
What a great question!
My immediate answer was that TDD has enabled me to gain an understanding of the software and its associated tests, so that I can add more value to design decisions, highlight business risks and flag potential impacts on the GUI / Acceptance tests.
This article tries to explain in more detail how I work to provide value in the TDD environment.
Who are the different players in this game?
What do I think TDD is?
I thought I had best start with what I believe TDD is. I’m certainly not the world’s authority on TDD, so I will keep my description brief. The idea behind TDD is that you write a failing test, write only the product code necessary to make the test pass, and then refactor the code to be cleaner, more efficient or to adhere to some coding standard. This is commonly known as the TDD cycle (red, green, refactor).
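As a minimal sketch of that cycle, assuming a hypothetical `discounted_price` function (the function and its test are invented for illustration, not from any real project):

```python
import unittest

# Red: this test is written first. On the very first run, before
# discounted_price exists, it fails (with a NameError, in fact).
class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discounted_price(200, 10), 180)

# Green: write only enough production code to make the test pass.
def discounted_price(price, percent):
    return price - price * percent / 100

# Refactor: with the test green, the code can be cleaned up safely --
# here, simplifying the expression -- and the test is re-run to
# confirm the behaviour is unchanged.
def discounted_price(price, percent):
    return price * (1 - percent / 100)

if __name__ == "__main__":
    unittest.main()
```

The point is the rhythm, not the arithmetic: every change to the production code is bracketed by a run of the tests.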
TDD is primarily about the design of the code; the resulting tests are a nice side effect that we Testers can use to our advantage, so long as we don’t succumb to their illusion of 100% safety and confidence.
The TDD cycle provides Programmers with the fast feedback they require in order to make informed decisions about the design of the code. Vastly reduced feedback loops are also a core principle of Continuous Integration (CI) and Continuous Delivery (CD). Testers can and should be making use of this fast feedback loop – it helps prevent code from “being thrown over the wall” and helps to enable CI and CD.
The TDD cycle is perceived as beginning with the creation of a failing test. I feel this is misleading as my observations have led me to believe that adding the test isn’t the first step in the cycle; first, we need to think about what the failing test should be.
How can Testers be involved in TDD if they are not writing code?
Predominantly, Programmers are concerned about the code design and Testers are concerned about the behaviour of the software. But the lines between the two concerns are very blurred – Testers can and should think about design, Programmers can and should think about behaviour.
Testers are striving for testability in the design of the code. According to James Bach, the two primary areas Testers should be considering with regard to testability are controllability and observability: can we put the system into the state we require, and can we then observe how the system is behaving? How easy is it to get the application into the different states required for the different tests? How will we know what the application is doing at a given point in time – will logging be implemented? What will that logging look like?
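To make those two properties concrete, here is a small sketch (the `Order` class and its states are invented): a constructor parameter gives controllability, while logging each transition gives observability.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

class Order:
    # Controllability: tests can start the order in any state they need,
    # rather than clicking through the whole journey to reach it.
    def __init__(self, state="NEW"):
        self.state = state

    def pay(self):
        # Observability: each transition is logged, so a Tester can see
        # what the application was doing at a given point in time.
        log.info("order transition: %s -> PAID", self.state)
        self.state = "PAID"

order = Order(state="CONFIRMED")   # jump straight to the state under test
order.pay()
print(order.state)                 # PAID
```

Neither seam costs the Programmer much if it is asked for while the design is still in flux; both are expensive to retrofit later.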
When talking to Programmers about the software, I like to use sequence diagrams (adopted from UML) to map out the requests and responses between the different parts of the system. This enables us both to understand the flows through the system and its different states, which in turn should help us work out what tests are required where.
Here is an example of a typical web application:
[Sequence diagram of a typical web application. Image courtesy of blog.quent.in, created using www.websequencediagrams.com]
My knowledge of the system and the Programmers’ in-depth knowledge at the component level complement each other, helping to prevent unnecessary duplication of tests – or, better still, preventing invalid tests. We both understand the test strategy.
Here is a contrived example of a collaborative test strategy conversation between Paul (Programmer), Terry (Tester) and Carl (Customer):
Paul: “Hey Carl, Terry, have you got a sec?”
Paul: “Great, pull up a chair. Our system makes requests to other systems outside of our application. Currently when those external, 3rd party systems are unresponsive or unavailable, we display error messages on the UI for each error. From a technical point of view this isn’t great – the UI shouldn’t be aware of these different errors.
“Carl, are you happy for a generic error that indicates a 3rd party service is unavailable to be displayed on the UI?”
Carl: (after some deliberation) “Yep, I’d be happy with the generic error message.”
Terry: “OK. We have a range of tests that check those error messages in the UI – are those tests still going to be valid?”
Paul: “No. We will catch the error in the backend service and throw the generic error message.”
Terry: “OK. What can I assert on in the UI then?”
Paul: “Do you really need to assert in the UI? We will be making an assertion on the error in our backend service.”
Terry: “So the assertion is there that you sent the error, but we do not know that the error has been displayed on the UI. I’d like to keep a test at the UI level to check that the error message is being displayed and that the user has a follow-on journey from the error message.”
Paul: “Good call on the follow-on journey – I hadn’t thought about that. Carl, would you like the follow-on journey to be the same as it is currently?”
Carl: “If possible, yes please.”
Paul: “OK. It will take a bit more work but we should be able to do that. We will need to update the logging so we can see the new generic error being thrown – we can let you know what the changes are shortly.”
Terry: “Great, so I should be able to refactor one of the error scenarios to account for the generic error and remove the now-surplus error scenarios. How can I consistently trigger the error scenario for this test?”
Paul: “The process of triggering the error should be the same as it is now; we’re just handling the errors differently. The existing mock of the 3rd party service should still be valid, but we’ll be checking it when we’re writing our unit tests, so we’ll let you know. Everyone happy?”
Paul: “Excellent! Let’s catch up in a couple of hours to see where we’re up to.”
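The strategy the three of them agree on might look something like this sketch (all names here – `ThirdPartyError`, `fetch_quote`, the message text – are invented for illustration): the backend catches any third-party failure and maps it to one generic message, and a mock stands in for the unavailable 3rd party service so the error scenario can be triggered consistently, just as Terry asked.

```python
from unittest import mock

GENERIC_MESSAGE = "Sorry, a service we rely on is currently unavailable."

class ThirdPartyError(Exception):
    """Raised when an external, 3rd party system is down or unresponsive."""

def call_third_party(service):
    # Stand-in for a real network call to an external system.
    raise NotImplementedError("would hit the network in real code")

def fetch_quote(call=call_third_party):
    try:
        return {"quote": call("quotes-api")}
    except ThirdPartyError:
        # The specific error stays in the backend (and its logs);
        # the UI only ever receives the one generic message.
        return {"error": GENERIC_MESSAGE}

# Terry's trigger: a mock plays the unavailable 3rd party service.
unavailable = mock.Mock(side_effect=ThirdPartyError("quotes-api timed out"))
response = fetch_quote(call=unavailable)
print(response["error"])   # Sorry, a service we rely on is currently unavailable.
```

Terry’s UI test then only has to assert that this single message is displayed and that the follow-on journey works; the per-service error detail is asserted in the backend’s unit tests.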
The TDD Cycle in action
Let’s walk through the TDD cycle to demonstrate to you how I focus my testing.
[TDD cycle diagram. Image adapted from Uxebu.com]
Write a failing test
Programmers begin their TDD cycle writing code and listening to the feedback from the unit tests.
I start to think more deeply about my test ideas and the different edge cases I’d like to explore. I may start to write some scenarios that could later be executed through automation.
As the design evolves, more questions and problems will crop up which need answering.
For example, Programmers may come to me with a specific scenario I’ll need to cover higher up the stack as they tried to cover the scenario with a unit test, but the test proved inadequate.
Likewise, my edge cases may require design changes or tweaks.
Write enough code to make the test pass
The Programmers let me know when they have some code and tests to commit. We walk through the code and associated tests (predominantly “unit-integration” tests), discuss the acceptance criteria coverage, and then I’m let loose to explore the software on the Programmers’ machines.
I try out a few of those edge cases I was considering, along with some mutation testing to check the validity of the tests. There is also the opportunity to show the software to the Customer to check we are meeting expectations.
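Mutation testing in miniature might look like this sketch (the function names are invented): deliberately break the production code and check that some test notices. If every test still passes, the tests were not really checking anything.

```python
def is_adult(age):
    return age >= 18        # original production code

def is_adult_mutant(age):
    return age > 18         # mutant: ">=" deliberately changed to ">"

def weak_suite(fn):
    # Only checks a value far from the boundary...
    return fn(30) is True

def strong_suite(fn):
    # ...while this suite also checks the boundary itself.
    return fn(30) is True and fn(18) is True

# The weak suite passes for both versions: the mutant "survives",
# revealing that the tests never exercised the boundary.
print(weak_suite(is_adult), weak_suite(is_adult_mutant))      # True True
# The strong suite fails for the mutant: the mutant is "killed".
print(strong_suite(is_adult), strong_suite(is_adult_mutant))  # True False
```

In practice a mutation testing tool generates mutants like this automatically, but doing a few by hand during a pairing session is a quick way to probe how trustworthy the unit tests really are.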
This feedback cycle of failing test, code demo then commit is typically hours rather than days.
Depending on the progress of the software, I might demonstrate my testing of the system to the Programmers, as well as asking for help with automating some of my test scenarios.
Admittedly, I can never know for sure if the tests were written before the production code (in true TDD style), but ultimately I have trust in the Programmers and there is, after all of the steps above, tested code.
Refactor the code
The Programmers improve the quality of the software. I continue my exploring.
From pairing with the Programmers, I know what tests already exist, how the parts interact and any potential pain points the Programmer had whilst writing the code and tests.
I take the information gleaned from the pairing session to complement and build upon my test charters for the exploratory session.
In the refactor phase we may also review the testability and test coverage to ensure it is still efficient and effective.
For example, the Programmers may think that we can prove a requirement with a unit test rather than a UI scenario. We talk about the intent of the test as well as the pros and cons of pushing the test down the stack. What happens to the test is a group decision, not a unilateral decision.
The information about the design and behaviour of the software is shared both ways. Nothing is hidden and raising bugs becomes a far more lightweight exercise.
Closing the loop
I think of the development team’s close collaboration during the TDD cycle like stitches in clothing: the stitches are the interactions and the articles of clothing are the relationships. Closer stitches make for a stronger bond; with fewer stitches, the clothing is likely to fall apart sooner.
When it comes to deciding whether a feature is “done”, the development team and Business stakeholders have feedback from numerous sources, not just the Testers.
The whole idea behind Testers “signing off” the code or being the gatekeepers to quality disappears. The whole team knows the state and behaviour of the software so all have an opinion on whether it is worthy of being released to the Production environment.
Here is the same image of the TDD cycle as displayed above, but I have added some examples of the types of activities Testers can get involved with throughout the cycle:
[Image: Tester-enhanced TDD cycle]
What if your team doesn’t practice TDD?
The Programmers may not call their development practice TDD, nor write their tests first, but hopefully they are having discussions about the design of their code, and you as a Tester can certainly get involved here.
Just because a development team are not using tests to drive out the design of the code does not mean they are not thinking about design. The tests are a nice byproduct.
If you and your Programmers are not creating a testing framework around the software to make it “self-testing”, then close collaboration with the Programmers upfront will help with defect prevention.
Here are a few hints and tips that have helped me work closely with Programmers and get more involved with the design of the software.
Context Driven Testing
The situation drives the approach to testing.
I left a waterfall / staged delivery organisation to join an XP development team practising TDD. My previous approaches to testing were not applicable.
I needed to inspect and adapt in order to provide valuable testing in the new organisation. The Context-Driven Testing community really helped me to grow and understand what kind of Tester and team player I was.
Awareness of the Programmers craft
Programmers are generally proud of their work. Take some time to learn a bit about what the Programmers do, why they do it and how they got to where they are today.
You’ll be surprised how this interest soon gets reciprocated.
Be a domain expert
Own the domain you are testing in!
Be aware of who is going to be using your software and what problems they may have that this software is trying to solve. The closer we can represent our users, the better our testing can be.
A great book that helped me in domain- and design-related conversations was Eric Evans’ “Domain-Driven Design”. A key point driven home throughout the book is the use of ubiquitous language. A lot of ambiguity lives where business language is translated into “dev speak”. Understanding your domain, applying critical thinking and promoting ubiquitous language can help to drive out this ambiguity.
Demonstrate your value
My love for testing and close collaboration with other development team members didn’t just happen overnight.
It has taken some hard work and a fair number of cakes, but through critical thinking, passion and proving myself on numerous projects, I can now demonstrate the value I can offer in a Programmer-centric environment.
If I can do it, so can you.
You are a Tester in a software development team. You have an opinion on the design of your software; you just might not know it yet…
Duncan Nisbet is a tester who believes development teams and the wider business can be smarter about working together. He coaches Testers, Programmers and Business folk on how they can help each other communicate and collaborate in order to deliver software which will actually help to solve the problem. His efforts are focused both on everyday development and on the team workshops he facilitates. Follow him on his blog or via Twitter @DuncNisbet.
Edit: After reading this article, Richard Bradshaw (@friendlytester) tweeted a nice & concise response:
“Outcome: TDD isn't testing, but provides some great artefacts to support Testing. I also draw a picture.” pic.twitter.com/knxsdjRdh0 — Richard Bradshaw (@FriendlyTester), August 19, 2014