Experience Report: Introducing Exploratory Testing

So you’ve heard of Exploratory Testing (ET). You’ve heard the PR and thought it sounded interesting. Yeah, that sounds like something I’d like to try.

Maybe you’ve read some of the literature in blogs (lots of people talking about it), websites (for example, the context-driven school), books (James Whittaker came out with one in the autumn), or heard it discussed as the latest buzzword at conferences or on Twitter.

Whichever way you’ve come across ET, how do you get started with it and introduce it into a team or organisation? Well, this is something I did recently – so think of this as a case study or experience report.

Why introduce it now?

There was a need to test/assess a feature in a more complex environment (middle picture below), as there were problems doing it in the full customer-like environment (right-hand picture below). Here was an opportunity to re-use some previous experience of ET and introduce a “structured active learning/test” activity with the emphasis on a quick start-up.

Our goal was to assess the big picture (below, as well as smaller individual configurations) and conclude whether we should do something and how we should do it; the “when” was “as soon as possible, if not sooner”. The “if” we should do something was in the context of the other testing that had already been conducted on the feature.

Time for a new intro? Analysis…

We set up a small team that started analysis on the target feature – what it meant in an isolated node and what it meant in a larger system. This was an opportunity for us to get to know the feature and other aspects of the environment (there were multiple features changing at the same time). Besides the test team, we involved various designers and system managers to understand different aspects of the “problem”.

With regard to the picture below, we were looking at the middle environment – a “small” network where a single network element (NE) was implementing a range of features, and we were looking at interactions between NEs as well as the core use-cases in the NE system under test (SUT). The main reason for introducing the activity was that the target feature interactions were due to be tested in the “fewer simulated interfaces” environment.

After some sessions involving different types of questions, investigations and follow-up sessions, we came to the conclusion that there were some “interesting” cases/scenarios to look at. These were areas of the functionality and system configuration where we saw that we could add some value – we identified some “risk areas”, areas where we thought, “well, we don’t know if this will work when we plug in this piece of hardware”.

The whole analysis phase looked at different aspects of the problem – understanding the end-user requirements, the simplifications we were making in the environment (e.g. which aspects to consider when simulating parts of the network) and what it might cost to test in the environment we wanted.

The analysis ended with a “go decision” – we had an OK cost for the simulation effort and we’d justified the need for the testing by identifying some “risky” areas that we thought would give us some benefit to explore.

ET in action

When we started the activity I didn’t announce “this is going to be an ET activity” or “we’re going to do some ad-hoc testing”. I framed the work in terms everyone was familiar with: we started with an analysis activity, evaluating some useful/valuable use cases and using those as a starting point to investigate/test in an existing lab.

My task was to keep the simulation development activity in sync with the test execution and other lab scheduling, discussing approaches to testing different scenarios and ideas for further investigation.

Another aspect of my role was more that of a “facilitator” – guiding the group, maybe setting some of the starting points and being there as a sounding board for ideas, but it was the team that was making the interesting discoveries about the product – the unexpected feature interactions which turned out to be bugs! I was there to trigger some of the initial curiosity and enthusiasm about exploring the product, but it was ultimately the team that did the exploring.

Time to stop

So, how did we decide when to stop? We had sketched some key use cases that we wanted to test (know more about). After we started the first test sessions we were able to re-prioritise those scenarios, change some, add more and begin to add some test design details. But it was this prioritised list that was our main driver – we also had a certain time window in which to operate.
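
To make that concrete, here is a minimal sketch of how a prioritised scenario list plus a fixed time window can act as the stopping rule. The scenario names and the scheduling loop are my illustrative assumptions, not the actual tooling from this activity:

    # Hypothetical sketch: a priority-driven, time-boxed session backlog.
    # Scenario names and the run_session placeholder are illustrative only.
    import heapq
    import time

    class Scenario:
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority  # lower number = higher priority

        def __lt__(self, other):
            return self.priority < other.priority

    def run_session(scenario):
        # Placeholder for an exploratory test session; findings get logged here.
        print(f"Exploring: {scenario.name}")

    def explore(backlog, time_window_s):
        # Work the prioritised backlog until it is empty or the window closes.
        heapq.heapify(backlog)
        deadline = time.monotonic() + time_window_s
        while backlog and time.monotonic() < deadline:
            run_session(heapq.heappop(backlog))
            # After each session we may re-prioritise, drop scenarios,
            # or push freshly discovered ones onto the heap.

    explore([Scenario("NE-to-NE interaction under load", 1),
             Scenario("core use-case with simulated interface", 2)],
            time_window_s=60)

Either condition ends the exercise – the backlog runs dry or the window closes – which mirrors how the prioritised list and the time window drove our own stopping decision.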

Lessons?

Frame the problem. Get to know the problem. This is a great opportunity to brainstorm (one of my favoured starting points).

Involve a wide scope of people to capture lots of different aspects – understand the problem/feature from the design perspective as well as the end-user/stakeholder perspective.

Try to understand/estimate the costs of the environment/configuration you want to work in: what simplifications/simulations are being made – does this make some test scenarios unrealistic or not worthwhile? Is the cost of the simulation going to delay the project?

Process/Structure

Don’t go overboard on the “process” or structure of the work. Start gently, learn and adapt the “process” to your own needs – this may be more about adapting your existing work processes so that they are more ET-like. Take steps as big or as small as you feel comfortable with.

Set out your own objectives and follow up on “how” you meet them (you may decide that something is not working and it’s time to re-think how to look at the “problem”).

Be sensitive to the team dynamics. When you’re covering new ground, relate what you’re doing to something familiar – this has worked for me in the past.

In the beginning I didn’t have all the details about how I wanted to run the ET trial. That’s OK – it’s more about being able to adapt as you go. I started the team from the analysis viewpoint. We decided there was a “business case” to look at some scenarios – and the team focus became “how do we learn about these use-cases and test them” rather than “how do we follow our existing/known processes to do this work”.

Change is good?

Not everyone was comfortable with a “less-structured” approach. But it wasn’t really less structured – it was just structured in a different form. We had regular follow-up meetings and we documented our progress and issues in a common/centralised way. We had people available/working on different days, so it was important to have a central DB of issues, questions and results, and we supplemented this with email updates to each other (keeping everyone in the loop).
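
The article doesn’t describe the actual database we used, but a minimal sketch of the kind of shared record that keeps a part-time, distributed team in sync might look like this (all names here are illustrative assumptions):

    # Hypothetical sketch of a shared ET log entry; the actual DB used in
    # this activity is not described here, so this is illustrative only.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class LogEntry:
        author: str
        kind: str            # "issue", "question" or "result"
        summary: str
        scenario: str        # which use-case/charter the entry relates to
        logged_on: date = field(default_factory=date.today)
        follow_up: str = ""  # next step, or who should pick it up

    shared_log = [
        LogEntry("tester-a", "issue",
                 "unexpected interaction between two features",
                 scenario="NE-to-NE interaction"),
        LogEntry("tester-b", "question",
                 "is the simulated interface realistic for this case?",
                 scenario="core use-case with simulated interface"),
    ]
    # Anyone working a later day can scan the log instead of waiting
    # for a hand-over.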

It’s infectious!

Once you start a team looking at a problem from a new angle it doesn’t take long for the ideas to become self-generating. Our testers have a wide range of freedom in following up “hunches”, but this really took off in this exercise. The team would change tack quite dynamically during a test session. We evolved a set of target tests that we wanted to perform, but these were continually added to and modified.

Learning is good!

We learnt about areas of the product/function that hadn’t been seen in some previous test phases – partly due to using different environments, tools and configurations (these weren’t available to the other phases). We found some key interaction bugs that would not have been so pleasant for a customer.

There is also a key learning here – you have to want to learn and explore! If you want to try ET just for the sake of it then it’s probably not going to work.

Summary of things to think about

  • High-level use-cases are the starting point
  • Understand your scope/environment/configuration and how that relates to the main use-cases
  • Don’t put more effort in the up-front test design than is needed to get started
  • Experience in ad-hoc/ET is good, but getting started and learning from mistakes is just as good
  • Document what you do – the whole activity is a feedback loop – you may discover some use-cases are no longer so important for the scope you’re working in
  • As Nike say, “just do it”.

Would I recommend it and do it again?

Absolutely.

I think it’s good for organisations to try out new ideas and to put their own spin on them – to make the activities “theirs”. The activity is intellectually challenging – you need to have some good objectives before starting and be able to motivate the need for it. It is also challenging during execution to switch course, follow up on new leads or try a new scenario or “what-if” case.

So, yes, give it a go…

This article was by Simon Morley. He blogs at: http://testers-headache.blogspot.com/
