Community Stories: Who Is Your Customer? A RiskStorming Story
By Hannah Pretswell
I recently introduced my team (and myself!) to RiskStorming, a game that supports you in collaboratively identifying how to test risks that impact important quality aspects of your product. I had never (regretfully!) been to a RiskStorming workshop, but I had heard lots of great things about this team-based activity. With some interesting and complicated features lined up for us, I figured it would be a good time to experiment and try this newfangled way of creating test plans and discovering risks.
For our first RiskStorming session, we chose a simple refactoring task. Our task was to convert a fragmented model, which was populated with different data depending on the user's setup, to a common model which was clear and explicit in all setups. Often with a refactoring task, the temptation is to view it as “no functional change” and just ensure that the behaviour still works as it did before. And honestly, we were all feeling pretty sceptical about how well RiskStorming this particular task would actually work. Would there actually be any risks to storm?
The Value of a Refactor Task
We all found picking the quality aspects for this task quite difficult. It was the first time we had approached fully fleshing out a test plan for a refactor task, and since it was also our first time using RiskStorming, we were all a little lost as to where to start.
We ended up choosing only five quality aspects instead of the six recommended on the board: Changeability, Stability, Testability, Data, and Functionality. Changeability was the main quality aspect that we focused on, since changing anything in this area was very high risk. This functionality was often forgotten about during development since the majority of customers didn’t utilise it. However, the highest-value customers used it a lot, and in investment banking, the highest-value customers are the ones you really, really do not want to annoy. The other four quality aspects followed on from changeability: stability and testability were picked as we often had issues with tests failing or missing, and with issues cropping up in production. We chose data because of the risks involved in setting up the *correct* data, and functionality because, despite it being incredibly important functionality for high-stakes customers, nothing was well documented or understood. We had to do a lot of work to uncover the different business flows.
Capturing the Risks
Through RiskStorming, and with a focus on changeability, we started to drill down into the risks there. This refactor task was to make our lives easier; we had been doing development around this area and uncovering lots of issues, and we were also expected to make many changes to that part of the code in the coming months. After spending ten or so minutes capturing risks, we realised that the majority of them were around development: the risk that, in the future, developers wouldn't understand how to implement the correct functionality when making changes, would miss vital items, or would use the wrong functions. And considering this was an area often forgotten about during the regression testing cycle after the release was cut, or missed during three amigos across all teams, we realised that we needed to tailor this change to the current and future developers who would be working with this code.
One of the driving factors during this RiskStorming session was the fifth Modern Testing principle: “We believe that the customer is the only one capable to judge and evaluate the quality of our product.” There had been a Ministry of Testing Newcastle meetup earlier that week, so this principle was fresh in the minds of both me and the other tester on the team. We began to question...
So who is our customer in the case of a refactoring task?
It wouldn’t be the user. The user should see no difference. What we realised was that the most value is to the developers. Obviously refactoring is for the developers: to make the code cleaner, and easier to use and understand. But as a team, we delved deeper into this to see if there was anything we could uncover.
How can we be SURE that the developers are getting the best quality code with our changes?
Luckily, I was sat in a room with three of the people who would get the most use out of this change: the developers. The most important aspect they pointed out was that not many people even knew that there were different user setups. This was not well documented and very easy to overlook in the code - but also very easy to overlook when planning and testing work. Not just developers fell into the trap; everyone did. What this meant was that we needed to make sure that the customers (the developers!) had the least possible risk of missing a user setup when implementing changes, and of misusing a rarely used setup.
How to Mitigate Risks
As a team, we came up with three things to help mitigate these risks, all of them pretty obvious: good naming, explanatory comments, and an ESLint rule for the rarely used setups.
Good Naming: Naming things is the bane of every developer's life. In this case, the current naming of the functions was incredibly complicated and convoluted - it was not clear what behaviour you would be implementing, and the functions were often misused because of this. The naming of the functions would be discussed and agreed upon with the business, since it impacts all developers on the project.
Sensible Comments: Currently there are no comments in the code to make it clear what data will be populated, so adding comments will help make it clear what each function is doing.
ESLint Warning: A custom ESLint rule will be implemented on the part of the model which is used very infrequently - but which, if misused, could have disastrous consequences. This rule will have to be manually disabled, so it should prompt conversations if triggered by someone who is unfamiliar with the code.
Coming out of this exercise, the team as a whole had a much better understanding of how the changes were going to affect not just the end-users, but the customers (developers, in this case). RiskStorming gave us a platform for discussion around areas that we perhaps wouldn’t otherwise have discussed as a team: data setup, exploring business flows, and the impact of changes on development as well as on end-users. It was an incredibly difficult exercise to do, being both our first time and a refactor task. We didn’t find the heuristic, technique, or pattern cards to be particularly useful whilst discussing the risks, due to the team already being familiar with the product (and the area) and how it worked.
After we implemented the changes, using the actions above, we didn’t notice any immediate drop in the number of bugs found - after all, we didn’t fix any of the existing issues (though one or two did get fixed by proxy). What we did notice, however, particularly amongst our team, was the ease of making changes in that area. Developers were happier and more confident, fixing bugs became easier, and adding new features became less bug-addled (though one of our changes did cause a gigantic break in another team's work - caused by conflicting requirements given to each team, not the refactor, thankfully).
Though I can’t say for sure that we wouldn’t have come up with the same solution without RiskStorming, I can say that it was a useful exercise. We got to explore options together as a team, rather than as developers and testers separately, and the cards helped us consider points of view that we might not have otherwise. We have used RiskStorming since, but the team's conclusion after a few more sessions was that we would only use it for brand new features (either new to the product, or new to us as a team); when it came to refactoring tasks, we’d use the cards as prompts with only one tester and the developer expected to work on the task. I do wish that we could have had our Product Owner involved in the RiskStorming, and if you have that opportunity, I recommend you take it.
All in all, RiskStorming helped us to redefine who the customer is. It’s not just the end-user; it can be the developer too.
“We believe that the customer is the only one capable to judge and evaluate the quality of our product.” — Modern Testing Principles
Hannah is a Software Tester with a passion for trying out new tools and techniques on teams she works with. When she isn’t testing she is reading about testing and DevOps (and sci-fi and fantasy books), partaking in aerial fitness classes, and playing Guild Wars 2.