Tips For The Lone Tester: Challenges With People

Tips For The Lone Tester: Article 2 of 4


Once you've found your feet and become reasonably familiar with the applications you test, it's time to start looking at the challenges you’ll face. As the first tester, and likely the only tester, there's probably an abundance of challenges to overcome, and nowhere near enough time to work on them.

Before Starting: Get Organized

One of the many ways of maintaining progress on the issues you face is to create a personal board, list, or to-do collection. It comes in handy and helps you keep your long-term plan balanced with your short-term priorities without losing track of anything. If you can recruit developers to help you, so much the better. This is your long-term plan, so it won't happen quickly, and the more allies you have, the more likely your long-term goals will come to fruition.

Common challenges for a first-time tester include, but are not limited to: priority juggling, cultural and/or communication issues, legacy code, unreasonable deadlines, and inflexible protocols or procedures. Here are a few ideas to handle, or at least start to understand, these particular roadblocks.

Priority Juggling

Defining a list of priorities should be one of the first things a new tester asks for or has defined for them. Having a prioritized list keeps everything from being treated as more urgent than everything else. Use anything that helps you manage your priorities: whiteboards, sticky notes, note apps, or anything else that can keep up with a constantly changing list.

The exact prioritization will vary depending on the way your company works. Some companies have defined priority levels based on what the issue is and who it involves. If your development team doesn't have a known priority definition list, use the following example to help prioritize work, then modify it as the team or business decides some items take priority over others. You can even ask whether something is higher or lower priority using these points as a reference. (A short code sketch after the list shows one way to encode the tiers.)

Priority Example List Based On Risk:

1. Anything which is of critical value to you or your customer and stops business:
   - Network outage
   - Critical workflow defect (e.g. can't collect or charge people money)
   - Data loss

2. Anything which has a looming deadline or could affect critical business function:
   - High-priority defects
   - Release or gold build testing

3. Work which has other dependencies or could cause a slowdown or block for other teammates:
   - Testing/verifying current projects, features, and bug fixes
   - Anything that occurs on a regular cadence (e.g. sprint work)
   - Anything required to run on regular cycles (e.g. fixes to automation or DB refreshes)

4. Anything else which doesn't rank above the other priorities:
   - Process improvements
   - Creating or refactoring automated checks
   - Creating or refactoring automated tools
   - Creating or revising documentation (e.g. test cases, test plans, and reports)
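
If you keep your personal board somewhere scriptable, the tiers above are easy to encode so your to-do list always sorts riskiest-first. Here is a minimal sketch in Python; the tier names and example tasks are my own inventions, not part of any particular tool:

# A rough sketch of the risk-based tiers as a sortable structure.
# Tier names and tasks are invented for illustration only.

PRIORITY_TIERS = {
    "stops_business": 1,        # outages, critical workflow defects, data loss
    "deadline_or_critical": 2,  # high-priority defects, release testing
    "blocks_teammates": 3,      # current features, sprint work, regular cycles
    "everything_else": 4,       # process improvements, automation, documentation
}

def sorted_todo(tasks):
    """Return (tier, description) pairs with the riskiest work first."""
    return sorted(tasks, key=lambda task: PRIORITY_TIERS[task[0]])

todo = [
    ("everything_else", "Refactor the report automation"),
    ("stops_business", "Invoicing crashes on save"),
    ("blocks_teammates", "Verify the fix for Feature B"),
]

for tier, description in sorted_todo(todo):
    print(f"P{PRIORITY_TIERS[tier]}: {description}")

However you store it, the point is the same: when a new request arrives, slot it into a tier rather than letting it jump the whole queue by default.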

Eventually, someone asks how you are determining what issue to look at first. Offer your priority list, or the example above, and explain that you are using it as a guide to help you focus. If they would like to make other things a priority or change how you prioritize your work, be open to that feedback. The end goal should be having your team, and then possibly the whole organization, adopt the same priority list so that different departments can act in concert, not just run around screaming with their hair on fire anytime something happens.

If you have access to the customer service team, ask them how they currently prioritize defects. While customer service prioritization is generally based on the impact of the problem, any heuristics the service team uses can give you a starting point, along with a view of the kinds of problems your users see as most important. If customer service doesn't have a way to prioritize things, offer to help them create a similar list. This introduces the organization to the idea of addressing things in an order driven by real business impact and value, rather than guessing what might be important at any given moment, and demonstrates why that matters.

What We Have Here Is A Lack Of Communication

When you start as the first/solo tester, your team probably has known, clear lines of communication which don't include you. It's also likely that they aren't going to know what kinds of things you need to know.

Your challenge in this situation is two-fold: first, you need to model the kind of communication you need and want in the questions you ask, down to what you're looking for in the answer. After that, you can begin the much slower process of convincing your coworkers to initiate the kind of communications you need.

An example like this goes a long way:

"I'm having trouble working out how ProjectX fits into ApplicationY. I'd really appreciate you pointing me to any documentation that covers ProjectX's role in ApplicationY, especially user requirements/use cases/whatever your company calls them."  

Sometimes, the communication challenges can be complicated by ideas about testers and testing that aren't exactly accurate. I've found the most challenging communication issues can come if the company culture encourages developers to regard testers as either a sanity check that happens just before release or an impediment to their goal of producing wonderful code.

My Developers Think I'm A Nuisance

It can be extremely challenging to work with developers who aren't used to collaborating with testers, or who hold some of the common misconceptions about testers.

There are several problems that can arise in this scenario, and I'm going to focus on three that I've personally found particularly irritating.

We Have A Tester: I Don't Need To Test My Code Anymore

Developers metaphorically throwing their code over the wall between dev and test tends to be a short-lived problem, simply because most developers are mature adults who take responsibility for their work. It's also one of the more easily handled communication challenges that can occur while developers adjust to having a tester to work with.

As a general rule, developers hate getting their work bounced back to them because it has problems. When you send it back with a polite note that you couldn't open the application/feature because it crashed when you tried, you're not likely to have to repeat that cycle with the next feature the developer works on.

Similarly, I've found it doesn't take developers long to realize you're going to check the same things in the same order every time, and to start checking those things themselves so they don't get an issue because something like the enabled field tab order is wrong, or because a field accepts a greater input length than the database field allows and the system crashed. The list of checks goes on and on, because you tend to see these issues first while testing a new feature. Eventually, your list will evolve as developers make sure those things don't break again.

It helps even more if you let your developers know ahead of time what things you will test every time. In my case, the list of "every time" tests includes things like:

Tab order is sensible, consistent with the rest of the application, and starts on the first enabled field on the screen.

Labels are correctly spelled.

Screen layout is consistent with the application and matches company aesthetics.

Any defined keyboard shortcuts work.

Form fields do not accept data the database can't handle (e.g. inputs that are too long for the database field, or non-numeric input that is stored in a numeric format). A sketch of automating this check follows this list.

Save/Cancel/any other navigation or actions work at all the major access levels the application supports.
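
Checks like the field-length one are good early candidates for automation once you have any kind of driver for the UI or API. A rough sketch, with hypothetical field names, limits, and a submit function you would supply from your own harness:

# A sketch of automating the field-length check above. The field names,
# limits, and the submit() function are hypothetical; substitute whatever
# driver (UI or API) your own test harness provides.

FIELD_LIMITS = {
    "customer_name": 50,  # assumed to be VARCHAR(50) in the database
    "postal_code": 10,    # assumed to be VARCHAR(10)
}

def check_field_rejects_overflow(field_name, submit):
    """Submit one character more than the column allows; expect rejection.

    `submit(field, value)` should return True if the form accepted the value.
    """
    limit = FIELD_LIMITS[field_name]
    too_long = "x" * (limit + 1)
    assert not submit(field_name, too_long), (
        f"{field_name} accepted {limit + 1} characters; the database "
        f"column only holds {limit}"
    )

Even a handful of checks like this, run on every build, quietly teaches developers which mistakes will always bounce back to them.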

A little patience and a lot of communication will see the amount of obviously untested code that comes your way fall very quickly.

It's Not In The Use Case: It's Not A Bug

Developers who've been in an environment where they get bad performance reviews if bugs are reported for features they're working on tend to be prone to this communication challenge. If taken to an extreme this can lead to developers claiming that the application crashing on load isn't a bug because the use case doesn't explicitly say the application has to run (I have actually seen this happen).

Usually, when I'm faced with a developer who tends to treat the use case as the source of all truth, I'll message them first, describe the situation, and ask them how they would like it reported. Sometimes I get a "Don't worry, I'll get you a new build", so there's no need to report anything. Other times I'm asked to report as a bug, and sometimes as a new user story.

Unless bug counts are being used as a performance metric (and you've been unable to convince your manager that this is a really bad idea), it shouldn't matter how an issue gets reported or fixed, as long as it does get reported and fixed. Even if you do manage to change the performance metric, what matters is that the issue gets fixed. Working with your developers' quirks about how something should be reported is a relatively easy way to get the issues you find fixed in a reasonable amount of time.

How Dare A Mere Tester Get Technical With My Code

This particular communication problem is rare (I've only encountered it once), but you do occasionally get a developer who is offended by a tester having technical skills and looking into the code to trace problems. From what I've heard, these developers have often spent a lot of time in environments where testers are expected to follow detailed "Click this link. Enter text 'AbCd' into FieldA. Click the 'Xyz' Button" scripts.

When Doing Your Job Is Offensive

At the extreme end of the spectrum, I've dealt with a developer who was so deeply offended by my use of technical tools to find issues with his code that he spent two weeks trying to fix a problem I'd reported (after arguing that it wasn't really a problem) rather than consider the suggestion I'd made, before ultimately having to use my suggestion. That was a time I'd have welcomed having other testers in the building, just for the extra support.

If you're unfortunate enough to find one of these developers, your best course of action is to give her what she wants. Don't do anything technical with her work. Just test it manually to the best of your ability, and write up any issues you find with her work as cleanly and professionally as you can.

Above all, be polite and professional. As long as you are doing your job to the best of your ability, and working with your manager to ensure that there's good communication between you, it doesn't really matter how technical your testing does or doesn't get.

What matters, particularly as the only tester (since there's nobody to back you up if you're ill), is that the application is as good as the team can make it when it's released.

Deadlines And Their Masters

When you start as a company's first and only tester, you're going to quickly discover there are multiple conflicting demands on your time and attention. You're trying to learn the application domain context, start documenting the things that need documenting, figure out what those things are, and, in extreme cases, find a low-effort way to integrate your activities into a process that evolved without you. It's a challenge. It will take time and a great deal of patience. It will take the most ruthless prioritization you have ever done in your life (unless you have small children, in which case you have ruthless prioritization down and can skip to the next section).

You may also be supporting more than one group of developers, or you may be asked to step into one or more other roles. When you're the only person there to test for different groups it can be difficult to work out whose deadline is most important.

To borrow from some of the really old cookbooks, you must first “catch your chicken.”

Ideally, your manager will negotiate with anyone in need of your time to decide where the organization's highest priorities lie. If your manager isn't able to do this, you will need to triage based on your priority list. You might have to explain your decision-making process: if you do Task Z now, Task A will be delayed, and since you understand Task A is a business priority, it has to come before Task Z. In addition, once you've grown familiar with the flow of implicit and explicit priorities, you can start shifting some of the things you need into your floating priority list.

Explicit priorities are the ones that are written down. Implicit priorities are the unwritten ones, like the CEO saying "we need to do X, drop everything else and do X"; these vary from company to company.

I've found that most people are willing to accept compromises like this if they are presented neutrally and a reasonable time frame is agreed on.

Some other techniques I've found helpful include:

Make your priority list visible. If someone requesting your time can see your current priorities and due dates, they are usually more inclined to agree when you can't fit them in right now.

Communicate your status. Someone waiting on a lengthy task will be more patient if they're getting regular updates on your progress.

Communicate your scheduling. If you are working on several projects, make sure each project team knows your schedule for work on their project. This will help to limit the amount of task switching you need to do.

Keep notes on what you're doing. As the first/only tester in the organization you're likely to need to switch tasks more than someone in a testing team (because if there's an emergency fix that needs to be tested now you're the only person in that queue).

Keep a floating list of priorities. It doesn't have to be huge. It just helps to have that visual reminder that, as of right now, your top priority is to test Feature B. Your top priority may change five minutes from now (which is why you take notes, so you can pick up more or less where you left off with Feature B when you're done with the emergency).

Remember the “rule of three.” If you have to research it or experiment with it or ask about it three times, it's something that you should document (also if you mess it up three times).

Aim for the 80/20 rule. Eighty percent of your time and effort should go to the 20 percent of the application that gets the most use. This doesn't need to be exact, but it's a good guideline for where there's less tolerance of bugs.

The Organizational Straitjacket

No matter how wonderful your employer is, you're going to find there are things about it that constrain you. Maybe there's a limited budget, team politics that don't work for you, poor office design, or communication issues between teams. Many aspects of a company won't be visible to you until you join it. Some will be cultural or organizational quirks you'll recognize from one company to the next; others might be so extreme you'll wonder how the business still functions.

The Clashing Of The Culture

No matter how good the organization is, there's a good chance that you, as the tester, are going to find a lot of aspects of the company culture are in conflict with your testing values. Perhaps they've always used a strict waterfall methodology. Or maybe they've always tested by having business analysts go over the software before release.

As the first tester in the company, you need to start by fitting in with whatever the existing culture happens to be, in the way that conflicts least with your testing values. It will take time. You need to prove yourself before you can change anything, and even then it will be a slow process. You want to make friends with your coworkers and show them how you add value to your team.

You can start book clubs, start team lunch-and-learn sessions, share interesting links you've found with your coworkers… the list is endless. Above all, be patient. Real change doesn't happen unless the people who need to change their development process actually want to change it.  

More Challenges

In addition to the challenges that come with helping your coworkers understand what you do, you can face a number of technical challenges. The next instalment of Blazing New Trails covers some of the common challenges with software the first tester in an organization can face.

Where Can I Get Support?

Ministry of Testing - I use the Ministry site as a portal to The Dojo and The Club. The Ministry's Slack channel is also awesome.  

SQA Stack Exchange is a great place for specific questions.

Jing is my favorite free tool for quickly getting screenshots and creating short screen-capture videos.

Working Effectively With Legacy Code by Michael Feathers is an excellent book about legacy code.

An Inappropriate Use of Metrics by Martin Fowler describes the flaws of using bug counts as performance metrics.

Introduction to Testing Notes by the Ministry of Testing

Kate Paulk

Systems Quality Analyst

I like to refer to myself as a chaos magnet, because if software is going to go wrong, it will go wrong for me. I stumble over edge cases without trying, accidentally summon demonic entities, and am a shameless geek girl of the science fiction and fantasy variety, with a strange sense of humor. Testing for more than 15 years has done nothing to make my sense of humor any less strange. I have a twitter account which I mostly ignore, and a Facebook account which I also ignore. If there's anyone who is worse than me at social media, I haven't met them. The same applies to my very intermittently updated blog (which I've been meaning to get back to for... more than 3 years now).


