Stop Tuning Out and Start Tuning In

A Software Tester’s Guide To Refocusing On The Details

It’s easy to find bugs when they’re staring you in the face: The feature you’re trying to test is missing, you land on an error page, or the data you’re trying to delete just won’t go away. When your expectations are explicit and defined, it’s straightforward to decide when something’s wrong and choose to change it.

It’s digging into the minutiae, the details, the things that everyone else is ignoring that makes the job interesting and you, as the tester, valuable. The act of noticing what others have missed takes an open, patient mind. Careful evidence gathering and focusing your attention at different levels of the application will reveal not just whether the feature is done, but how your users experience it.

Active Listening

Tools: eyeballs, notebook, concentration

I was working at a radio station that played classical music. We had playlist information for the pieces available online. The information was posted after the music aired to prevent people from recording a particular performance. In between pieces, there might be a bit of the host speaking, an advertisement, or nothing at all when one piece flowed directly into the next.

My mission: I had to figure out if the timestamps on the playlist were accurate enough for listeners to find what they’d heard. To test this, I listened to the radio…for the exact opposite of what I usually listen for: everything but the music. I had to tune out the music and focus on the transitions, noting the time when someone started speaking, stopped speaking, or the next piece started.

This seemed like an easy enough task. So I tried to multi-task. There was plenty to test on other projects or read online while the music was playing. But I’d listened to the radio in the regular way for so many years, paying attention to the music and ignoring the rest, that I found doing the opposite completely untenable. I could only keep track of the timings if I did nothing else. As boring and useless as most of the time felt for me, I stared at my screen as it slipped into the screen saver and just listened.

I kept track of playlist information at different times of day, for a couple of hours a day, for a few days that week. This was enough data for the team and me to decide that timestamps accurate down to the minute were enough to find the piece you were looking for. In most cases, the timestamp on the website matched what I heard. But my active listening uncovered a particular case where we got it wrong: when a host spoke for more than one minute in between pieces. We realized we needed to pull the playlist data from the output log from the studio, rather than from the predicted schedule. Without this meticulous data collection, we wouldn’t have been able to pinpoint this particular issue, and our playlists would have grown increasingly inaccurate as a host’s shift spilled over into multiple hours.
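If I’d had my notes and the published playlist in a machine-readable form, a small script could have flagged the same mismatches. Here’s a purely illustrative sketch in Python; the one-minute threshold mirrors the decision above, but the data structure and timestamps are invented, not from the actual project.

```python
from datetime import datetime

def off_by_more_than_a_minute(heard: str, published: str) -> bool:
    """True when the published timestamp misses what I heard by more than 60 seconds."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(published, fmt) - datetime.strptime(heard, fmt)
    return abs(delta.total_seconds()) > 60

# (heard on air, published on the website) -- hypothetical sample values
observations = [
    ("14:03:10", "14:03:40"),
    ("15:12:45", "15:15:00"),  # host spoke for a while between pieces
]
for heard, published in observations:
    if off_by_more_than_a_minute(heard, published):
        print(f"Inaccurate: heard at {heard}, published as {published}")
```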

User Experience

Tools: Chrome Network tab, LICECap, Mac Preview

At a different company, we were towards the end of a several-month upgrade of a legacy product. We were worried about performance. (Why this took several months, what made the product legacy, and why we were worried about performance are a whole ‘nother story.) I was tasked with finding which pages and actions might be slow. 

For months, I'd been focusing on the feature once it loaded and seeing if the functionality misbehaved. For this test, I had to do the opposite: I had to be impatient. I started looking at the high-value, often-used features first, and found one that was slow. I can’t divulge the page I was actually testing, so we’ll look at what I did next using hackdesign.org instead. I highly recommend following their course to build your intuition about user experience design.

I needed to see the part where it loaded, how much time a spinner took to appear, and what redirects were happening. My eyeballs and notebook would not be enough for this one, so I enlisted some tools to help: the Chrome Network tab, a GIF maker called LICECap, and Mac Preview. 

With the Network tab open, I could do a few things: 

I could see what API calls were being made. I looked at the bottom of the “Name” column to see how many calls there were overall, and sorted it to discover if we were retrieving things from the server that I expected to be cached. I clicked the “Preserve log” checkbox before I started so I could see what happened even after I went to another page.

I could see which calls were redirects. The “Status” column had numbers in the 300 range for redirects. I love httpstatuses.com for explaining more precisely what each one means. Redirects might indicate something that could be optimized on our end.

I could tell how much time each of those network calls took. The “Time” column allowed me to sort by milliseconds to find the call that took the longest. 

I could change what kind of network connection I was simulating. Rather than the full power of our office network, I switched the “Throttling” option to “Slow 3G” to see what someone trying to access the site from a phone or tablet might experience. This also made finding the slow stuff much easier in the first place. (A scripted sketch of these checks follows below.)


What I used in the Chrome Network tab for this particular test.
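The same checks can be scripted. Here’s a minimal sketch using Playwright for Python (my tooling choice for illustration, not part of the original test setup): it throttles the connection to something close to Chrome’s Slow 3G preset via the DevTools Protocol, records every network call, and prints the slowest ones with their status codes so redirects stand out. hackdesign.org stands in for the real page again.

```python
from playwright.sync_api import sync_playwright

calls = []

def record(request):
    # "requestfinished" fires once the response body is complete; timing values are in ms.
    response = request.response()
    timing = request.timing
    duration = round(timing["responseEnd"]) if timing["responseEnd"] >= 0 else -1
    calls.append((duration, response.status if response else 0, request.url))

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Throttle to roughly Chrome's "Slow 3G" preset via the DevTools Protocol.
    cdp = page.context.new_cdp_session(page)
    cdp.send("Network.enable")
    cdp.send("Network.emulateNetworkConditions", {
        "offline": False,
        "latency": 400,                         # added round-trip time in ms
        "downloadThroughput": 400 * 1024 // 8,  # ~400 kbit/s, in bytes per second
        "uploadThroughput": 400 * 1024 // 8,
    })

    page.on("requestfinished", record)
    page.goto("https://hackdesign.org/", wait_until="networkidle")
    browser.close()

# Like sorting the "Time" column: slowest calls first, 3xx redirects flagged.
for duration, status, url in sorted(calls, reverse=True)[:10]:
    flag = "redirect" if 300 <= status < 400 else ""
    print(f"{duration:>7} ms  {status}  {flag:<8} {url}")
```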

I discovered logging in and logging out took about 20 seconds on the test environment over Slow 3G, significantly less than the 60 seconds the same operations took over the same network in production. That still seemed pretty slow to me, and the slowest calls were for the message that appeared on the logged-in homepage. The team agreed that this behavior was bad, but given how unlikely the scenario was, it wasn’t a priority for the release.

I was able to do all of these things with the Network tab open after the loading was complete. Recording a GIF with LICECap allowed me to see a few more things.


A GIF made with LICECap and the Network tab open over a Slow 3G simulated network.

[Sidebar: I’ve also used the built-in Mac option of recording a QuickTime .mov. I find the GIF is a smaller file that uploads and downloads faster, is viewable by Windows users in a browser, and auto-plays in both JIRA and Slack.]

I could see which of the redirects that appeared in the Network tab also appeared in the browser bar. For cases when watching the GIF at regular speed was too fast, I opened the GIF in Mac Preview so I could look at each frame individually.

I could also see what I’d been conditioned to tune out: the loading state. The moment between clicking something and the next page finishing rendering took a variety of forms: no feedback for the user, white screens, and loading spinners. Capturing how long each of those lasted was crucial for deciding whether users were having the experience we intended for them.
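As a complement to stepping through frames in Preview, the per-frame durations stored in the GIF can be read programmatically. Here’s a minimal sketch with Pillow, assuming the recording is saved as login.gif (a hypothetical file name, not from the original test).

```python
from PIL import Image, ImageSequence

# Walk through the LICECap recording frame by frame and report when each
# frame appears and how long it stays on screen, so you can put a number
# on "no feedback", "white screen", and "spinner" states.
elapsed_ms = 0
with Image.open("login.gif") as gif:
    for index, frame in enumerate(ImageSequence.Iterator(gif)):
        duration = frame.info.get("duration", 0)  # milliseconds this frame is shown
        print(f"frame {index:>3}  at {elapsed_ms / 1000:6.2f} s  shown for {duration} ms")
        elapsed_ms += duration
```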

I watched the LICECap GIF a few times and found some unexpected URLs in the browser bar during the operation on our site. I tested a bit more around the feature in different places, found a bug in the behavior, and asked around a few different teams before I got the bug reported to the right people. 

Configuration & Integrations

Tools: notes, teammates, communication

We’re preparing for a big demo. There’s an authentication issue. Our test environment is pointing to the authentication system for production. For weeks, I go to the test environment, put in my test credentials, remember that those don’t work, and work around it by going to a different URL to log in. This is annoying. It trips me up every time. But it works. I can get to our web application.

People on the team know about the work-around too. They struggle, they get confused, and then they ask or remember. All is well.

Then people outside the team need access to the application. People I rarely see. People I never talk to. People whose troubleshooting habits I don’t know and whose concerns get filtered through a layer of management before they get to me. They try their test credentials on the test environment, and of course they fail. But this time, it’s seen as a problem.

As the tester on the team, I’m expected to put up with this issue that makes it harder for me to verify our product is working. It requires extra steps from my automated scripts and from my fingers. The questions I ask, like “Why is it like this?”, “Can’t we fix this?”, or “Do we expose ourselves to GDPR concerns by using people’s production accounts?”, aren’t important. We’re in the middle of building the product. Nobody wants to set up the few constants the configuration needs to set this right.

For the suits upstairs, it’s a blocker. They try to log in, and the credentials they use for all our other applications on the test environment suddenly don’t work. They don’t ask “Can we fix this?”; they declare: “This needs to be fixed.”

I was focused on our product, but not on the users of our product. Even internal users matter. Configuration is code, and it should be source-controlled the same way you source-control your code. The day after the big demo, I wrote a story about pointing the test environment to the test authentication system and moved it to the top of the backlog.
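A minimal sketch of what source-controlled configuration could look like, with hypothetical environment names and endpoints rather than anything from the real system:

```python
import os

# The authentication endpoint lives in configuration that is committed
# alongside the code, so the test environment can't silently point at
# the production authentication system.
AUTH_ENDPOINTS = {
    "test": "https://auth.test.example.com",   # invented URLs for illustration
    "production": "https://auth.example.com",
}

def auth_endpoint() -> str:
    """Pick the authentication system for the environment we're deployed to."""
    environment = os.getenv("APP_ENV", "test")
    try:
        return AUTH_ENDPOINTS[environment]
    except KeyError:
        raise ValueError(f"Unknown environment: {environment!r}") from None
```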

For the playlists at the radio station, active listening allowed me to identify a violation of user expectations: our timestamps were listed down to the minute, but some timestamps were inaccurate. 

For the exploratory performance testing, recording the login experience uncovered an issue with the product’s image: the page was taking too long to load. 

For the authentication configuration, sharing our test environment outside our team showed how our product didn’t behave like comparable products at our company: a user’s test environment credentials work for other products on the test environment, but not ours.

It can be hard to focus when you’re testing, particularly when life has trained you to tune this stuff out. Keeping good records, whether with pen and paper or with fancier software tools, will allow you to tune into the pieces you need and discover which heuristics reveal unexpected behavior.

Elizabeth Zagroba

Quality Lead

Elizabeth is Quality Lead at Mendix in Rotterdam. She reviews and contributes code to a Python test automation repository for 15+ teams building Mendix apps across three units. She builds exploratory testing skills by asking pointed questions throughout the unit, facilitating workshops, and coordinating an ensemble (mob) testing practice. She injects what she learns from conferences, books, and meetups into her daily work, and spreads her knowledge through the company-wide Agile guild she facilitates. She's presented at conferences throughout North America and Europe, and co-organizes the Friends of Good Software conference (FroGS conf http://frogsconf.nl/). She coaches people to success when possible, but isn't afraid to direct when necessary. She's the go-to person for things like supporting new presenters, reviewing documentation, navigating tricky organizational questions, and thinking critically about what we're building. Her goal is to guide enough testers, leaders, etc. to make herself redundant so she can take on new and bigger challenges. You can find Elizabeth's big thoughts on her blog (https://elizabethzagroba.com/) and little thoughts on Twitter @ezagroba.


