Breaking the autopilot: How I stopped testing software like a machine

Shift your mindset to catch better bugs, collaborate more effectively with developers, and rediscover enjoyment in your testing work


The wake-up call 

I have been testing software for three years now. I have tested many types of applications, following standard industry practices, running test cases, and checking every box. 

But sometimes I wasn't thinking while I was doing it. I was just going through the motions, running tests like a machine. And that is how I ended up missing a critical bug that blocked both new and existing users from accessing the platform. The bug slipped into production, and as the situation escalated, I realized that people were not happy with the testing process and the strategies we were following. Within a couple of days, every team member's inbox was flooded with complaints from customers, and my manager gave me that look. We all know that look. Not angry. Just... disappointed. 

I spent the next few days replaying everything in my head, wondering where I had gone wrong. I had tested all the scenarios in the correct way and followed all the steps. And yet I had completely overlooked how a real user might interact with the system. 

That was when I realized: I was testing software like a machine. But software is built for humans. 

The problem lies with "checklist testing" 

Following structured test cases is important, but relying only on them and doing nothing else creates blind spots. Here is what I realized: 

  • When you follow the same steps repeatedly, your brain starts tuning out. 
  • Real users do not follow test scripts. They make mistakes, take shortcuts, and do unexpected things.
  • The more rushed and distracted I was, the less I noticed small, strange behaviors from the application under test. 

I needed to stop running on autopilot. 

The shift: testing like a human

After that production bug led me to reexamine my behavior, I changed how I approached testing. I started focusing less on "Did I execute every step?" and more on "Do I understand what I am testing?" 

Here are three things that completely changed the way I work: 

1. The "what if" approach

Instead of relying on the assumption that the system would behave as expected, I started asking: 

  • What if someone copies and pastes an entire novel into this input field? 
  • What if a user switches between two accounts really fast? 
  • What if the connection drops right after the user selects “submit”? 

These small thought experiments led me to uncover bugs that scripted test cases would have missed. Below you’ll see a flowchart that represents my new thought process.

Flowchart: a mindful, exploratory testing approach, from "Start testing" through what-if questions to "Refine test strategy."

The what-if flowchart above maps out a typical exploratory testing journey. It starts with “Start testing” and then moves to “Ask what-if questions.” This is where the actual exploration begins. From there, it branches into five real-world scenarios: 

  • What if the input is too long? This leads us to test with very long strings.
  • What if the user switches accounts quickly? This leads us to simulate rapid switching.
  • What if the network disconnects? This leads us to simulate a network failure.
  • What if the user submits invalid data? This leads us to validate error handling.
  • What if multiple users access the system at once? This leads us to check for concurrency issues.

All of these branches flow into the next stage, “Observe and log issues.” Here, we closely watch what happens and capture any unexpected behaviour. 

From there, we move to “Report bugs and improve tests.” And the flow ends with “Refine test strategy,” where we use what we have learned to improve future testing.
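
To make this concrete, here is a minimal sketch of the first branch, “What if the input is too long?”, written as a pytest-style check. The endpoint, field name, and expected status codes are hypothetical; adapt them to whatever application you are actually testing.

```python
# A minimal sketch of the "what if the input is too long?" branch.
# The endpoint and field name below are hypothetical, not from a real system.
import requests

BASE_URL = "https://example.test"          # hypothetical application under test
HUGE_TEXT = "lorem ipsum " * 100_000       # roughly a novel's worth of pasted text


def test_oversized_comment_is_rejected_gracefully():
    response = requests.post(
        f"{BASE_URL}/api/comments",
        json={"body": HUGE_TEXT},
        timeout=30,
    )
    # We expect a clear validation error, not a 500 or a hung request.
    assert response.status_code in (400, 413), (
        f"Expected a validation error, got {response.status_code}"
    )
```

The point is not the assertion itself. It is forcing the system to answer a question a scripted test case would never have asked.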

2. Explaining bugs out loud, even if it’s only to my coffee mug

I started explaining issues out loud, not to a human teammate, but to my desk plant (who, let's be real, might be fake). It sounds ridiculous, but it actually helped. 

  • Saying things out loud helped me clarify my thoughts. 
  • I noticed gaps in my own understanding of the requirements and my testing approach. 
  • My bug reports became much clearer because I had already practiced explaining the problem.

3. Recognizing when my brain is stressed

I used to think that the longer I tested in one sitting, the better my testing would be. So I kept going even when I was exhausted. But I was completely wrong. 

  • Now, when I feel my brain needs a break, I stop. 
  • A five-minute reset helps me refocus. 
  • Walking away from my screen makes me see things differently when I return. 
  • The weirdest bugs often show up when I am looking with fresh eyes. 

Going beyond the basics: Advanced techniques for better testing 

If you have been testing for a while, you probably know that breaking out of a checklist-driven approach is just the beginning. Here are a few deeper techniques I now use to level up my testing: 

1. Risk-based testing 

Instead of trying to test everything, I now prioritize based on impact and likelihood. I ask myself: 

  • Which areas of the application are most critical for users? 
  • Which changes in this release are most likely to introduce bugs? 
  • If something fails, what is the worst thing that can happen? 

This approach helps me focus my time on high-risk areas instead of spending hours on low-impact tests. 
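
As a rough illustration of how I weigh those questions, here is a minimal sketch that scores each area by impact times likelihood and sorts the riskiest areas to the top. The areas and scores are made up for the example; in practice they come from conversations with developers, product owners, and support.

```python
# A minimal sketch of risk-based prioritisation: score each area by
# impact x likelihood and test the highest-scoring areas first.
# The areas and scores below are illustrative only.
test_areas = [
    {"name": "login", "impact": 5, "likelihood": 4},
    {"name": "checkout", "impact": 5, "likelihood": 3},
    {"name": "profile settings", "impact": 2, "likelihood": 2},
    {"name": "help page", "impact": 1, "likelihood": 1},
]

for area in test_areas:
    area["risk"] = area["impact"] * area["likelihood"]

# Highest-risk areas first: this is where the limited testing time goes.
for area in sorted(test_areas, key=lambda a: a["risk"], reverse=True):
    print(f'{area["name"]}: risk score {area["risk"]}')
```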

2. Session-based exploratory testing 

Instead of randomly exploring the application, I now use time-boxed exploratory testing sessions with clear charters. I set a goal like: 

  • Explore how the system behaves when multiple users perform actions at the same time.
  • Test all possible failure scenarios for every critical module that might affect a user journey. 

This approach makes my exploratory testing more structured while keeping it flexible. 
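
If it helps to see the idea in code, here is a minimal sketch of a time-boxed session with a charter. The charter text, duration, and note format are just examples of how I keep a session honest about its time box.

```python
# A minimal sketch of a time-boxed exploratory testing session.
# The charter and duration are examples, not a prescribed format.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class ExploratorySession:
    charter: str
    duration_minutes: int = 60
    notes: list[str] = field(default_factory=list)

    def __post_init__(self):
        self.ends_at = datetime.now() + timedelta(minutes=self.duration_minutes)

    def log(self, observation: str) -> None:
        # Warn when the time box has expired, but still capture the note.
        if datetime.now() > self.ends_at:
            print("Session time box expired -- stop and debrief.")
        self.notes.append(f"{datetime.now():%H:%M} {observation}")


session = ExploratorySession(
    charter="Explore how the system behaves when multiple users act at once",
    duration_minutes=45,
)
session.log("Two concurrent edits on the same record: last write silently wins")
```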

3. Using AI and automation to assist, not replace, thinking 

I used to fear that automation would make testers obsolete. Now, I see it differently. I let automation handle the repetitive, predictable tasks so I can focus on the unpredictable. 

  • I use automation to run regression tests quickly. 
  • I use AI-powered tools to analyze logs and detect anomalies. 
  • I use scripts to generate test data so I can spend more time exploring. 

Instead of replacing me, these tools free up my time to do what humans do best: ask “why” and “what if?” 
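
For example, here is a minimal sketch of the kind of test data script I mean: it writes a batch of throwaway users to a CSV file so that exploration time is not spent inventing accounts by hand. The field names and values are illustrative, not tied to any real system.

```python
# A minimal sketch of a test-data generation script.
# Field names and values are illustrative only.
import csv
import random
import string


def random_email() -> str:
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{name}@example.test"


def generate_users(count: int) -> list[dict]:
    return [
        {
            "email": random_email(),
            "display_name": f"user_{i}",
            "locale": random.choice(["en-GB", "de-DE", "ja-JP"]),
        }
        for i in range(count)
    ]


# Write 50 throwaway users ready to import into a test environment.
with open("test_users.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["email", "display_name", "locale"])
    writer.writeheader()
    writer.writerows(generate_users(50))
```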

4. Testing post-deployment 

I now test the product post-deployment. I take a "grandfathering" approach, where I create test cases before deployment and then complete the user journey post-deployment to make sure existing features still work as expected.

Here is why this helps:

  • Some issues appear only in production due to real-world conditions.
  • It ensures that legacy features are not silently broken by new changes.
  • It gives me a full end-to-end picture of how users experience the system.

Instead of just assuming that “everything works because tests passed in staging,” this approach helps me catch issues that traditional pre-release testing might miss.
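
Here is a minimal sketch of the kind of post-deployment smoke check I run, assuming a hypothetical production URL and a short list of critical paths. The real value is re-walking the user journey after release, not the script itself.

```python
# A minimal post-deployment smoke check against a hypothetical production URL.
# The base URL and paths below are examples, not a real system.
import requests

PROD_URL = "https://app.example.test"   # hypothetical production base URL

CRITICAL_PATHS = [
    "/health",
    "/login",
    "/api/orders/recent",
]


def smoke_check() -> bool:
    all_ok = True
    for path in CRITICAL_PATHS:
        response = requests.get(f"{PROD_URL}{path}", timeout=10)
        ok = response.status_code < 400
        print(f"{path}: {response.status_code} {'OK' if ok else 'FAILED'}")
        all_ok = all_ok and ok
    return all_ok


if __name__ == "__main__":
    # Exit non-zero so a CI pipeline can flag a failed post-deployment check.
    raise SystemExit(0 if smoke_check() else 1)
```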

The results: How everything has changed 

Since I started testing more intentionally, things have changed. My bug reports have become more detailed and insightful, making it easier for developers to debug issues. We now tend to catch critical bugs earlier, and the feedback loop between testing and development has improved. 

The real shift happened when my reports started sparking discussions. Instead of filing bare-bones defect reports, I highlight how users might experience problems. Testing now feels more collaborative, and my insights extend beyond simply following test cases. 

And the impact is clear: 

  • We have fewer client-reported issues. 
  • Critical bugs are caught earlier. 
  • Testing is now more about exploration than simply ticking boxes. 

But the biggest win? Realizing that I was not just testing the product; I was genuinely improving its quality. 

To wrap up

If you are feeling stuck in a robotic testing loop, try this tomorrow: 

  • Close unnecessary tabs and actually focus on the app. 
  • Ask yourself, "What is the weirdest thing a user might do here?" 
  • Explain a bug out loud before writing it down. 

It is a small shift, but it makes all the difference. Testing is not about being perfect. It is about seeing things differently. And once I started doing that, I never tested the same way again.

SDET
Software Tester | Quality Advocate | OSS Enthusiast | Visit https://beinghumantester.github.io/ to learn more about me.