
Heisenbugs: Handling software defects you can't reproduce

Reimagine irreproducible "Heisenbugs" as opportunities to strengthen your team's debugging practices

Cover image: a desert landscape with mountains, an RV in the foreground, and a character in sunglasses and a hat standing next to a sign that says “Testing Lab”.

"Adopting a mindset that views each defect, even those difficult to reproduce, as an opportunity to strengthen the team's debugging practices can lead to long-term improvements in software quality."

Now you see it, now you don't: what is a Heisenbug?

Have you ever encountered a defect that seems to defy logic and elude every attempt at replication? 

If you answered 'Yes', I can assure you, you are not alone. 

This type of defect can often occur under seemingly random conditions, meaning we have no reliable way to determine the steps required to reproduce the issue. Often, the only information we have to work with is a vague description such as "I encountered this issue while following a specific flow, but I haven't been able to reproduce it since." 

Because of this, these issues are often referred to as “irreproducible defects” or, as I recently found out, “Heisenbugs.” One of the defining characteristics of Heisenbugs is that any attempt to observe or debug the issue can change the behaviour of the application code: the simple act of trying to observe the issue inadvertently changes the conditions under which it occurs. This will be discussed further below.

When paying closer attention makes things worse: the observer effect

The term “Heisenbug” is a playful pun on the name of Werner Heisenberg, the well-known physicist whose work in quantum mechanics is closely associated with the observer effect: the idea that the act of observing a system inevitably disturbs it.

“The observer effect (not to be confused with the uncertainty principle) is the fact that observing a situation or phenomenon necessarily changes it. Observer effects are especially prominent in physics where observation and uncertainty are fundamental aspects of modern quantum mechanics. Observer effects are well-known in fields other than physics, such as sociology, psychology, linguistics and computer science.”
- K. Baclawski et al., “The Observer Effect”, IEEE publication

An example would be checking your tyre pressure. By attaching a tyre gauge to the valve, you’re almost guaranteed to lose a small amount of air. Simply by trying to read the tyre pressure, you’ll have changed it.

With this in mind, we can assume that the mere process of trying to reproduce a defect could change the behaviour of the code enough that the defect no longer occurs. 

An example Heisenbug: a story of failure to sync

How a synchronisation defect might arise

Imagine we have to test an application that uses multithreading, a method of execution that allows multiple threads to be created within a process. Each thread executes by itself but shares process resources with other threads. Building such an application requires meticulous programming to avoid potential issues such as race conditions and deadlocks. (Multithreading: Techopedia article by Margaret Rouse)

Within the application code, the developers create a function that modifies a shared counter variable. Ideally, the developers will use synchronisation to ensure that each thread reaches a known point of operation in relation to other threads before continuing. However, if synchronisation is not implemented, both threads will be able to modify the shared counter simultaneously. (IBM Documentation: Synchronization techniques among threads)

To simplify this, think of it this way. 

We have a counter variable that starts with the value ‘0’. Thread one updates the counter to ‘1’ and, at the same time, thread two updates it to ‘2’.

Which is the correct value?

In this scenario, there is simply no way to be sure, and this would likely lead to unpredictable behaviour within the application.
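
To make this concrete, here is a minimal sketch in Python (a hypothetical illustration, not the code from any real application) showing two threads performing an unsynchronised read-modify-write on a shared counter, alongside the synchronised version described above:

```python
import threading

counter = 0              # shared state
lock = threading.Lock()  # used only by the synchronised version

def unsafe_increment(times: int) -> None:
    """Unsynchronised read-modify-write on the shared counter."""
    global counter
    for _ in range(times):
        current = counter      # read
        # If the scheduler switches threads here, the other thread's
        # update is overwritten below: a "lost update".
        counter = current + 1  # write

def safe_increment(times: int) -> None:
    """The same operation, serialised with a lock."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

def run(worker) -> int:
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# Both runs should ideally print 200000. The unsafe run often comes up
# short, but not on every attempt -- which is exactly what makes race
# conditions behave like Heisenbugs.
print("unsafe:", run(unsafe_increment))
print("safe:  ", run(safe_increment))
```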

The observer effect in action

Trying to debug such an issue can be incredibly difficult. By adding print statements, logging, or using a debugger, the threads' timing could change enough to make the race condition considerably less likely to manifest. This would give the illusion that the defect has disappeared.
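
To illustrate (and this is only a sketch: scheduling effects vary by platform and load), imagine instrumenting the unsafe loop from the example above with a log statement. The extra I/O can dominate each iteration, so the window in which the race can occur is hit far less often, and the lost updates may all but disappear even though the code is still wrong:

```python
import logging

logging.basicConfig(level=logging.DEBUG)

def instrumented_increment(times: int) -> None:
    """The unsafe loop again, now logging on every iteration.

    The I/O and internal locking inside the logging call can change
    thread scheduling enough that the race rarely (or never) shows up
    while you're watching -- the observer effect in action.
    """
    global counter
    for _ in range(times):
        logging.debug("counter is currently %d", counter)
        current = counter
        counter = current + 1
```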

Irreproducible defects or “Heisenbugs” like this can be a real headache for both developers and testers, since they often manifest unexpectedly and vanish without a trace. The very nature of these defects can make them incredibly difficult to diagnose, document and fix, and can lead to frustration between developers and testers. 

Dispelling the "It works on my machine…" argument

Dealing with elusive defects that we can't reproduce reliably can be one of the most challenging aspects of software development. These issues demand not only technical expertise but also a strong commitment to teamwork and clear communication. All team members must approach such problems with an open mind, recognising that just because an issue doesn’t manifest in one environment doesn’t mean it doesn’t exist in another. 

Dismissing concerns with a casual "It works on my machine" mentality undermines the collaborative spirit necessary to find a solution and improve the overall quality of the project. Instead, these challenges should be seen as opportunities to deepen understanding, enhance cooperation, and build more resilient systems.

What's a software team to do? 

Irreproducible defects can stem from a variety of factors, including rare timing conditions, specific hardware or software configurations, or intricate interactions within the software. Understanding and fixing such defects requires a keen eye for detail, a systematic approach, and a lot of patience. Each attempt to debug the issue can feel like a lost cause, but patience and communication are key.

There is a fantastic article on the Ministry of Testing website written by Rahul Parwal. It's called “Taming The Beast Of Irreproducible Bugs: Finding Opportunities in Chaos” and has some great ideas for how to approach the investigation of such defects.  

But I’d like to discuss what happens if the defect remains irreproducible.

Understanding the defect's context as a team

“Context: The situation in which something happens and that helps you to understand it.”
- Oxford Learner's Dictionaries

All defects that are raised require a deep and broad understanding of context. Context encompasses various factors such as the environment in which the software operates, the specific conditions under which the defect was discovered, and the sequence of actions leading up to the issue. Without this information, it can be incredibly challenging to grasp the full implications of the defect, and this could dramatically alter its perceived risk and impact.

Here are some considerations that may help to shape your approach and to determine if you should continue to debug an irreproducible defect:

Evaluating potential impact

  • Assess the potential severity of the defect. If the defect could cause data loss, security vulnerabilities, or significant user disruption, it may warrant further investigation even if it's hard to reproduce.
  • Consider how often the defect might occur. A rare but catastrophic defect may still justify a deep investigation, while a rare and minor issue may not.

Evaluating the return on investment of continued investigation

  • Evaluate how much time and how many resources are being consumed in trying to reproduce the defect. If the effort is disproportionate to the potential impact, it might not be worth pursuing further.
  • Consider what other work or improvements could be overlooked or delayed if the team were focused on replicating only one defect. Sometimes, focusing on known, reproducible issues or new features can be a better use of time.

Gathering all available information

  • Make sure the defect details are documented and communicated to the team. This should include the expected versus actual behaviour, the place where you saw the issue occur, information about the environment (such as software version, operating system, hardware, and configurations), the frequency of occurrence, and any other pertinent information.
  • Check whether you have sufficient logs, error reports, or telemetry data. If there's little to no information, it may be challenging to make progress. As mentioned above, you may not always get this kind of data for these defects, so the team should not rely solely on the information provided by logging.
  • Sometimes the end user can provide valuable insights or patterns that can help reproduce the issue. If there's a consistent user report, it might be worth investigating further.
  • Seek advice from others inside and outside your organisation. You may not be the only person to experience the issue. Research it online, speak to other teams within your organisation and check for defect reports online (GitHub issues for example). You might find good documentation of the issue and how to reproduce it in repositories for third-party packages and open-source software.

Monitoring the system and mitigating the defect's potential effects

  • Is it possible to implement a workaround or safeguard that minimises the potential impact of the defect? For example, if the defect resides within a third-party package used by your code, could you switch to a different package that doesn't exhibit the issue? This removes the need for further investigation. 
  • Set up monitoring to capture more detailed data in case the defect recurs. This could provide clues for future investigation. 
  • Consider creating automation scripts that watch for specific keywords or error signatures, and schedule them to run periodically throughout the day and night (a minimal sketch of such a watcher follows this list). Doing so may provide good insight into what is causing the defect to occur. It may also highlight whether the issue occurs at a certain time. This information could then be correlated with other services and actions that are triggered in the background, for example, backups or anti-virus updates. 
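
As a sketch of such a watcher (the log path, keywords, and report file below are hypothetical placeholders, not a real configuration), a small script can scan application logs for suspicious phrases and record a timestamp for every hit. Scheduled with cron or Task Scheduler, the resulting timestamps can later be correlated with background jobs:

```python
import datetime
from pathlib import Path

# Hypothetical placeholders -- point these at your own application.
LOG_FILE = Path("/var/log/myapp/app.log")
KEYWORDS = ("deadlock", "timeout", "sync failed")
REPORT = Path("heisenbug_watch.csv")

def scan_once() -> None:
    """Scan the log for keywords, appending timestamped hits to a report."""
    now = datetime.datetime.now().isoformat(timespec="seconds")
    with LOG_FILE.open(errors="replace") as log, REPORT.open("a") as report:
        for line in log:
            for keyword in KEYWORDS:
                if keyword in line.lower():
                    report.write(f"{now},{keyword},{line.strip()}\n")

if __name__ == "__main__":
    # A real watcher would remember its last file offset so that old
    # hits aren't re-reported; this sketch keeps things minimal.
    scan_once()
```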

Reconciling team and stakeholder priorities

  • Consider if the defect would be detrimental to user or stakeholder experience if it appeared in production further down the line. If a defect is a high priority for users or business stakeholders, it may justify continued effort.
  • Make sure the team is fully aware of the defect and collaborate to decide whether to continue investigating. Sometimes a fresh perspective can make a difference.

To wrap up: Is it worth the effort to try to root-cause a Heisenbug?

The decision-making process about whether more time should be spent investigating a Heisenbug should be a collaborative effort that involves all relevant stakeholders, including developers, QA engineers, product managers, and even end users when appropriate. This ensures that diverse perspectives are considered, leading to a more informed and balanced outcome. Furthermore, adopting a mindset that views each defect, even those difficult to reproduce, as an opportunity to strengthen the team's debugging practices can lead to long-term improvements in software quality.

Ultimately, the choice to continue or cease the investigation should be made with a balanced consideration of risk, user experience, and resource efficiency. Addressing these challenges isn’t just about fixing the immediate problem. It's also about fostering a collaborative effort to enhance the development process, resulting in more resilient and reliable software.

James Wadley, SDET