Architectural diagrams
  • Ady Stokes
Architectural diagrams are visual maps, or blueprints, that show how a software system is put together. They highlight the key components, how they connect, and how information flows between them. You don’t necessarily need every tiny detail of the system. The value comes from seeing the structure clearly enough to understand the system, ask relevant questions, identify potential risks or gaps, and see how changes and data might travel through it.

A helpful diagram makes the often invisible visible. It shows where complexity lives, where bottlenecks might appear, and which parts of the system you’ll want to explore, test, or monitor more closely. A diagram can also play an important role in understanding a system's testability. Whether it is a quick whiteboard sketch used to support a discussion or decision, or a formal diagram in a tool, the goal is the same: a shared understanding of the system that helps teams make smarter quality and testing decisions.
Software architecture
  • Ady Stokes
Software architecture is the high-level shape, or blueprint, of a software system. Context, constraints, and existing circumstances lead to a set of architectural decisions that define how the parts, or components, fit together, how they communicate, how data is used and flows, and how the system evolves, changes, and stays reliable over time. Architecture, most often viewed through architecture diagrams, gives us the big picture so we can make smart choices about design, testing, performance, security, and where the risks sit.

You don’t need to know every detail to benefit from understanding architecture. Even reading a simple view of the main components, their responsibilities, and the data flows can help you ask better questions, spot gaps earlier, and plan testing that aligns with how the system actually works.
Test doubles
  • Rosie Sherry
Test doubles are a general category that includes mocks, stubs, fixtures and spies. Their purpose is to make writing a test easier by substituting an inconvenient aspect of the application with a test-friendly alternative.
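As a minimal sketch, a test double built with Python's `unittest.mock` might look like this. The `checkout` function and the payment gateway are hypothetical names invented for illustration:

```python
# A stub stands in for an inconvenient collaborator (a real payment
# gateway) with a canned, test-friendly answer.
from unittest.mock import Mock

def checkout(gateway, amount):
    """Charge the customer via the gateway; return True on approval."""
    response = gateway.charge(amount)
    return response == "approved"

# Stub: the double returns a fixed value instead of calling a real API.
gateway = Mock()
gateway.charge.return_value = "approved"

assert checkout(gateway, 42) is True
# Spy-like check: verify the collaborator was called as expected.
gateway.charge.assert_called_once_with(42)
```

The same `Mock` object acts as both a stub (canned return value) and a spy (recording how it was called), which is why these variations all sit under the "test double" umbrella.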
Rollback
  • Emily O'Connor
A rollback is the re-deployment of a prior version of code. It is often performed as a fast fix to avoid negative user impact when a regression bug or other defect is found shortly after a deployment. A rollback is not required if the new code or feature is hidden behind a feature toggle that can simply be turned off.

While rollbacks limit negative user impact and downtime, they are not always adopted: a rollback is not a permanent fix, so it adds developer effort later. Rollbacks need to be communicated effectively so that future deployments don’t redeploy the buggy code to production, which can also slow down other developers’ work. They also require coordination with other developers to avoid introducing breaking changes between the current commit and the rolled-back version.
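The feature-toggle alternative mentioned above can be sketched in a few lines. The flag name and checkout functions here are hypothetical, invented for illustration:

```python
# Hypothetical sketch: a feature toggle lets you disable new code
# without redeploying a prior version.
FEATURE_FLAGS = {"new_checkout": False}  # flipped off after a defect report

def legacy_checkout_flow(total):
    # Known-good prior behaviour.
    return {"status": "ok", "total": total}

def new_checkout_flow(total):
    # Buggy new path, now dark behind the toggle.
    raise RuntimeError("regression found after deploy")

def checkout(cart_total):
    if FEATURE_FLAGS["new_checkout"]:
        return new_checkout_flow(cart_total)
    return legacy_checkout_flow(cart_total)

assert checkout(100) == {"status": "ok", "total": 100}
```

Because the old path is still in the deployed artefact, turning the flag off restores known-good behaviour instantly, without a rollback deployment.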
Fizzy minds
  • Clare Norman
A fizzy mind is when your thoughts bubble and spark like your favourite carbonated drink. You may feel that you are overflowing with energy and imagination. It often happens in the midst of lively conversation with peers, where each person’s perspective collides with another, igniting fresh insights. One idea fizzes into the next, cascading into a chain reaction of creativity until your mind feels carbonated, sparkling and joyfully alive with possibility.
A/B testing
  • Ady Stokes
A/B testing is a controlled way to roll out a new software version to a subset of users so you can monitor real-world behaviour and limit its exposure while you gather valuable information about user interaction. Through this technique, you can learn which version of the software is more successful or better liked by users.

Sometimes called an A/B release, it means you ship two versions of your product and route traffic so some people see the old version and some see the new one. Think of it as a staged rollout with built-in measurement, so you can stop or roll back quickly if the new version causes bugs or does unexpected harm. More broadly, it is an experimentation method that compares variants to learn which performs better.

A simple example of an A/B release: you want to change the checkout flow on an e-commerce site, but you are worried about payment failures. You release the new checkout to 10 per cent of live traffic while 90 per cent keep the old flow. You monitor payment success rates, error logs, and customer support tickets for that 10 per cent, and if problems arise, you halt the rollout and fix the issue before increasing exposure. This is an A/B release because the goal is a safe rollout and operational control, not a pure hypothesis test on conversion rates.

Here are some practical tips when considering A/B testing. Keep the metrics you watch simple and relevant to safety and user value. Automate rollback triggers for obvious failure modes, and treat the rollout as a learning loop where customer reports and small qualitative notes matter as much as conversion numbers. The two main uses are protecting customers and systems, as in the e-commerce payment example, and answering a clear product hypothesis about what works best, e.g. a button that says 'Buy now' versus one that says 'Special offer'.
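One common way to implement the 10 per cent split in the example above is to bucket users by a stable hash of their ID, so each user always sees the same variant. The function name and threshold here are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of an A/B release split: route ~10% of users to the
# new variant using a stable hash, so assignment is consistent per user.
import hashlib

def variant(user_id: str, rollout_percent: int = 10) -> str:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100          # 0..99, stable for a given user
    return "new" if bucket < rollout_percent else "old"

# The same user is always bucketed the same way across requests.
assert variant("user-123") == variant("user-123")

# Across many users, roughly 10% land in the new variant.
counts = {"new": 0, "old": 0}
for i in range(10_000):
    counts[variant(f"user-{i}")] += 1
assert 700 < counts["new"] < 1300
```

Hash-based bucketing avoids storing per-user assignments and makes it easy to widen the rollout by raising `rollout_percent`, or to halt it by setting the threshold to zero.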
Software change
  • Ady Stokes
This is the code-and-release side. It includes updates to features, bug fixes, configuration changes, infrastructure tweaks, deployments, and anything that alters how the product behaves. Managing these changes well means making risks visible, validating assumptions, testing early, and ensuring releases move through a predictable, safe path into production. In many organisations, this is done through tickets and committee discussions, informed by information and risk analysis. The greater the risk, as in industries like finance, medicine, and aviation, the more likely the process will be highly formal.

Done well, change management brings clarity and shared understanding, so teams can move faster with fewer surprises. Done poorly, it can be overly complicated, cause delays and frustration, and add costs. When we handle change intentionally and well, whether it’s how we work or what we ship, we build trust, reduce risk, and make quality a natural part of our delivery.
Environmental change
  • Ady Stokes
This is the people-and-process side. It covers shifts in team structure, new ways of working, changes in organisational priorities, and the introduction of new tools and practices. These changes affect how we think, communicate, and collaborate. Good environmental change management helps teams understand what’s happening, why it matters, and how it will affect their day-to-day work. It reduces friction, builds confidence, and gives everyone space to adapt.
Change management
  • Ady Stokes
Change Management is the structured way we help changes land safely and sensibly, rather than leaving teams to deal with surprises. It comes in two main flavours, and both matter if we want to build and deliver quality consistently.
Sequential development models (SDMs)
  • Kat Obring
A family of delivery approaches where work moves through clearly defined phases in a fixed order, such as requirements, design, development, testing, and release. Each phase must be completed before the next one begins, and changes late in the process are costly because they require revisiting earlier stages. These models assume stability of requirements and low variability in the delivery process.
Agentic enterprise
  • Aj Wilson
A business model where humans and intelligent AI agents work together to improve efficiency and decision-making. Unlike traditional automation, agentic AI doesn’t just follow rules: it can reason, adapt, and act autonomously. These AI agents handle complex, multi-step tasks through a continuous cycle of perception, reasoning, and action, enabling dynamic problem-solving.

For quality engineers and testers, this means AI agents assist in testing and quality processes, reducing repetitive work, while humans focus on strategic, creative, and high-value activities. The proposed impact is a better employee experience, faster delivery, and improved customer satisfaction.
Availability heuristic
  • Emily O'Connor
The availability heuristic is a cognitive bias where people rely on the immediate examples or information that come to mind when making decisions or judgements. This can lead people to overestimate the likelihood of events or outcomes based on how easily they can recall similar instances, regardless of how rare or common those events actually are.

In software testing, here are three examples: a person's tendency to lean towards a testing tool or methodology simply because they used or learned it recently, even when a different tool or approach would be more suitable; when encountering a bug, a person's tendency to assume its cause is something they’ve seen recently, while the actual root cause could be a different, perhaps more obscure, issue entirely; and using the same set of functional test cases across multiple projects without adapting them to the unique requirements or challenges of the current project.

The availability heuristic can be countered by discussing ideas with peers (who might offer different perspectives or tooling choices), favouring data-driven decisions (using data to understand root causes or prioritise testing efforts), and using risk analysis to guide testing, i.e. focusing on high-risk areas of the software based on business impact, user behaviour, and technical complexity, rather than relying on gut feeling or past experience alone.
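The risk-analysis mitigation described above can be sketched as a simple scoring exercise. The areas, weights, and formula below are invented purely for illustration of data-driven prioritisation over gut feeling:

```python
# Hypothetical sketch: score product areas by business impact and
# defect history, then test the highest-risk areas first, instead of
# testing whatever comes to mind most readily.
areas = [
    {"name": "payments", "impact": 5, "recent_defects": 4},
    {"name": "search",   "impact": 3, "recent_defects": 1},
    {"name": "profile",  "impact": 2, "recent_defects": 0},
]

def risk_score(area):
    # Illustrative formula: impact scaled by observed defect history.
    return area["impact"] * (1 + area["recent_defects"])

ranked = sorted(areas, key=risk_score, reverse=True)
assert ranked[0]["name"] == "payments"  # highest score: 5 * (1 + 4) = 25
```

Even a crude model like this makes the prioritisation explicit and debatable, which is exactly what the availability heuristic tends to prevent.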