Aj Wilson
Quality Engineering Manager II / Technical Development Lead / Chief Quality Officer
She/Her
I am Open to Write, Teach, Work, Speak
Next Gen 'Generalist' - Quality and Testing Leadership for over 20 years.

Achievements

Career Champion
Club Explorer
Bio Builder
Avid Reader
TestBash Trailblazer
Article Maven
MoT Community Certificate
Trend Spotter Bronze
99 Second Speaker
The Testing Planet Contributor
MoT Streak
In the Loop
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2024 Attendee
Cert Shaper
TWiT Host
Meetup Contributor
Pride Supporter

Contributions

STEC is complete 🎉: 19 modules, 59 voices, and a jam-packed portfolio to show your growth
  • Cassandra H. Leung
  • Beth Marshall
  • Jenny Bramble
  • Lisa Crispin
  • Suman Bala
  • Mark Winteringham
  • Dan Ashby
  • Janet Gregory
  • Lena Nyström
  • Elizabeth Zagroba
  • Jenna Charlton
  • Melissa Eaden
  • Maaike Brinkhof
  • Ash Winter
  • Hilary Weaver
  • Nicola Lindgren
  • Beren Van Daele
  • Gwen Diagram
  • Jesper Ottosen
  • Louise Gibbs
  • Parveen Khan
  • Daniel Knott
  • Simon Tomes
  • Sarah Deery
  • Ady Stokes
  • Christine Pinto
  • Oleksandr Romanov
  • Aj Wilson
  • Lewis Prescott
  • James Wadley
  • Brittany Stewart
  • Melissa Fisher
  • Joyz Ng
  • Scott Kenyon
  • Marie Cruz
  • Emna Ayadi
  • Ben Dowen
  • Veerle Verhagen
  • Rosie Sherry
  • Mirza Sisic
  • Richard Adams
  • Julia Pottinger
  • Rahul Parwal
  • Callum Akehurst-Ryan
  • Mahathee Dandibhotla
  • Karen Tests Stuff
  • Barry Ehigiator
  • Rabi'a Brown
  • Jesse Berkeley
  • Hanisha Arora
  • Philippa Jennings
  • Kat Obring
  • Nataliia Burmei
  • Judy Mosley
  • Hanan Ur Rehman
  • Emily O'Connor
  • Manish Saini
  • Maddy Kilsby-McMurray
Eighteen months, 19 modules, and 59 amazing contributors later, the MoT Software Testing Essentials Certification is complete! Looking back, my favourite part has been seeing so many community m...
Quishing
  • Aj Wilson
Heard of phishing? Quishing is QR code phishing. Criminals stick fake QR codes over real ones. You scan what looks like a restaurant menu and BAM – you've just downloaded malware or given away your passwords. Here's the scary part: your phone can't tell good codes from bad ones. That innocent square could steal your banking details, install spyware, or drain your crypto wallet. The same rules you'd apply to a suspicious link apply to QR codes: always preview the destination and look for HTTPS. If it's asking for passwords or downloads, run away. QR codes aren't going anywhere – they're too convenient. But convenience without caution is just an invitation to hackers. Scan smart, stay safe.
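The "preview the destination" advice above can be sketched in code. This is a minimal, illustrative set of heuristics on an already-decoded QR payload (the function name and the specific red flags are my own examples, not a real security library):

```python
# Sketch: basic checks on a decoded QR payload before opening it.
# Illustrative heuristics only -- not a substitute for real security tooling.
from urllib.parse import urlparse

def looks_risky(decoded: str) -> list:
    """Return a list of warnings for a decoded QR code payload."""
    warnings = []
    url = urlparse(decoded)
    if url.scheme != "https":
        warnings.append("not HTTPS")
    # a bare IP address instead of a named domain is a common red flag
    if url.hostname and url.hostname.replace(".", "").isdigit():
        warnings.append("raw IP address instead of a domain")
    if decoded.lower().endswith((".apk", ".exe")):
        warnings.append("points at a downloadable executable")
    return warnings

print(looks_risky("http://192.168.0.5/menu.apk"))
# flags all three heuristics for this fake "menu" link
```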
Shadow Testing
  • Aj Wilson
Shadow testing lets engineers run a dress rehearsal for code in production, without any risk to real users. You take a new version of a service or feature and run it in parallel with the live version. It gets the same real-world traffic or data, but its results are not shown to users and don't affect the system; the data can even be excluded from analytics or specific reporting.
Why it's useful for testers:
  • You can compare outputs between old and new versions
  • It helps catch unexpected bugs or performance issues before a full rollout
  • It's great for testing machine learning models, API changes, or refactored logic
Real-world example: testing the purchase of consumer finance quotes against cars on eCommerce sites. This enables payments, checks on specified car data, and interaction with third parties without impacting financial reporting, user PII, or governance standards. It could be driven by a GUI launched from a comms tool, a CLI, or a feature flag, for example.
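The pattern above can be sketched in a few lines. This is a minimal, hypothetical example (the quote functions and fee logic are made up): the live version answers the user, the shadow version sees the same input, and only a mismatch log records any divergence.

```python
# Minimal shadow-testing sketch. The user only ever sees the live result;
# the shadow (new) version runs in parallel and its output is discarded,
# except for an offline comparison log.

def live_quote(price: float) -> float:
    """Current production logic: flat 5% fee (hypothetical)."""
    return round(price * 1.05, 2)

def shadow_quote(price: float) -> float:
    """New version under test: tiered fee (hypothetical)."""
    fee = 0.04 if price >= 10_000 else 0.05
    return round(price * (1 + fee), 2)

mismatches = []

def handle_request(price: float) -> float:
    result = live_quote(price)        # what the user actually gets
    candidate = shadow_quote(price)   # shadow run: never shown to the user
    if candidate != result:
        mismatches.append((price, result, candidate))  # compare offline
    return result

for p in (500.0, 12_000.0):
    handle_request(p)

print(mismatches)  # only the high-value quote diverges
```

In a real system the shadow call would be asynchronous and fenced off from side effects (payments, emails, analytics), but the compare-and-log shape is the same.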
Observability
  • Aj Wilson
Observability is about making your system understandable from the outside. It helps testers understand why something broke or failed, not just that it broke. Monitoring tells you something's broken; observability helps you figure out why. For testers, observability is a game-changer. Instead of guessing or relying on devs to dig into the code, you can use observable signals to pinpoint issues, validate assumptions, and even test in production with confidence. It means having the right tools and data, like logs, metrics, and traces, so you can answer questions like:
  • What's going wrong?
  • Where is it happening?
  • Why did it happen?
  • "If I pop this data in here, what happens?"
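A tiny sketch of one of those signals: a structured log line carrying a correlation/trace id, so a tester can follow one request across services. The field names and event name here are illustrative, not any particular vendor's schema:

```python
# Sketch: emit a structured (JSON) log record with a trace id,
# so one request can be followed across services and log queries.
import json
import time
import uuid

def emit(event: str, **fields) -> dict:
    record = {
        "ts": time.time(),
        "trace_id": fields.pop("trace_id", str(uuid.uuid4())),
        "event": event,
        **fields,
    }
    print(json.dumps(record))  # in production this goes to a log pipeline
    return record

rec = emit("checkout.failed", trace_id="abc-123",
           user_segment="trial", latency_ms=842, error="card_declined")
```

Querying logs by `trace_id` is what lets you answer "where is it happening?" without reading the source code.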
Guardrails
  • Aj Wilson
Guardrails are predefined rules, checks, or constraints that keep systems operating safely and predictably. In AI, they prevent models from producing harmful, biased, or off-topic outputs. In software testing, they ensure that code behaves within expected boundaries and doesn't regress or break under edge cases. Think of the bumpers you pop up on the sides of the lanes in ten-pin bowling: they don't play the game for you, but they try to prevent things from going wrong as best they can.
Why it matters: guardrails help build trustworthy, robust, and production-ready systems, whether you're deploying a model or shipping code. Guardrails:
  • Validate inputs and outputs
  • Catch regressions early
  • Ensure compliance with specs and standards
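A minimal sketch of an output guardrail, assuming made-up constraints (a length cap and a small list of banned terms). The check runs between the model/service and the user, and rejects anything outside the boundary:

```python
# Guardrail sketch: validate output before it reaches users.
# The length limit and banned terms are illustrative assumptions.

BANNED = {"password", "ssn"}

def guard_output(text: str, max_len: int = 200) -> str:
    """Return the text unchanged if it passes every check, else raise."""
    if len(text) > max_len:
        raise ValueError("output too long")
    if any(term in text.lower() for term in BANNED):
        raise ValueError("output leaks sensitive terms")
    return text

print(guard_output("Your order has shipped."))
try:
    guard_output("the password is hunter2")
except ValueError as e:
    print(e)  # output leaks sensitive terms
```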
Confabulation (in AI)
  • Aj Wilson
In AI, confabulation refers to when a model generates information that sounds plausible but is factually incorrect or entirely made up. It's not a bug in the code (sad face); it's a byproduct of how language models predict text based on patterns, not truth. Like an autocomplete that's too confident, it fills in gaps with convincing but false details.
Why it matters: confabulations can mislead users, skew test results, or introduce subtle errors in automated workflows or code. Always validate critical outputs, especially in regulated domains like healthcare, finance, or law.
Hallucination is when an AI generates output that is factually incorrect or nonsensical, even though it sounds plausible. Confabulation is when an AI fills gaps in its knowledge with fabricated but coherent details, often due to missing or ambiguous input. Hallucinations are more common in chatbots, summarisation, Q&A, etc.; confabulations are more common in memory-based models, storytelling, explanation, etc.
Feature Flag
  • Aj Wilson
A feature flag (some call it a feature toggle) is a conditional switch in your code that lets you control whether a feature is active internally or for users. Flags can be used across the tech stack, including in a CI/CD pipeline or a test strategy.
Most commonly used to:
  • Gradually roll out features to users
  • Test in production safely
  • Hide unfinished work
  • Run A/B or multivariate testing/experiments
Why it's useful for testers (TEVS):
  • Test new features in isolation
  • Enable controlled anomaly detection in observability
  • Validate behaviour across different toggle states
  • Simulate different user experiences without multiple builds
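At its simplest, a feature flag is just an `if` around a conditional lookup. A minimal in-memory sketch (real systems usually fetch flags from a flag service; the flag name and rollout logic here are made up for illustration):

```python
# Minimal feature-flag sketch with a deterministic percentage rollout.
# Flag config is hard-coded; in practice it comes from a flag service.

FLAGS = {"new_checkout": {"enabled": True, "rollout_pct": 25}}

def is_enabled(flag: str, user_id: int) -> bool:
    cfg = FLAGS.get(flag, {})
    if not cfg.get("enabled"):
        return False
    # same user always gets the same answer, so tests are repeatable
    return user_id % 100 < cfg.get("rollout_pct", 0)

def checkout(user_id: int) -> str:
    return "new flow" if is_enabled("new_checkout", user_id) else "old flow"

print(checkout(7), checkout(90))  # 7 falls inside the 25% bucket, 90 does not
```

For testers, the useful part is that each toggle state is just an input: you can flip `FLAGS` in a test to exercise both paths without a separate build.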
Quality engineers' hands before and after AI engineering started being used...
Quality engineers before vs after AI prompts started being used. Anyone else feel like they're turning into authors as well as testers with the amount of typing we're having to do!
Spec Driven Development (SDD)
  • Aj Wilson
Spec Driven Development is a way of building software where you start with a clear, detailed specification of what the system should do, before any code is written. Think of it as writing the rules of the game first, so everyone knows how to play.
Why testers/QEs should care: we get a solid, agreed-upon reference to base our tests and risk storming/pre-mortems on. No more guessing what the feature is supposed to do or chasing down vague requirements. If it's in the spec, it's testable. A "spec" might include API definitions (like OpenAPI/Swagger), data formats and validation rules, business logic and edge cases, and user interaction flows.
SDD started trending again in 2025 for a few key reasons, largely driven by the rise of AI-native development tools and a shift in how teams collaborate with intelligent systems. Many AI tools are now making SDD central to how they develop.
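The "if it's in the spec, it's testable" idea can be made concrete. A toy sketch, assuming a hand-written spec as a plain dict (the endpoint and field names are invented, and a real spec would be an OpenAPI document rather than Python data):

```python
# SDD sketch: a tiny machine-readable "spec" and a check derived from it.
# The spec itself is the single source of truth the test reads from.

SPEC = {
    "endpoint": "/quotes",
    "response_fields": {"id": int, "amount": float, "currency": str},
}

def validate_response(resp: dict) -> list:
    """Return a list of spec violations (empty list = conforms)."""
    errors = []
    for field, ftype in SPEC["response_fields"].items():
        if field not in resp:
            errors.append(f"missing field: {field}")
        elif not isinstance(resp[field], ftype):
            errors.append(f"wrong type for {field}")
    return errors

good = {"id": 1, "amount": 99.5, "currency": "GBP"}
bad = {"id": "1", "amount": 99.5}
print(validate_response(good))  # []
print(validate_response(bad))   # ['wrong type for id', 'missing field: currency']
```

When the spec changes, the check changes with it, which is the whole point: tests and implementation disagree with the spec loudly instead of silently.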
Model-based testing
  • Aj Wilson
Model-Based Testing (MBT) is a software testing technique where test cases are automatically generated from a model that describes the expected behaviour of the system under test (SUT). Instead of writing test cases manually, testers create a model, and the tool or framework generates test cases from it.
Advantages of MBT:
  • Automation: reduces manual effort in test case creation
  • Consistency: ensures tests align with system requirements
  • Coverage: helps achieve high test coverage systematically
  • Early testing: models can be tested before code is written
MBT has strong connections to Artificial Intelligence (AI) in model generation, test case optimisation, anomaly detection (think Grafana and Datadog as live examples), and more.
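A minimal sketch of the generation step, assuming an invented login flow as the model: the model is a state machine, and every event sequence up to a chosen depth becomes a candidate test case, none of them hand-written.

```python
# Model-based testing sketch: derive test steps from a small state model
# of a hypothetical login flow.

# states and allowed transitions (the "model")
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_paths(start: str, depth: int) -> list:
    """Enumerate every event sequence of exactly `depth` steps."""
    paths = [([], start)]
    for _ in range(depth):
        next_paths = []
        for events, state in paths:
            for event, target in MODEL[state].items():
                next_paths.append((events + [event], target))
        paths = next_paths
    return paths

for events, final in generate_paths("logged_out", 2):
    print(events, "->", final)
# three two-step test cases fall out of the model automatically
```

An MBT tool then executes each generated sequence against the SUT and checks that the real system ends up in the state the model predicts.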
Thoughts from TWiT and Meetups
  • Aj Wilson
Realised from listening to This Week in Testing and being in various meetups and conferences this year - a handy wee tip. Hope it helps.
Quality people.
  • Aj Wilson
  • Lewis Prescott
  • James Wadley
  • Geir Gulbrandsen
  • Jesse Berkeley
  • Nataliia Burmei
Happy times gathering again at MoT London with these quality people.