SET
  • Aj Wilson
Software Engineer in Test, a term used more in Europe than in the UK. See also SDET: https://www.ministryoftesting.com/software-testing-glossary/sdet-software-development-engineering-in-tests
Canary Release
Canary Release (noun 😉)
A real-world production deployment strategy that rolls out a new software version or feature to a small, select subgroup of users, often incrementally, before making it available to the entire user base. This can act as an early warning system (risk mitigation). When paired with monitoring tools, incremental rollouts like this can enable automated rollbacks if predetermined error rates are detected (the 'canary' part). Unlike staging environments, a canary release tests how the update interacts with actual production data and diverse user behaviour.
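The rollout-plus-rollback logic can be sketched in a few lines. This is a minimal illustration, not a real deployment tool: the percentage, threshold, and function names are all hypothetical.

```python
import hashlib

CANARY_PERCENT = 5           # hypothetical: 5% of users see the new version
ERROR_RATE_THRESHOLD = 0.02  # hypothetical rollback trigger (2% errors)

def in_canary(user_id: str, percent: int = CANARY_PERCENT) -> bool:
    """Deterministically bucket users so the same user always gets the same version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def should_roll_back(errors: int, requests: int,
                     threshold: float = ERROR_RATE_THRESHOLD) -> bool:
    """The 'canary' part: monitoring triggers an automated rollback
    when the observed error rate exceeds the predetermined limit."""
    return requests > 0 and errors / requests > threshold
```

Raising the percentage in steps (e.g. 5 → 25 → 100) gives the incremental rollout described above.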
Agentic enterprise
A business model where humans and intelligent AI agents work together to improve efficiency and decision-making. Unlike traditional automation, agentic AI doesn't just follow rules - it can reason, adapt, and act autonomously. These AI agents handle complex, multi-step tasks through a continuous cycle of perception, reasoning, and action, enabling dynamic problem-solving.
For quality engineers and testers, this means:
- AI agents assist in testing and quality processes, reducing repetitive work.
- Humans focus on strategic, creative, and high-value activities.
Potential impact: it is proposed that this will bring a better employee experience, faster delivery, and improved customer satisfaction.
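The perception-reasoning-action cycle can be sketched as a simple loop. Everything here is illustrative (the functions and the rerun helper are invented for the example), not a real agent framework:

```python
def perceive(test_results: dict) -> dict:
    """Observe the environment: summarise the latest test outcomes."""
    return {"failed": [t for t, passed in test_results.items() if not passed]}

def reason(observation: dict) -> list:
    """Plan multi-step actions instead of following a fixed rule:
    here, rerun each failure once to separate flaky tests from real defects."""
    return [("rerun", t) for t in observation["failed"]]

def act(plan: list, rerun) -> dict:
    """Carry out the plan; the results feed back into the next cycle."""
    return {t: rerun(t) for _, t in plan}

def agent_cycle(test_results: dict, rerun) -> dict:
    """One full perceive -> reason -> act iteration."""
    return act(reason(perceive(test_results)), rerun)
```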
Acceptance testing
Acceptance testing is often the final check in software development to ensure the product meets goals and expectations before release.
Purpose of acceptance testing:
- Validates user and business needs to ensure satisfaction.
- Reduces post-launch risks by catching issues before release.
- Acts as a final verification before deployment.
- Identifies requirement gaps between developers and users.
Types of acceptance testing:
- Alpha Testing > internal testing by developers to catch early bugs.
- Beta Testing > real-world testing by external users before release.
- Business Acceptance Testing (BAT) > checks alignment with business goals and workflows.
- Contract Acceptance Testing (CAT) > ensures all contractual requirements are fulfilled.
- Operational Acceptance Testing (OAT) > confirms system readiness and infrastructure reliability.
- Regulation Acceptance Testing (RAT) > verifies compliance with industry regulations.
- User Acceptance Testing (UAT) > validates whether the software meets end-user needs.
ZeroFont Phishing
What is it? Hidden text in emails using font-size:0 or similar CSS tricks. The text appears in the preview pane but not in the visible body, to falsely reassure recipients.
How to test for it:
- Inspect raw HTML > look for <span style="font-size:0px"> or display:none tags.
- Compare preview vs body > if the preview mentions "secure" or "verified" but the body doesn't, flag it.
- Search for suspicious phrases > hidden text often says "This email is safe" or "Verified sender".
- Automation > flag any zero-font or hidden text in email HTML.
- Cross-client checks > test in Gmail, Outlook, and Apple Mail - as we all know, behaviour varies.
- Educate users and peers > remind them that preview text can be manipulated; verify sender and links before clicking.
See also: how to identify people using AI when applying for jobs...
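The automation step can start as small as a pattern check over inline styles. A minimal sketch (a real scanner should parse the full CSS; this regex only covers inline style attributes):

```python
import re

# Flags inline styles that hide text: font-size:0 (any unit) or display:none.
HIDDEN_STYLE = re.compile(
    r'style\s*=\s*"[^"]*(font-size\s*:\s*0|display\s*:\s*none)[^"]*"',
    re.IGNORECASE,
)

def flag_hidden_text(html: str) -> bool:
    """Return True if the email HTML contains a zero-font/hidden-text trick."""
    return bool(HIDDEN_STYLE.search(html))
```

Note that this naive pattern would also flag legitimate values like font-size:0.5px; a real check should anchor the number properly.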
Shadow Work
Shadow work refers to untracked, informal, or invisible tasks that consume significant time and effort but aren't reflected in official plans, metrics, or ticket systems. These tasks are essential for team success but often go unnoticed in capacity planning and performance reviews. Sometimes called 'glue work'.
Why it matters:
- Hidden capacity loss: teams appear to have full bandwidth, but shadow work can eat up 30-40% of time.
- Burnout risk: senior quality engineers often shoulder the bulk of invisible work.
- Promotion barriers: work that isn't documented rarely counts toward career progression.
- Misalignment: the business thinks engineering is slow; engineering feels misunderstood.
Three types of shadow work:
- Invisible production support: investigating alerts and errors, answering ad-hoc support questions, fixing issues outside the ticket flow. Impact: wasted hours on recurring problems, skipped quality steps, stability risks.
- Technical glue work: code reviews, mentoring, documentation, coordination. Impact: critical but undervalued; creates bottlenecks for senior quality engineers.
- Shadow backlog: off-the-record fixes and improvements outside the official roadmap. Impact: broken capacity planning, creeping misalignment, trust erosion.
Shadow work isn't bad - it's often the work that truly matters. The problem is when it's invisible. Make it visible, plan for it, and recognise it.
OKRs (Objectives and Key Results)
  • Ady Stokes
OKRs are a directional and aspirational system intended to stretch teams and individuals. They balance ambition with measurability and provide a flexible framework whose interpretation can vary by context.
The classic OKR philosophy emphasises "measurable key results". Where developer experience, quality, or testing leadership hold teams accountable for behaviours and culture, they make room for "qualitative key results". Qualitative key results suit objectives that are hard to quantify (e.g., "Improve team morale", "Strengthen cross-functional collaboration" or, in the case of many new quality engineering leaders, "Build a strong quality culture" or "Build a community of practice" as part of the behaviour sections of company values). This is more often seen in progressive, 'next-gen' organisations.
Qualitative = descriptive, harder to express in numbers. Measurable = can be counted or calculated.
Many companies also encourage personal OKRs focused on skill development or career growth (e.g., "Complete advanced cloud certification"), which may not have an immediate business impact but still align with long-term organisational goals.
Quality Engineer
A quality engineer makes sure quality is built into every stage of software development and everything around it. They're not gatekeepers: they work closely with software engineers or developers, product managers, and even operations to prevent defects, improve observability, and support continuous delivery. Quality engineers bring technical skills like writing code, building test frameworks, and integrating tools into the CI/CD pipeline for fast, actionable feedback, alongside exploratory testing and observability. Their mission is simple: make sure quality and risk are not an afterthought.
Quality Engineer
A Quality Engineer is a pivotal role in engineering quality into every layer of the software development lifecycle. Rather than acting as a gatekeeper, the QE partners deeply with developers, product managers, and operations teams to proactively prevent defects, enhance system observability, and drive continuous delivery of reliable, scalable software.
The "engineering" in Quality Engineering is not metaphorical; it's grounded in technical fluency, systems thinking, and automation craftsmanship. QEs read and reason about code, architect robust test frameworks, and integrate tooling into CI/CD pipelines to provide fast, actionable feedback. They surface insights through monitoring and trend analysis, and influence architectural and deployment decisions to reduce risk and improve resilience.
Ultimately, Quality Engineers design for quality as a shared, systemic responsibility - not a phase, a checklist, or an afterthought - aiming to remove unnecessary friction where possible.
WebDriver BiDi
WebDriver BiDi (Bidirectional) is a communication protocol that allows two-way communication between a test automation client and a web browser, unlike the traditional WebDriver protocol, which is primarily request-response based. Bidirectional communication removes the limitations of request-response, enabling more reliable and comprehensive web test automation: for example, the browser can push console log entries, network events, and JavaScript errors to the client as they happen.
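To make the difference concrete, here is a sketch of the JSON message framing BiDi uses over its WebSocket: commands carry an "id" so replies can be matched to requests, while browser-initiated events (e.g. log.entryAdded) arrive without one. Connection handling is omitted; only the framing is shown.

```python
import json

def command(cmd_id: int, method: str, params: dict) -> str:
    """Serialise a client -> browser BiDi command."""
    return json.dumps({"id": cmd_id, "method": method, "params": params})

def is_event(raw: str) -> bool:
    """Browser -> client messages without an 'id' are pushed events,
    not responses to a command - the 'bidirectional' part."""
    return "id" not in json.loads(raw)
```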
Commission Testing
Commissioning Testing (also known as Commission Testing) is the final phase of testing conducted before a system or software is formally released for operational use. It validates that the system meets all specified requirements and performs reliably within its intended real-world environment. This type of testing is especially relevant for enterprise platforms integrated into larger operational ecosystems, and is common in regulated domains such as insurance, healthcare, and building management systems.
Key characteristics include:
- End-to-end validation: ensures complete system functionality, including hardware, software, interfaces, integrations, and reporting.
- Real-world conditions: executed in the actual operational environment or a close simulation.
- Compliance and safety checks: includes regulatory, safety, and performance verifications.
- Operational readiness: confirms the system is ready for use by end users or operators.
- Controlled use of live data: involves strict processes for handling and validating live or production-like data.
Codeless Test Automation
Creating and executing automated test cases without writing traditional code. Instead, testers use visual interfaces, drag-and-drop tools, or AI-powered platforms to design tests. These tools often leverage machine learning to auto-generate scripts and adapt to UI changes, making automation more accessible to non-programmers.