GraphQL image
  • Maithilee Chunduri's profile
GraphQL is a query language for APIs.GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. Unlike most query languages (such as SQL), you don’t use GraphQL to query a particular type of data store (such as a MySQL database). Instead, you use GraphQL to query data from any number of different sources. Developed GraphQL was developed internally by Facebook in 2012 before being publicly released in 2015 Advantages of GraphQL:  Apps using graphQL are fast and stable because GraphQL query return exactly what you need, Not More Not Less.  REST APIs require loading from multiple URLs, GraphQL APIs get all the data your app needs in a single request.  It allows developer to add new fields without impacting existing queries Why it is important for a QA to know about GraphQL? As a QA we should be aware of new tools and technologies which are in high demand. GraphQL is quickly making its place in the industry. Several big companies like Facebook, GitHub, Airbnb, Atlassian, Paypal, Plural sight are listed as GraphQL customers on their Site.What can we test in GraphQL Schemas Resolvers Performance tests  Static type checking  Unit tests Integration tests  End to End tests Load testing
Quality Assurance image
  • Ady Stokes's profile
Quality Assurance is a traditional term you will frequently hear in software development when someone is referring to software testing. It originally came from the world of manufacturing, where it referred to a systematic set of actions designed to ensure that a product, such as a car or a physical item, meets certain quality standards or requirements, like 'a 6mm gap'. Unlike software, which often has requirements that are not defined to such a specific degree. When software development needed a label for its quality activities, they borrowed 'QA', and it stuck for a long time.In many companies, you will find that the software testing team is still called the 'QA team', and that is where the confusion lies. The term implies that the act of testing can somehow 'assure' quality, but that is simply not true. As testers, our job is to find and expose risks and bugs, not to guarantee perfection. You cannot 'test in' quality at the end of a project. Quality must be built in from the very first planning meeting, involving everyone from analysts to developers.Because of this inherent conflict between the word 'assurance' and the reality of finding risk, many modern teams are moving towards more accurate job titles such as Software Tester or Quality Engineer. This better reflects the actual work we do, which is about engineering quality into the process and thinking about product quality from the start, rather than making impossible promises of assuring quality.
BOSCARD image
  • Emily O'Connor's profile
The BOSCARD mnemonic is a tool that can be used to help understand the context around the problem or project. It also serves as the minimum context needed for a one page test strategy.BOSCARD stands for; Background – reason for the project, key stakeholders etc. Objectives – What are you trying to achieve? Good objectives are SMART (Specific, Measurable, Achievable, Realistic and Time-bound). Scope – ensure you define the scope (including what is out of scope). Constraints – consider any constraints or restrictions that limit or place conditions on the project. Assumptions – consider all factors that, for planning purposes, are considered to be true before starting. Risks – outline any risks identified and a quick assessment of each. Deliverables – define what is going to be tangibly produced.
Branching Strategy image
  • Emily O'Connor's profile
A “branching strategy” refers to the strategy that a software development team employs when writing, merging and shipping code in the context of a version control system like Git. Teams adopt a branching strategy because it enable parallel development and keeps the main codebase stable.With branching, each code author works locally in a separate branch focused on a specific task. Branches are published and merged following peer code review. There are two main types of branches; persistent and ephemeral. Persistent branches are long-lived, holding stable, shared code that reflect the state of the project at key stages.  Ephemeral branches are short-lived and focus on specific development tasks. Engineers create them from a persistent branch and delete them after merging.  Example An example branching strategy is GitFlow which supports structured, multi-stage software development using a pre-defined set of persistent and ephemeral branches. There are three persistent branches. The main branch contains production-ready code. Teams tag it for releases (e.g., v2.0.1) and often configure CD pipelines to deploy it automatically. The develop branch acts as an integration branch. Developers merge completed feature branches into develop for staging and testing. A release/* branch stages code for production release. Teams fork a release/* branch from develop to stabilize a version before release. Only bug fixes, documentation updates, and final QA changes are allowed in a release branch. There are also ephemeral branches. A feature/* branch isolates work for a new feature, enhancement, or experiment. A developer branches a feature/* branch from develop, works on it independently, and merges changes back after review and testing. They then delete the branch. A hotfix/* branch is an emergency fix to main to address critical issues in production. Developers create such a branch main, fix the issues, and merge the changes into both main (to deploy) and into develop (to sync), and deploy immediately. They delete the branch after merging. Gitflow Workflow Create a feature/* branch from develop. Work on the feature. Merge feature branch into develop. When ready to release, create a release/* branch from develop. Finalize the release in the release/* branch. Merge the release into both main and develop. Tag the release on main for versioning. Testers should seek to understand the branching strategy adopted in their teams, to understand changes in the test environment and how code is shipped to end users.
OKRs (Objectives and Key Results) image
  • Ady Stokes's profile
OKRs stand for Objectives and Key Results. It is both a management framework and an individual tool used for setting goals and tracking the outcomes across an organisation, a team or for an individual. In the simplest terms, the Objective is what you want to achieve, and the Key Results are how you measure the progress and the level of success of that Objective.Without making the 'Key Results' part quantifiable, they become almost useless. As an example. An Objective that says, "Make the product more profitable," is just too broad. It is noise that cannot be proven to have been achieved without a specific focus, such as 'by increasing users'. Then adding a Key Result that says, "Increase daily active users from 500 to >800 by [date]" is something you can measure and track. Just like a good requirement, an OKR must not be ambiguous. It needs to combine the Objective with clear Key Results, similar to a requirements acceptance criteria, to confirm that it has actually been met. If the criteria are vague, you will never know whether you've succeeded. Basically, an O, without good KRs, is practically pointless.It is also important to understand the different levels of OKRs. Team goals or team OKRs are more focused on delivering shared value for the product or the business. An individual's OKRs would generally show how they directly contribute to the broader team objectives. They are slightly different from personal goals, which are more generally focused on individual growth and development, and not necessarily tied to business outcomes.
Post-Incident Analysis image
  • Ady Stokes's profile
Post-incident analysis, or post-mortem, looks at what happened after something has gone wrong, mainly in production, but sometimes in testing. It could be due to an outage, a bug in production, configuration issues, or any number of bugs or incidents. It should not be about blame or finger-pointing. It’s about understanding, learning, and adapting. When a team or, to start, an individual conducts a post-incident session, they gather the facts of what occurred, when, and how the issue was discovered, and then explore why it happened. They focus not just on the surface error but on the deeper root causes behind it. Good post-incident work digs past symptoms to find the root cause. It is better done as a team effort, and the best sessions are open, honest, and psychologically safe so everyone can share insights freely. The outcome isn’t just a report. It should be an action plan with practical steps to reduce the chance of recurrence and to strengthen either the software, the processes or both. That might include improving monitoring, changing code review practices, refining tests, or adjusting how incidents are triaged and communicated. Done well, post-incident analysis turns failure into fuel for improvement. It’s a core part of Continuous Quality to use what went wrong and make what comes next better.
Test Bench image
  • Ady Stokes's profile
A test bench is a controlled setup used to check how software or hardware behaves without needing the full system it will eventually run on. It provides an environment where components can be tested, monitored, and adjusted safely before being integrated into the complete product.In the automotive industry, for example, a test bench might replicate the hardware parts of a car and the communication between them. Engineers can connect the engine control unit, sensors, and actuators to simulate real driving conditions. This allows the software to be tested for performance, timing, and safety before it ever reaches a real vehicle.A test bench helps isolate issues early and ensures that each part works correctly in a realistic but repeatable environment. It is an essential step between isolated component testing and full system testing, giving confidence that everything will connect and perform as intended when it moves into the real world. Credit to Andres Gomez Ruiz for introducing this to me.
Self Greening image
  • Rahul Parwal's profile
Self greening is a term used to describe a situation where AI automatically “fixes” or adjusts tests so that they pass, potentially hiding genuine problems that should have caused them to fail. It’s a side effect of AI-driven test maintenance or “self-healing” systems that focus on achieving green (passing) test results, sometimes at the expense of meaningful accuracy or visibility into real issues.Here’s what it looks like: You run an automated accessibility scan. The AI finds two issues. You tell it to keep fixing until tests pass. It changes the test to expect “2 issues found.” The test passes. Now, if a later version adds two new issues, the AI updates the expected count to 4, and the test still passes. You get a clean dashboard.  But the product is breaking quietly. Self-greening gives you a false sense of stability.  Everything looks green, but the tests are no longer testing anything useful.While AI can be guided or configured to restrict healing to safe areas (like element identifiers or path changes), out-of-the-box implementations often risk self-greening, trading genuine test insight for the illusion of stability.You can reduce self-greening risks by: Limiting where AI can apply self-healing (for example, only to locators or paths). Reviewing AI-made changes before merging them. Tracking the difference between AI-fixed and human-reviewed test results. Treating “all-green” reports with healthy skepticism.
Quality image
  • Ady Stokes's profile
Quality, or in our case, 'software quality', is one of those words that everyone uses, is rarely questioned in the moment, but can mean so many different things.  Suman Bala, gives a great example of a Throne and a school chair. Their costs are exponentially different. Their materials, complexity and design are miles apart. Yet, for their purpose, in their context. Their quality can be argued as comparable. The soft and plush throne, with its many adornments, used only a few times, can be perfect for the context in which it is used. The hard and basic metal and plastic chair. Used tens of thousands of times, it can also be perfect in a school environment where it needs to be cheap, hard-wearing, and even stackable. Vastly different, yet equally valuable.In software, quality can depend on such a variety of factors that having a single definition can be more unhelpful than not. A developer might see quality as clean code with no 'smells'. A tester might see it as a multitude of things, from alignment with requirements or acceptance criteria to its accessibility and usability. A user might simply want something that works.Gerald Weinberg described quality as “value to someone”, which has been augmented over time with things like, 'who matters' and, 'at some time'.  Even with context, quality is not always a fixed measure but something shaped by people, context, and purpose. What matters to one group may not matter to another, and that is why software quality always needs a conversation.Stuart Crocker, in a LinkedIn post in 2024, said. "This is now my goto definition of what software testing is, "The exploration and discovery of intended and unintended behaviours in the software we build and their impact on product value—for both customers and the business." With my definition of quality being "The absence of unnecessary friction" These two definitions, working together, help me make quick, useful decisions on what to test and from that testing, what is necessary to improve." That view captures how feelings can be a primary indicator of quality (to someone).Dan Ashby talks about quality in his 2019 blog post on Philip Crosby's 4 absolutes of quality in a software context. He takes those and suggests his own adaptations, the first of which is, "Quality is defined as “correctness” and “goodness” in relation to stakeholder value. This is adapted from "Quality is defined as conformance to requirements", and as Crosby was a quality leader in manufacturing, conformance to requirements is fundamental. For software, we need more.In the end, quality in software is not something I can tell you. It is context-dependent, shaped by many things like purpose, constraints, time, and the people involved. There is no universal checklist. The key is understanding what quality means in your situation, how it will be measured, and who it truly matters to. Some are helped by customer feedback, some by monitoring and observation of software performance. Others look to quantify in various ways. They ask, 'How long does it take to learn and understand a new feature?' Or determine, 'What scales or meters can we employ to measure a 'value'.However you describe or measure software quality will be dependent on the factors you have. Have fun doing it.
Inclusive Design image
  • Rabi'a Brown's profile
Inclusive design is a philosophy with this principle at its heart: if one user or group of users cannot use an application because something about it makes it inaccessible or difficult to use, it is not accessible or usable, period. So we who are members of development teams should always seek to design applications that everyone can use easily. Accessibility standards and usability guidelines are an important part of inclusive design, but inclusive design requires a broader outlook. We should not rely solely on a given set of standards or practices as the be-all and end-all of making our applications easy to use. We should always go further and anticipate how our applications might fall short of being usable by some people. How do we do that? A key guideline in inclusive design is to reflect, BEFORE we design or code, on how solving a usability issue for one group of people can make our applications easier to use for everyone. For example, ensuring that there is adequate color and saturation contrast between a screen background and text in the foreground is an important standard in accessibility for people with low vision. But it also helps those who may have no visual disability but who are trying to read a screen in bright sunlight. Note that “can use an application” means not just being able to read text on a screen and find navigation buttons, for example. It also includes getting intended tasks done in a reasonable amount of time and being satisfied or pleased with the experience. There’s no sense in designing an application that allows most people to complete their tasks when the application bothers them so much with notifications that they eventually abandon it for another.
Quality Engineer image
  • Aj Wilson's profile
A Quality Engineer is a pivotal role in engineering quality into every layer of the software development lifecycle. Rather than acting as a gatekeeper, the QE partners deeply with developers, product managers, and operations teams to proactively prevent defects, enhance system observability, and drive continuous delivery of reliable, scalable software.The “engineering” in Quality Engineering is not metaphorical, it’s grounded in technical fluency, systems thinking, and automation craftsmanship. QEs read and reason about code, architect robust test frameworks, and integrate tooling into CI/CD pipelines to provide fast, actionable feedback. They surface insights through monitoring and trend analysis, and influence architectural and deployment decisions to reduce risk and improve resilience.Ultimately, Quality Engineers design for quality as a shared, systemic responsibility, not a phase, not a checklist or afterthought, aiming to remove unnecessary friction where possible.
IDEA-T image
  • Emily O'Connor's profile
Influencing theDesignEvaluation, andAcquisition ofTools to support testingThe idea-t framework contains 12 heuristic questions and supporting information which are intended to provoke thought and ideas when designing or choosing a test tool.
Subscribe to our newsletter
We'll keep you up to date on all the testing trends.