Avoiding The Million Dollar Question: How Did The QA Team Miss This Defect?

We've all been asked the question, but how can we improve things so it isn't asked again?

One of the primary goals of any good QA team is to make sure that no defect evades detection long enough to get into production. Finding defects at the right time, ideally early in the software development life cycle, saves time and money and helps keep your organisation’s customers happy with the product.

Sometimes defects make it to production. When this happens, the most common question you hear is ‘How did QA miss this defect?’ and it can be tough for the test team to defend their work.

As we all know, testing cannot guarantee that any product is 100% defect free. That’s why every mobile app or OS has multiple versions, each of which includes fixes for defects that were missed in testing. Of course, it is not only members of the QA team who are responsible for defects. Every single individual in the organisation is accountable for the quality of their own deliverables.

This article discusses the possible reasons why defects are missed during the testing or pre-production phase, and describes the mitigations the QA team can adopt to help ensure a quality product is always delivered.

Key Reasons For Defect Misses

Below are some of the key reasons why defects make it into production:

  • Customer scenarios are not covered
  • Requirements are not clear or are otherwise inadequate
  • Test impact for bugs is missing or incorrect
  • Agreeing to defects marked ‘As Designed’ without valid proof
  • Accepting code changes after QA sign-off
  • Test automation is not robust
  • Incorrect defect severity and priority
  • Alpha and beta testing are not performed

Reasons For Defect Misses And Preventative Measures

The table below gives an overview of some of the primary reasons for defect misses and how to avoid them.

| Reason For Defect Miss | Preventative Measure |
| --- | --- |
| Customer flows or scenarios are not incorporated in the test suite | Make sure test scenarios are created from customer flows and from defects logged by customers in previous releases |
| Alpha and beta testing are not performed | These two testing types elicit important customer feedback and simulate real user behaviour |
| Requirements are not clear or are inadequate | Gather and review requirements thoroughly; log a defect for any requirements discrepancy or lack of clarity |
| Test impact for defects is not logged or updated by the dev team | Test impact statements from the dev team help identify possible regression areas, so the QA team can focus more precisely and increase coverage there |
| Test automation is not robust | Build a robust automation framework and automate as many scenarios as possible |
| Incorrect triaging of defects (for example, critical defects moved to a future release or the backlog) | The triage team should analyse the impact of each defect and triage it accordingly |
| Leadership team is not updated on critical defects | Any defect with high severity and priority needs to be communicated to the client and release management team immediately |
| Customer defects from the previous release are not logged in the correct defect logging tool | A single defect management tool should be used to log defects, no matter which member of the team logs them or whether the defect comes from a customer |
| Last-minute changes occur after QA sign-off | No changes should take place after code freeze and QA sign-off |
| No proper requirements traceability matrix | A requirements traceability matrix should be maintained to help ensure complete test coverage |
| Agreeing to defects marked ‘As Designed’ without valid proof | All defects marked ‘As Designed’ by the developer need valid documentation attached to the defect for future reference |
| Pre-production environment differs from the production environment | The pre-production environment should be an exact replica of the production environment |
| Defect was assigned an incorrect priority or severity | A dedicated SPOC (Single Point of Contact) should review defects at regular intervals |
| Not enough time for testing | The QA team should be given enough time to test all the features, as per the test estimates provided |
| Handling ad hoc requests during test execution | Allocate time for each task based on priority, without over-committing |
| Defect was never reviewed for priority or for fix time and effort | All active defects need to be reviewed and fixed as per the release criteria |
| Exploratory testing / buddy testing is not performed | Perform buddy testing after any critical defect is fixed to identify regressions |
| Dependencies on third-party solutions break the product | Any third-party technology used in the product should be evaluated for risk and for its use within your organisation |
| No access to view the defects logged by customers for previous releases | All team members should have access to customer-logged defects, so the team can understand exactly how to reproduce them |
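One preventative measure above, the requirements traceability matrix, lends itself to a quick automated check: given a mapping of test cases to the requirements they cover, you can flag any requirement no test touches. The sketch below is purely illustrative; the requirement and test case IDs are hypothetical, and real teams would typically pull this data from their test management tool.

```python
# Illustrative traceability check with hypothetical REQ-* / TC-* IDs.

def uncovered_requirements(requirements, test_coverage):
    """Return requirement IDs not referenced by any test case."""
    covered = {req for reqs in test_coverage.values() for req in reqs}
    return sorted(set(requirements) - covered)

requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
test_coverage = {
    "TC-101": ["REQ-1"],
    "TC-102": ["REQ-2", "REQ-3"],
}

print(uncovered_requirements(requirements, test_coverage))  # ['REQ-4']
```

Run as part of the nightly build or before QA sign-off, a check like this turns “no proper requirements traceability matrix” from a retrospective finding into an early warning.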


Frequent Problems With Defects Logged By Customers

  1. Testing is performed only on older versions of browsers, operating systems, or mobile devices
  2. The issue occurs only intermittently, so root cause analysis cannot be performed
  3. The defect is specific to a particular customer and cannot be reproduced in test environments
  4. ‘Steps to Reproduce’ are not clear, or the defect description is empty and includes no attachments or screenshots


In this article, we have discussed the different reasons why defects can be missed by the QA team. We’ve also mentioned some preventative measures you can take so that defects are caught early during the life cycle. That being said: quality is everyone’s responsibility, and customer satisfaction should be our mantra.

For Further Information

Venkat Ramesh Atigadda

QA Lead

Venkat Ramesh Atigadda works as a QA Lead at Tata Consultancy Services Limited and has also performed the role of Solution Developer in the Assurance Center of Excellence (CoE). He has worked across industry domains including healthcare, energy, and insurance, and has published testing articles in Testing Experience and Testing Circus magazines.
