We have all encountered frustrating scenarios where false negatives undermine the accuracy of automation run reports. These false negatives can arise from minor, unannounced changes in the build, application or software instability, or unexpected delays in the execution of automated steps before they reach the validation stage, among other factors.
My team and I were working on a multi-platform automation testing project where validating false negatives after test execution proved to be nearly impossible. Assigning these validations to manual testers was not a viable option, as it would defeat the fundamental purpose of automation testing.
To address this challenge, we designed and implemented a new automation architecture called the AOV Framework (Automated Observation and Validation Framework). This framework enables each test case to store its observations and detailed execution data for every step. In the event of a failure—potentially caused by a false negative—the test can automatically resume from the failed step, rather than restarting the entire suite. If the test passes successfully, the stored observation data is cleared to optimize memory usage.
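To make the resume-from-failed-step idea concrete, here is a minimal sketch in Python. All names (`StepCheckpointRunner`, the JSON state file layout) are illustrative assumptions, not the AOV Framework's actual API: the point is only to show per-step observations being persisted on failure, skipped-past on resume, and cleared on a passing run.

```python
import json
from pathlib import Path

class StepCheckpointRunner:
    """Illustrative sketch of the AOV idea: persist per-step observations
    so a failed test can resume from the failing step instead of
    restarting the whole test. Names here are hypothetical."""

    def __init__(self, test_name, state_dir="aov_state"):
        self.state_file = Path(state_dir) / f"{test_name}.json"
        self.state_file.parent.mkdir(parents=True, exist_ok=True)

    def _load(self):
        if self.state_file.exists():
            return json.loads(self.state_file.read_text())
        return {"last_passed": -1, "observations": []}

    def run(self, steps):
        """steps: list of (name, callable) pairs; each callable returns an
        observation or raises on failure."""
        state = self._load()
        start = state["last_passed"] + 1  # resume after the last passed step
        for idx in range(start, len(steps)):
            name, action = steps[idx]
            try:
                observation = action()
            except Exception as exc:
                # Keep the stored observations so a later rerun resumes here.
                state["observations"].append({"step": name, "error": str(exc)})
                self.state_file.write_text(json.dumps(state))
                return False
            state["last_passed"] = idx
            state["observations"].append({"step": name, "result": observation})
        # Test passed: clear the stored observation data to free space.
        self.state_file.unlink(missing_ok=True)
        return True
```

On a rerun after a failure, previously passed steps are skipped entirely, so only the suspect step (and anything after it) executes again.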
Additionally, the framework allows false-negative test cases to rerun automatically after the main automation cycle is complete. This approach significantly reduces the system load during reruns, minimizing the risk of application or build instability that might have contributed to the original false negative.
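The deferred-rerun flow can be sketched as follows. This is an assumption-laden simplification, not the framework's real scheduler: `run_suite_with_deferred_reruns` and its inputs are hypothetical names, and the sketch just shows failures being queued and retried serially once the main cycle has finished, when system load is lower.

```python
def run_suite_with_deferred_reruns(tests):
    """Illustrative sketch: run the full suite first, then rerun
    suspected false negatives one at a time after the main cycle,
    reducing load during the retry pass.

    tests: list of (name, callable) pairs; each callable returns
    True (pass) or False (fail)."""
    suspected_false_negatives = []
    results = {}
    for name, test in tests:
        passed = test()
        results[name] = passed
        if not passed:
            # Defer the retry instead of rerunning immediately.
            suspected_false_negatives.append((name, test))
    # Main cycle is done: retry failures serially under lighter load.
    for name, test in suspected_false_negatives:
        results[name] = test()
    return results
```

A test that only failed because of transient instability should pass on this quieter second pass, while a genuine defect fails both times and stays red in the report.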
Note: This framework was specifically developed for application feature testing, not for application stability testing.
Samarth Buch
Lead software development engineer, QA
He/Him
Experienced Lead QA Automation Engineer with 10+ years of experience, driving innovation across large-scale mobile, web, Pixelated Streaming, VR, and connected device platforms.
Simon Tomes
Thanks for posting, Samarth. Congrats on building what sounds like a helpful framework.
Out of interest, what's the difference between application feature testing and application stability testing?
Would application stability testing include things like trying to force a crash to see how it behaves? I guess, kind of under the wide performance testing bracket.