We have all encountered frustrating scenarios where false negative results disrupt the accuracy of automation run reports. These false negatives may arise from minor, unannounced changes in the build, from application or software instability, or from unexpected delays that occur before an automated step reaches its validation point, among other factors.
My team and I were working on a multi-platform automation testing project where validating false negatives after test execution proved nearly impossible. Assigning these validations to manual testers was not a viable option, as it would defeat the fundamental purpose of automation testing.
To address this challenge, we designed and implemented a new automation architecture called the AOV Framework (Automated Observation and Validation Framework). This framework enables each test case to store its observations and detailed execution data for every step. In the event of a failure—potentially caused by a false negative—the test can automatically resume from the failed step, rather than restarting the entire suite. If the test passes successfully, the stored observation data is cleared to optimize memory usage.
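To make the idea concrete, here is a minimal sketch of per-step observation storage with resume-from-failure. The `ObservationStore` class, the on-disk JSON layout, and the `run_test` helper are illustrative assumptions, not the framework's actual API; they only show the general mechanism of recording each step's result, resuming from the failed step on a rerun, and clearing the data once the test passes.

```python
import json
from pathlib import Path


class ObservationStore:
    """Persists per-step observations so a failed test can resume later."""

    def __init__(self, test_id: str, root: Path = Path(".aov_observations")):
        root.mkdir(exist_ok=True)
        self.path = root / f"{test_id}.json"

    def load(self) -> dict:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def save_step(self, step_name: str, data: dict) -> None:
        record = self.load()
        record[step_name] = data
        self.path.write_text(json.dumps(record, indent=2))

    def clear(self) -> None:
        # Free storage once the test passes, per the framework's cleanup rule.
        self.path.unlink(missing_ok=True)


def run_test(test_id: str, steps: list) -> bool:
    """Runs (name, action) steps in order, skipping steps already recorded as passed."""
    store = ObservationStore(test_id)
    completed = store.load()

    for name, action in steps:
        if completed.get(name, {}).get("status") == "passed":
            continue  # resume point: skip steps that already succeeded
        try:
            observation = action()  # each step returns its observation data
            store.save_step(name, {"status": "passed", "observation": observation})
        except Exception as exc:
            store.save_step(name, {"status": "failed", "error": str(exc)})
            return False  # keep stored data so a later rerun can resume here

    store.clear()  # test passed: discard observations to save space
    return True
```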
Additionally, the framework allows false-negative test cases to rerun automatically after the main automation cycle is complete. This approach significantly reduces the system load during reruns, minimizing the risk of application or build instability that might have contributed to the original false negative.
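The deferred rerun can be sketched on top of the same helper. The `run_suite` function below is hypothetical, but it illustrates the idea: failures are queued during the main cycle and retried sequentially afterwards, when the system is under lighter load, resuming from the failed step via the stored observations.

```python
def run_suite(tests: dict) -> dict:
    """Runs the full suite first, then reruns failed tests one at a time.

    `tests` maps a test id to its ordered list of (step_name, action) pairs.
    """
    results = {}
    rerun_queue = []

    # Main cycle: run everything once, queuing failures for later.
    for test_id, steps in tests.items():
        if run_test(test_id, steps):
            results[test_id] = "passed"
        else:
            rerun_queue.append(test_id)

    # Rerun cycle: retry suspected false negatives sequentially.
    for test_id in rerun_queue:
        results[test_id] = "passed" if run_test(test_id, tests[test_id]) else "failed"

    return results
```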
Note: This framework was specifically developed for application feature testing, not for application stability testing.