Negative tests are always tricky to automate; they are different from automating a positive (happy-path) test. A negative test is often productive when first developed, but over time, as the application under test changes, it can start producing false positives or become irrelevant altogether. In this talk, I would like to share my experiences, using real-life project examples, that helped me arrive at concrete negative assertions: assertions that were relevant to the expected behavior of the application. I will use actual and/or pseudo-code examples to demonstrate the assertions. The talk describes the lessons I learned over time that resulted in better negative assertions.
Negative testing ensures the application behaves gracefully with invalid user input or unexpected user behavior. It improves product quality and finds its weak points. The difference between a positive and a negative test is that, for the latter, throwing an exception is not unexpected behavior.
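To illustrate that difference, here is a minimal sketch in plain Python (no test framework assumed). The function `parse_quantity` is hypothetical, standing in for any code under test:

```python
def parse_quantity(value: str) -> int:
    """Parse a non-negative integer quantity; reject anything else."""
    qty = int(value)  # raises ValueError for non-numeric input
    if qty < 0:
        raise ValueError(f"quantity must be non-negative, got {qty}")
    return qty

# Positive assertion: valid input produces the expected value.
assert parse_quantity("3") == 3

# Negative assertion: for invalid input, the exception IS the expected outcome.
try:
    parse_quantity("-1")
except ValueError:
    rejected = True
else:
    rejected = False
assert rejected, "invalid input should have raised ValueError"
```

In a real suite the try/except block would typically be replaced by the framework's idiom for this, such as `pytest.raises` or `unittest`'s `assertRaises`.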
Negative testing is the process of applying as much creativity as possible to validate the application against invalid data. Its intended purpose is to check whether errors are shown to the user where they are supposed to be, and whether a bad value is otherwise handled gracefully. The functional reliability of an application or piece of software can be quantified only with effectively designed negative scenarios. Negative testing not only aims to bring out potential flaws that could seriously impact the consumption of the product as a whole, but can also be instrumental in determining the conditions under which the application crashes. Finally, it ensures that sufficient error validation is present in the software.
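The point about checking that errors are shown where they are supposed to be is what makes a negative assertion "concrete": assert the specific, user-visible outcome rather than merely that something failed. A sketch, using a hypothetical validator `validate_email` that returns an `(ok, message)` pair:

```python
import re

def validate_email(value: str):
    """Return (True, "") for a plausible address, else (False, reason)."""
    if "@" not in value:
        return False, "Email address must contain '@'"
    if re.search(r"\s", value):
        return False, "Email address must not contain whitespace"
    return True, ""

# Vague negative assertion (ages badly): any failure at all passes,
# including failures for the wrong reason.
ok, _ = validate_email("not-an-email")
assert not ok

# Concrete negative assertion: pin the exact error shown to the user,
# so a regression in the message, or a failure on the wrong code path,
# is caught.
ok, message = validate_email("not-an-email")
assert (ok, message) == (False, "Email address must contain '@'")
```

The vague form keeps passing even if the validator starts rejecting input for an unrelated reason; the concrete form stays tied to the expected behavior of the application, which is exactly why it remains relevant as the product evolves.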