By Mark Winteringham
A regular question I see pop up in the API Testing channel of the Ministry of Testing Slack (and one that I've been asked on occasion) is what to do about automating API checks for "negative" scenarios. With technology like HTTP, it's easy to create many combinations of requests quickly, and this can contribute to an overwhelming feeling about just what to automate on an API layer, and how.
To help establish a solution, let's first dig deeper into what is typically meant by a "negative" scenario. Say, for example, we have a web API with a /validate endpoint that an email address is sent to, so the API can determine whether it's a valid email address. The raw HTTP request looks something like this:
POST /validate HTTP/1.1
Host: localhost:8080
Content-Type: application/json

{
   "email": "test@example.com"
}
We've decided to check that the endpoint behaves correctly by creating some automation that sends this request and asserts that a positive response is returned, ending up with something similar to this Supertest example:
const request = require('supertest');

request('http://localhost:8080')
  .post('/validate')
  .set('Content-Type', 'application/json')
  .send({
    email: 'test@example.com'
  })
  .expect(200);
Once this automation is established, we might start thinking about negative scenarios around incorrect email formats. The problem is that there are many ways an email can be incorrectly formatted: an incorrect domain, invalid characters, characters added in the wrong place, and so on. This brings us back to our initial problem of knowing what to automate and how.
Patterns of automating negative API tests
The temptation initially is to start duplicating our initial API check, creating new versions of it that check different combinations of invalid emails. For example:
request('http://localhost:8080')
  .post('/validate')
  .set('Content-Type', 'application/json')
  .send({
    email: 'test@examplecom'
  })
  .expect(400);
Notice how the check looks almost identical, with the exception of the data in the request payload and the assertion. As mentioned earlier, HTTP is a double-edged sword when it comes to creating API automation: it's easy to keep the same model of a request and quickly update the values within it to create new checks.
However, this presents a rabbit hole of potential permutations that we can fall down. If we kept creating a new automated check for each scenario, we would end up with a lot of code to maintain. This is why so many people ask whether they should automate each permutation.
Sometimes the suggested answer is to improve the design of our automation so that it becomes data-driven, or to use random data generators such as Faker. Whilst that can reduce the amount of duplicated code, it doesn't reduce the number of automated checks being performed. In fact, it might encourage us to add more, because adding them becomes so cheap. The result is longer execution times for our automation, more results to process, and still no answer to whether we actually should automate every permutation.
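To make the trade-off concrete, here is a sketch of what the data-driven shape tends to look like. The specific invalid emails, expected statuses, and the buildChecks helper below are illustrative assumptions, not part of any real suite:

```javascript
// Data-driven sketch: one request template, many payloads.
// The invalid emails and expected statuses are illustrative assumptions.
const cases = [
  { email: 'test@example.com',  expectedStatus: 200 }, // valid baseline
  { email: 'test@examplecom',   expectedStatus: 400 }, // incorrect domain
  { email: 'test@@example.com', expectedStatus: 400 }, // invalid character
  { email: '@example.com',      expectedStatus: 400 }, // missing local part
];

// Expand each case into a description of the check to run, so adding a
// permutation is one line of data rather than a duplicated code block.
function buildChecks(cases) {
  return cases.map(({ email, expectedStatus }) => ({
    method: 'POST',
    path: '/validate',
    body: { email },
    expectedStatus,
  }));
}

// With Supertest, a runner would loop over buildChecks(cases), sending
// each body to /validate and asserting the expected status code.
```

Notice that the duplication is gone, but every entry in `cases` still becomes a check that has to run and be reported on, which is exactly the problem described above.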
Let risk guide you
Just because we can automate something doesn't mean we should. In Automation in Testing, one of our principles is:
Focusing on risk over coverage
By this we mean that our motivation for automating a specific event or scenario is driven by risk, rather than by attempting to attain a coverage metric such as test cases automated, code paths covered, or combinations considered. When we build automated checks, we want them to support us in our daily efforts, which means being deliberate and targeted with our automation: identifying potential risks, then choosing which to automate, rather than feeling obligated to automate every permutation because we can.
The key to deciding whether to automate something, or whether existing automated checks are valuable, is to ask ourselves:
What risk will I be mitigating?
Or
What risks are these checks mitigating?
If we're unable to answer these questions, or if the risk we identify is already covered or not one we're concerned about, then what we want to automate probably isn't going to add any value beyond what we have already created, and we should spend our time automating something else.
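One lightweight way to keep those questions visible is to record the risk alongside each data-driven case, so that a case which can't name a risk stands out. This is a hypothetical sketch; the case data, risk wording, and the checksWorthAutomating helper are all assumptions:

```javascript
// Hypothetical: each negative case carries the risk it mitigates, making
// "what risk will I be mitigating?" part of the data itself.
const riskDrivenCases = [
  {
    email: 'test@examplecom',
    expectedStatus: 400,
    risk: 'Mistyped domains are accepted and mail to them bounces',
  },
  {
    email: '@example.com',
    expectedStatus: 400,
    risk: 'Addresses with no local part are stored as invalid contacts',
  },
];

// A simple guard: only build checks for cases that state the risk they
// mitigate, so unjustified permutations never make it into the suite.
function checksWorthAutomating(cases) {
  return cases.filter(
    ({ risk }) => typeof risk === 'string' && risk.trim().length > 0
  );
}
```

A case added without a `risk` field is filtered out, prompting a conversation about whether it belongs in the suite at all.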
Ultimately, how you implement your automation is secondary to the choices you make. Successful automation requires us to ask, "Why are we automating this?" If we have a valid reason, then we can use data-driven techniques or similar to create a whole range of automated API checks with different permutations that provide value. But if the why is missing, then the answer to "Should you create automation for each negative API scenario?" should be "No", because it isn't going to deliver any value.