Should You Create Automation For Each Negative API Scenario?

Learn whether we should automate all our negative API scenarios in this article from Mark Winteringham on The Testing Planet

By Mark Winteringham

A regular question I see pop up on the API Testing channel of the Ministry of Testing Slack (and one that I’ve been asked on occasion) is what to do about automating API checks for ‘negative’ scenarios. With technology like HTTP, you can rapidly create many combinations of requests, and this can contribute to an overwhelming feeling of not knowing what to automate at the API layer, or how.

To help establish a solution, let’s first dig deeper into what is typically meant by a ‘negative’ scenario. Say, for example, we have a web API with a /validate endpoint that receives an email address and determines whether it’s a valid email address. The raw HTTP request looks something like this:


POST /validate HTTP/1.1
Host: localhost:8080
Content-Type: application/json

{
    "email": "test@example.com"
}

We’ve decided to check that the endpoint behaves correctly by creating some automation that sends this request and asserts that a positive response is returned, something similar to this Supertest example:

// Requires the Supertest library (npm install supertest) and a
// Mocha-style test runner providing the global 'it'
const request = require('supertest');

it('accepts a valid email address', () =>
    request('http://localhost:8080')
        .post('/validate')
        .set('Content-Type', 'application/json')
        .send({
            "email": "test@example.com"
        })
        .expect(200));

Once this automation is established, we might start thinking about negative scenarios around incorrect email formats. The problem is that there are many ways in which an email can be incorrectly formatted: an incorrect domain, invalid characters, characters added in the wrong place, and so on. This brings us back to our initial problem of knowing what to automate and how.

Patterns of automating negative API tests

The initial temptation is to duplicate our first API check, creating new versions of it that check different combinations of invalid emails. For example:

// As before, this assumes Supertest and a Mocha-style test runner
const request = require('supertest');

it('rejects an email missing the dot in its domain', () =>
    request('http://localhost:8080')
        .post('/validate')
        .set('Content-Type', 'application/json')
        .send({
            "email": "test@examplecom"
        })
        .expect(400));

Notice how the check looks almost identical, with the exception of the data in the request payload and the assertion. As mentioned earlier, HTTP is a double-edged sword when it comes to creating API automation: it’s easy to keep the same model of a request and quickly update the values within it to create new checks.

However, this opens up a rabbit hole of potential permutations for us to fall down. If we kept creating a new automated check for each scenario, we would end up with a lot of code to maintain. This is why so many people ask whether they should automate each permutation.

Sometimes it’s suggested that the answer is to improve the design of our automation so that it becomes data-driven, or to use random data generator tools such as Faker. Whilst that can reduce the amount of duplicate code, it doesn’t reduce the number of automated checks being performed. In fact, it might encourage us to add more, because adding them becomes so cheap. That means extended execution times for our automation and more results to process, and it still doesn’t answer the question of whether we actually should automate every permutation.
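To make that concrete, here is a minimal sketch of the data-driven approach, assuming the same /validate endpoint, Supertest, and a Mocha-style runner. The list of invalid emails is illustrative, not taken from a real specification:

const request = require('supertest');

// Hypothetical permutations of badly formatted emails
const invalidEmails = [
    'test@examplecom',   // missing dot in the domain
    'test@@example.com', // doubled @ character
    '@example.com',      // missing local part
    'test@example.'      // trailing dot in the domain
];

describe('POST /validate with invalid emails', () => {
    invalidEmails.forEach((email) => {
        // One generated check per entry: less duplicate code,
        // but still one check executed per permutation
        it(`rejects ${email}`, () =>
            request('http://localhost:8080')
                .post('/validate')
                .set('Content-Type', 'application/json')
                .send({ email })
                .expect(400));
    });
});

Each entry in the list still becomes its own executed check, which is exactly the execution-time and results-processing cost described above.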

Let risk guide you

Just because we can automate something doesn’t mean we should. In Automation in Testing, one of our principles is:

Focusing on risk over coverage

By this we mean that our motivation for automating a specific event or scenario is driven by risk, rather than by attempting to attain a coverage metric such as test cases automated, code paths covered or combinations considered. When we build automated checks, we want them to support us in our daily efforts, which means being deliberate and targeted with our automation: identifying potential risks, then choosing which ones to automate, rather than feeling obligated to automate every permutation because we can.

The key to deciding whether to automate something, or whether existing automated checks are valuable, is to ask ourselves the question:

What risk will I be mitigating?

Or

What risks are these checks mitigating?

If we’re unable to answer these questions, or if the risk we identify is already covered or isn’t one we’re concerned about, then perhaps what we want to automate isn’t going to add any value beyond what we have created already, and we should spend our time automating something else.
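One way to keep that question visible is to record the risk alongside the check itself. The sketch below is an illustration rather than a prescribed pattern, and the risk description is an assumption:

const request = require('supertest');

// Risk being mitigated (assumed for illustration): emails with no domain
// pass validation and corrupt downstream mailing lists
it('rejects an email with no domain', () =>
    request('http://localhost:8080')
        .post('/validate')
        .set('Content-Type', 'application/json')
        .send({ email: 'test@' })
        .expect(400));

If a check like this can’t name the risk it mitigates, that’s a prompt to question whether it belongs in the suite at all.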

Ultimately, how you implement your automation is secondary to the choices you make. Successful automation requires us to ask the question, “Why are we automating this?” If we have a valid reason, then we can use data-driven techniques or similar to create a whole range of valuable automated API checks with different permutations. But if the why is missing, then the answer to the question “Should you create automation for each negative API scenario?” should be “No”.

Mark Winteringham

Tester, Toolsmith, Author and Instructor

Mark Winteringham is a tester, toolsmith and author of AI-Assisted Testing and Testing Web APIs, with over ten years of experience providing testing expertise on award-winning projects across a wide range of technology sectors, including BBC, Barclays, UK Government and Thomson Reuters. He is an advocate for modern risk-based testing practices and trains teams in automation, Behaviour Driven Development and exploratory testing techniques. He is also the co-founder of Ministry of Testing Essentials, a community raising awareness of careers in testing and improving testing education. You can find him on Twitter @2bittester or at mwtestconsultancy.co.uk.


