How to write automation that represents issues PMs would care about fixing

26 Mar 2026

The following guidance is provided in the project's AGENTS.md file.

Test Titles

Titles must describe the expected user behaviour, written from the user's perspective:

test('As a user, I can create a new project', ...)
test('As a user, I can delete an existing client', ...)
test('As a user, I cannot submit the form without a required field', ...)

One Assertion Per Test

Each test asserts one behaviour. If you find yourself writing multiple expect() calls that each test a different thing, split them into separate tests.

Custom Expect Messages

All assertions must include a descriptive failure message as the first argument:

expect(response.status(), 'Expected 201 Created when creating a new project').toBe(201);
expect(heading, 'Page heading should confirm project was saved').toBeVisible();

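To illustrate the one-assertion rule, here is a minimal, self-contained sketch. The test() and expect() helpers below are simplified stand-ins for Playwright's API (not the real @playwright/test implementation), and the status values are hypothetical:

```typescript
// Minimal stand-ins for Playwright's test() and expect() -- assumptions
// for a self-contained sketch, not the real @playwright/test API.
const results: string[] = [];

function test(title: string, fn: () => void): void {
  try {
    fn();
    results.push(`PASS ${title}`);
  } catch (err) {
    results.push(`FAIL ${title}: ${(err as Error).message}`);
  }
}

function expect(actual: unknown, message: string) {
  return {
    toBe(expected: unknown): void {
      if (actual !== expected) throw new Error(message);
    },
  };
}

// Hypothetical API responses for the sketch.
const createStatus = 201;
const deleteStatus = 204;

// Bad: one test covering both behaviours -- a failure would not say
// whether creation or deletion broke.
// Good: one behaviour per test, each with its own descriptive title,
// so a red test names exactly what is wrong.
test('As a user, I can create a new project', () => {
  expect(createStatus, 'Expected 201 Created when creating a project').toBe(201);
});

test('As a user, I can delete an existing project', () => {
  expect(deleteStatus, 'Expected 204 No Content when deleting a project').toBe(204);
});

console.log(results.join('\n'));
```

Splitting this way also means one broken behaviour does not mask the other: both tests always run, so a report shows the full picture rather than stopping at the first failed assertion.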
No Conditionals

Tests must not branch. No if, switch, or ternary expressions in test bodies. If setup differs between cases, use separate tests or test.beforeEach.

No Magic Variables

All values must be named. Use constants, faker, or values from the .env config:
// Good
const testProjectName = faker.commerce.productName();
await projectPage.fillName(testProjectName);

// Bad
await page.fill('input', 'abc123');
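To illustrate the no-conditionals rule, here is a minimal sketch: instead of branching on a role inside one test body, each role gets its own branch-free test. The landingPageFor helper and the routes are hypothetical stand-ins for the app under test, not part of any real API:

```typescript
// Hypothetical stand-in for the app under test: admins land on the
// dashboard, viewers on the reports page. Branching is fine HERE,
// in production code -- just not inside test bodies.
type Role = 'admin' | 'viewer';

function landingPageFor(role: Role): string {
  return role === 'admin' ? '/dashboard' : '/reports';
}

// Tiny assertion helper (assumption, not the real Playwright expect()).
function assertEquals(actual: string, expected: string, message: string): void {
  if (actual !== expected) throw new Error(message);
}

// Bad (not shown): one test with `if (role === 'admin') { ... } else { ... }`.
// Good: two separate, branch-free tests -- no if/switch/ternary in the bodies.
function testAdminLandsOnDashboard(): void {
  assertEquals(
    landingPageFor('admin'),
    '/dashboard',
    'Admin should land on the dashboard after signing in',
  );
}

function testViewerLandsOnReports(): void {
  assertEquals(
    landingPageFor('viewer'),
    '/reports',
    'Viewer should land on the reports page after signing in',
  );
}

testAdminLandsOnDashboard();
testViewerLandsOnReports();
console.log('both role tests passed');
```

In real Playwright, the role-specific setup (signing in as admin or viewer) would move into test.beforeEach inside two separate test.describe blocks, keeping each test body a straight line of steps and assertions.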
Emily O'Connor
Principal Quality Engineer
She/Her

Technical leader with a sixth sense for bugs. Avid learner, passionate about translating "dev-speak" to help teams adopt automation and AI-accelerated quality engineering. I believe great software starts with user-focused problem solving, and that automation should surface the bugs PMs actually care about fixing.
