Are you transitioning into a coding-focused role with little or no coding experience? Have you already done so? It can feel daunting, especially as AI adoption and shift-left practices increase expectations.
While I’m still cautious about relying on AI to generate tests or "heal" tests by itself, working alongside it has revealed powerful ways to improve team efficiency. Unexpectedly, it has become a valuable tool for boosting testers' capabilities and confidence as they begin their automation journey. In this article, I’ll share the approaches I explored, what worked, and what didn’t.
Overview of AI's benefits for testers learning automation
The table below gives a high-level overview of how AI can assist in developing a tester's automation skillset.
| Step | Tester's action | AI's coaching role |
| --- | --- | --- |
| 1 | Identify test cases: write steps within the prompt in plain English | AI converts the steps into an automation script, depending on the framework and prompt |
| 2 | Review script: read AI-generated code | AI explains the logic line by line |
| 3 | Modify and run: tweak script and parameters | AI suggests corrections and optimisations |
| 4 | Learn concepts: ask “Why” questions | AI provides contextual resources |
My approach
Below are the steps I have used to assist testers who have not previously written automation code. These steps give them a clear understanding of the links between manual test cases and automation scripts, let them work at their own pace, and use examples specific to the work they are focusing on. Steps 1 through 4 map directly to the rows of the table above.
In my examples, I have used Playwright, Cucumber, TypeScript, and the Page Object Model (POM), but this approach works for any language and project structure. You will simply need to adjust your prompts. I have attempted to keep the examples as simple as possible; more complex cases may need further adjustment.
I used VS Code and GitHub Copilot for the AI aspect.
Step 1 - Identify test cases: write steps within prompt in plain English
The first step was bridging the gap between a manual test case and the automated test script. Cucumber handles this well, as it keeps everything in human-readable language. Testers can continue to write test cases in the format they are most comfortable with and simply ask AI to translate. I have even had success with direct input of acceptance criteria.
Simple prompt #1
"Acting as a QA automation expert, write me a scenario using the Gherkin syntax for the following test case:
- User navigates to https://qa-practice.netlify.app/auth_ecommerce
- User observes the login page
- User enters a valid username
- User enters a valid password
- User clicks 'login'
- User is logged in
- User observes the dashboard page"
Output
Feature: User Authentication for E-commerce Application
As a user
I want to log in to the e-commerce platform
So that I can access my account and dashboard
Scenario: Successful login with valid credentials
Given user navigates to https://qa-practice.netlify.app/auth_ecommerce
When user observes the login page
And user enters a valid username
And user enters a valid password
And user clicks the login button
Then user is logged in successfully
And user observes the dashboard page

With a short explanation from AI:
This Gherkin scenario follows the Given-When-Then format and maps directly to your test steps. You can save this in a .feature file in your test project and implement the step definitions in your automation framework (Cucumber, Selenium, etc.).
This simple translation allows testers who have never worked with BDD styles to come to grips with the keywords being used and how it all fits together.
Simple prompt #2
My next prompt asked AI to generate code to run a Cucumber scenario:
Can you generate the test code to run this Cucumber scenario using Playwright and TypeScript? Stick to the Given, When, Then format within the test scripts.
Output
import { Given, When, Then, Before, After } from '@cucumber/cucumber';
import { expect } from '@playwright/test';
import { Page, Browser, chromium } from '@playwright/test';

let browser: Browser;
let page: Page;
const baseUrl = 'https://qa-practice.netlify.app/auth_ecommerce';

Before(async function() {
  browser = await chromium.launch();
  page = await browser.newPage();
});

Given('user navigates to https://qa-practice.netlify.app/auth_ecommerce', async function() {
  await page.goto(baseUrl);
});

When('user observes the login page', async function() {
  await page.waitForLoadState('networkidle');
  const loginForm = await page.locator('#login-form');
  await expect(loginForm).toBeVisible();
});

When('user enters a valid username', async function() {
  const usernameField = await page.locator('#username');
  await usernameField.fill('testuser@example.com');
});

When('user enters a valid password', async function() {
  const passwordField = await page.locator('#password');
  await passwordField.fill('ValidPassword123');
});

When('user clicks the login button', async function() {
  const loginButton = await page.locator('#login-btn');
  await loginButton.click();
});

Then('user is logged in successfully', async function() {
  await page.waitForURL('**/dashboard', { timeout: 5000 });
  const authToken = await page.evaluate(() => localStorage.getItem('authToken'));
  expect(authToken).not.toBeNull();
});

Then('user observes the dashboard page', async function() {
  const dashboardHeader = await page.locator('.dashboard-header');
  await expect(dashboardHeader).toBeVisible();
});

After(async function() {
  await browser.close();
});

After the tester is familiar with translating manual test cases to automation scripts, you can give them more advanced concepts, such as adding the POM.
Simple prompt #3
Can you convert the generated Cucumber steps to use the page object model?
Output
import { Page, expect } from '@playwright/test';

export class LoginPage {
  readonly page: Page;
  readonly baseUrl = 'https://qa-practice.netlify.app/auth_ecommerce';
  readonly usernameField = 'input[id="username"]';
  readonly passwordField = 'input[id="password"]';
  readonly loginButton = 'button[id="login-btn"]';
  readonly loginForm = 'form';

  constructor(page: Page) {
    this.page = page;
  }

  async navigateToLoginPage() {
    await this.page.goto(this.baseUrl);
  }

  async verifyLoginPageIsDisplayed() {
    await expect(this.page).toHaveTitle(/Auth|Login/i);
    const loginForm = await this.page.locator(this.loginForm).first();
    await expect(loginForm).toBeVisible();
  }

  async enterUsername(username: string) {
    await this.page.fill(this.usernameField, username);
  }

  async enterPassword(password: string) {
    await this.page.fill(this.passwordField, password);
  }

  async clickLoginButton() {
    await this.page.click(this.loginButton);
  }
}
import { Given, When, Then, setDefaultTimeout } from '@cucumber/cucumber';
import { chromium, Browser, Page } from 'playwright';
import { LoginPage } from './pages/LoginPage';
import { DashboardPage } from './pages/DashboardPage';

setDefaultTimeout(30000);

let browser: Browser;
let page: Page;
let loginPage: LoginPage;
let dashboardPage: DashboardPage;

Given('user navigates to https://qa-practice.netlify.app/auth_ecommerce', async function() {
  browser = await chromium.launch();
  page = await browser.newPage();
  loginPage = new LoginPage(page);
  dashboardPage = new DashboardPage(page);
  await loginPage.navigateToLoginPage();
});

When('user observes the login page', async function() {
  await loginPage.verifyLoginPageIsDisplayed();
});

When('user enters a valid username', async function() {
  await loginPage.enterUsername('testuser@example.com');
});

When('user enters a valid password', async function() {
  await loginPage.enterPassword('ValidPassword123');
});

When('user clicks the login button', async function() {
  await loginPage.clickLoginButton();
});

Then('user is logged in successfully', async function() {
  await dashboardPage.verifyUserIsLoggedIn();
});

Then('user observes the dashboard page', async function() {
  await dashboardPage.verifyDashboardPageIsDisplayed();
  await browser.close();
});

The steps above are generic because of my minimal setup. However, in an active project, GitHub Copilot would look at existing structures, naming conventions, and so on, and base its output on what already exists.
You can give Copilot visibility over certain files or folders to use as examples. Ask it to edit these directly or just run it in ask mode.
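To show how the generated pieces fit together, here is one possible folder layout for the Playwright + Cucumber + POM setup used in these examples. The file and folder names are assumptions for illustration; adjust them to your own project's conventions:

```
features/
  login.feature            # Gherkin scenario from prompt #1
src/
  steps/
    login.steps.ts         # step definitions from prompts #2 and #3
  pages/
    LoginPage.ts           # page objects from prompt #3
    DashboardPage.ts
```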
Step 2 - Review script: read AI-generated code
The tester can now review the code generated by the AI and ask for an explanation of anything they are unfamiliar with.
For example, a prompt such as:
What does this function do?
await page.goto(baseUrl);
would return something along the lines of:
The await page.goto(baseUrl); function navigates the browser to a specified URL.
Breakdown:
page.goto() - Playwright method that instructs the browser to navigate to a given URL
baseUrl - The variable containing the target URL (https://qa-practice.netlify.app/auth_ecommerce)
await - Pauses execution until the page navigation is complete
In context: this line loads the login page so your test can interact with it. The browser waits for the page to load before proceeding to the next test step.
This is powerful for learners, since it breaks down the code simply. Also, if the code fails when you run it, you can copy errors from the terminal and paste them into the prompt window. AI will then give you an explanation of the error and indicate how to resolve it.
Step 3 - Modify and run: tweak script and parameters
After the tester has reviewed the code, they can modify it, so that any generic locators are correct, naming conventions are followed, structure is in line with expectations, and so on.
The tester may also want to modify the output so that steps are reusable and contain the correct actors. And they can add any additional assertions that are needed to ensure the tests pass and fail correctly.
AI can also be used during these modification stages, for example:
- A tester can copy and paste some HTML code into the AI chat window and ask which is the best locator to use for an element. This imparts knowledge of the use of stable locators and how to read HTML structure.
- Testers can ask questions like, "When the user logs on, they see a message on the dashboard. How do I assert against this?" In this manner, testers can see how what they would test manually translates into code.
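To illustrate the first point, the preference order an AI coach typically recommends for stable locators can itself be sketched in code. The following is a hypothetical, simplified ranking (dedicated test IDs, then IDs, then names, then classes); the attribute names and function are my own illustration, not part of any framework:

```typescript
// Hypothetical sketch of the "stable locator" preference order an AI
// coach typically suggests when you paste HTML into the chat window.
type ElementAttributes = { [attribute: string]: string };

// Returns the most stable selector available for the given attributes,
// preferring dedicated test IDs, then IDs, then names, then classes.
function pickStableLocator(attrs: ElementAttributes): string {
  if (attrs['data-testid']) return `[data-testid="${attrs['data-testid']}"]`;
  if (attrs['id']) return `#${attrs['id']}`;
  if (attrs['name']) return `[name="${attrs['name']}"]`;
  // Classes are the least stable choice: styling changes can break them.
  if (attrs['class']) return `.${attrs['class'].split(' ')[0]}`;
  throw new Error('No usable attribute - ask AI to suggest a structural locator');
}

console.log(pickStableLocator({ id: 'login-btn', class: 'btn btn-primary' })); // "#login-btn"
console.log(pickStableLocator({ class: 'btn btn-primary' })); // ".btn"
```

A tester who understands this ranking can then judge whether the locator AI proposes for their pasted HTML is actually a good one.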
Step 4 - Learn concepts: ask “Why” questions
Not only can a student of test automation ask GitHub Copilot why it has generated specific code, they can also ask it to explain an entire file. Even experienced test automators can be confused when trying to understand code written by someone else, especially when they are not familiar with its structure. Copilot can help with that too.
Take the following code:
async findHighestAndLowestUsingLoop() {
  const numbersText = await this.numbersLocator.textContent();
  const cleanedText = numbersText!.replace(/\s+/g, ' ').trim();
  const numbersArray = cleanedText.split(' ').map(Number);
  let smallest = numbersArray[0];
  let largest = numbersArray[0];
  for (let i = 1; i < numbersArray.length; i++) {
    if (numbersArray[i] < smallest) smallest = numbersArray[i];
    if (numbersArray[i] > largest) largest = numbersArray[i];
  }
  console.log('largest number is', largest);
  console.log('smallest number is', smallest);
  return { smallest, largest };
}

You can use a prompt such as this one to help you understand what this piece of code does:
Acting as an experienced test automation expert, can you explain step by step, what the findHighestAndLowestUsingLoop function does?
In this case, it returns the following:
findHighestAndLowestUsingLoop Function - Step-by-Step Breakdown
What It Does
This function retrieves random numbers from a webpage and finds both the smallest and largest numbers using a manual loop approach.
Step-by-Step Process
Step 1: Get the numbers text
Waits for the .data element on the page and extracts all its text content
Example raw output: “5   10  2 8   1” (with inconsistent spacing)
Step 2: Clean up the text
Removes extra spaces (replaces multiple spaces with single spaces)
Trims leading/trailing whitespace
Result: “5 10 2 8 1”
Step 3: Convert to number array
Splits the text by spaces into an array: ["5", "10", "2", "8", "1"]
Converts each string to a number: [5, 10, 2, 8, 1]
Step 4: Initialize smallest and largest
Sets both to the first number as a starting point
Step 5: Loop through remaining numbers
Compares each number starting from index 1
Updates smallest and largest when a new min/max is found
Step 6: Log and return results
Returns an object with both values for use in your tests
The example above is basic, but breaking it down into those steps has empowered many students of test automation. They can easily see the connection between the manual steps and the automated steps, and then they see how that fits in with various automation models and structures.
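A natural follow-up is to ask Copilot whether the loop can be simplified. A typical suggestion is to use the spread operator with Math.min and Math.max. The sketch below shows that idea in a standalone form, with the Playwright page lookup replaced by a plain string parameter so it runs anywhere; it is an illustration of the kind of refactor AI proposes, not the article's original code:

```typescript
// Standalone sketch of the same min/max logic without the manual loop.
// The page interaction is replaced by a string parameter for illustration.
function findHighestAndLowest(numbersText: string) {
  // Same cleanup as the original: collapse whitespace, split, convert to numbers.
  const numbersArray = numbersText.replace(/\s+/g, ' ').trim().split(' ').map(Number);
  // Spread the array into Math.min/Math.max instead of looping manually.
  const smallest = Math.min(...numbersArray);
  const largest = Math.max(...numbersArray);
  return { smallest, largest };
}

console.log(findHighestAndLowest(' 5  10 2   8 1 ')); // { smallest: 1, largest: 10 }
```

Comparing the two versions side by side is itself a useful learning exercise: the learner can ask AI when a manual loop is preferable (for example, when both values must be found in a single pass over very large data).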
You can break it down further by using prompts like:
- “Improve my folder structure by moving all givens into a ‘preconditions’ file”
- “Create me a login helper so that I don’t need to recreate login steps”
- “What other scenarios should I add to the feature file?”
- “Can you show me how to introduce RegEx into my steps to make them more reusable?”
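To give a flavour of the RegEx prompt in the last bullet, the sketch below shows, in a framework-free way, what a regex-based step definition actually does: one pattern matches several concrete step texts and captures the varying parts as parameters. The step wording is hypothetical; in Cucumber the framework performs this matching for you:

```typescript
// Framework-free sketch of how a regex step definition captures parameters.
// One pattern covers both the username and password steps.
const stepPattern = /^user enters "([^"]+)" into the (username|password) field$/;

// Returns the captured groups (the step's arguments), or null if the
// step text does not match this pattern.
function matchStep(stepText: string): string[] | null {
  const match = stepText.match(stepPattern);
  return match ? match.slice(1) : null;
}

console.log(matchStep('user enters "testuser@example.com" into the username field'));
// [ 'testuser@example.com', 'username' ]
console.log(matchStep('user clicks the login button')); // null - handled by a different step
```

Once a tester sees that the captured groups become the arguments of their step function, the jump to reusable, parameterised Cucumber steps is much smaller.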
Students of automation can enter the specific test cases they are automating, and the output is adjusted accordingly. Testers can ask AI to build as much as they need, and they can adjust as their knowledge and confidence grow.
More working examples
Example 1
Tester A had built out a suite of tests in Postman over the years. I highlighted that adding certain assertions to the suite would be beneficial, which left Tester A facing the daunting task of updating 1000+ tests along with the overhead of learning the specific assertions. By walking through the steps in this article, they learned to implement assertions globally, reducing the overhead of the big update. They now have a better understanding of those assertions, and of how to use AI most efficiently to understand them.
Example 2
Tester B had limited automation knowledge because they had always used low- or no-code tools. Even so, after walking through the steps in this article, within a few weeks they had started implementing Playwright tests on their own within the sprint. Within months they were building out new automation in Playwright for each piece of new functionality being developed.
Key benefits of this approach
Keep your valuable testers in your organisation
When you use AI as a code coach, you can help ensure that experienced testers who don't yet know how to automate will pick things up quickly. They can learn at the level specific to them. This will help them feel empowered instead of feeling vulnerable to being replaced by programmers or by AI itself. And then they are likely to stay with your organisation.
Save on training costs
From a financial perspective, using AI as a coding coach reduces training costs, since it supplements or replaces formal courses.
Respond quickly to business needs
By using AI as a coding coach, testers continually improve their skills in alignment with business needs.
Potential pitfalls and how to avoid them
Over-reliance on AI
Testers can sometimes rely too heavily on AI without fully understanding how automation or code works.
A few ways to avoid this potential pitfall are:
- Encourage testers to take ownership of scripts gradually
- Ensure all testers are engaging with pull request (PR) reviews and reviewing their peers' work. A shared Teams or Slack channel for these discussions is a great idea
- Have regular catch-ups to discuss what has been worked on, ask individuals to demonstrate their work, and explain their approaches
Resistance to change
Testers who have been working in a set way for a period of time can be resistant to change, fearing job loss. In this situation you can position AI as a supportive coach, not a replacement for them.
Test automation standards aren't always followed
Your team may have agreed-upon standards or naming conventions when it comes to writing within frameworks. However, AI may not write code using this agreed-upon approach.
Some AI tools, such as GitHub Copilot, have access to your code base. Those tools can look at existing patterns and naming conventions and provide suitable answers based on that. However, you should still ensure that:
- Testers fully understand the current approach to test automation and the structure of the framework
- A rigorous PR review process is in place and that everyone understands it
AI can slow things down
AI can sometimes generate a number of test cases that aren't truly suited to your needs. You can then end up spending more time reviewing and fixing what's been generated than you would have spent writing the code yourself. It is important to discuss these situations openly, so that the whole team is aware this can happen and learns from it.
To wrap up
Using AI as a code coach may work well for a number of reasons. In traditional peer-to-peer learning, subtle obstacles can stand in the way, such as:
- The peer who conducts the coaching may assume the student is more advanced than they truly are, so they do not simplify explanations adequately
- The peer who receives the coaching may not want to speak up if they do not understand the explanation, due to fear of failure
Using AI as a coding coach alongside traditional methods empowers the learner. AI can create personalised learning paths that assist the tester at a level of coding that they are comfortable with, with test scenarios specific to the individual. In addition, AI can help ease the transition from natural language to code by allowing a tester to describe a scenario in a language they are comfortable with. Then they can see how AI translates that into executable test code.
Keep in mind that the approaches I've described in this article do not eliminate the need to have regular catch-ups with your team, conduct walkthroughs of various automation frameworks, or support your team on their journey. However, if used correctly, AI can empower your student test automators, giving them the confidence to keep up with the changes that widespread adoption of AI is bringing.
For testers who are already gaining confidence in writing test scripts and code, AI can review code that the tester has written, highlight errors, and explain fixes in simple terms. Even if testers are not using AI to generate or review code, they can still use it to explain in natural language what existing code does and how it can be used.
What do YOU think?
Got comments or thoughts? Share them in the comments box below. If you like, use the ideas below as starting points for reflection and discussion.
- When you first started writing automation scripts, what surprised you the most?
- How are you using AI to assist with testing?
- How does your team support each other in improving their skills?