Software Testing Live: Episode 06 - Don't automate everything, review everything

02 Apr 2026
Talk Description

In this episode of Software Testing Live, Ben Dowen is joined by Principal Quality Engineer Emily O’Connor for a live session exploring the balance between automation and human review in modern software testing.

As AI-powered coding and test generation tools become more widespread, the session focuses on why testers must review everything, even when it is generated automatically, and why they should avoid the trap of automating everything by default.

Ben and Emily explore a ticketing application in real time, combining exploratory testing with AI-generated test planning and writing using Playwright MCP. Along the way, they uncover usability issues, accessibility concerns, and gaps in functionality, while also demonstrating how AI can quickly produce overly complex or misaligned test plans if left unchecked.

The session highlights the importance of critical thinking, context awareness, and intentional test design, showing that while AI can accelerate test creation, it often requires refinement to ensure tests are meaningful, maintainable, and aligned with business value.

You’ll learn how to:

  • Balance exploratory testing and automation to focus on what really matters to users and the business
  • Decide what should and should not be automated, avoiding low-value or overly granular tests
  • Critically review AI-generated test plans and identify unnecessary complexity or incorrect assumptions
  • Apply good test design principles (Arrange–Act–Assert, small focused tests, clear intent)
  • Recognise common pitfalls of AI-generated automation, including over-generation and hidden assumptions
  • Evaluate usability, accessibility, and missing features during exploratory testing
  • Ensure automated tests remain maintainable, readable, and valuable within a broader test strategy
  • Understand why human review is more important than ever in AI-assisted development workflows
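The Arrange–Act–Assert principle from the list above can be illustrated with a small, focused test. This is a minimal sketch only: the `TicketStore` class and its methods are invented for illustration and are not taken from the ticketing application explored in the session.

```python
# Hypothetical in-memory ticket store, used only to demonstrate
# the Arrange-Act-Assert pattern with a small, single-purpose test.
class TicketStore:
    def __init__(self):
        self._tickets = []

    def create(self, title):
        # Each new ticket starts in the "open" state.
        ticket = {"id": len(self._tickets) + 1, "title": title, "status": "open"}
        self._tickets.append(ticket)
        return ticket

    def close(self, ticket_id):
        for ticket in self._tickets:
            if ticket["id"] == ticket_id:
                ticket["status"] = "closed"
                return ticket
        raise KeyError(ticket_id)


def test_closing_a_ticket_marks_it_closed():
    # Arrange: set up exactly the state this test needs, no more.
    store = TicketStore()
    ticket = store.create("Broken login link")

    # Act: perform the single behaviour under test.
    closed = store.close(ticket["id"])

    # Assert: one clear, focused expectation with obvious intent.
    assert closed["status"] == "closed"


test_closing_a_ticket_marks_it_closed()
```

The same shape carries over to UI automation: one test, one behaviour, a name that states the intent, and setup kept separate from the action and the check, which is what keeps AI-generated tests reviewable.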
Ben Dowen
Lead Quality Engineer
He/Him

I explore the world of software development and testing. Join me as I continue my adventures, discovering the wonders of quality.

Chapter Lead
Ambassador
Emily O'Connor
Principal Quality Engineer
She/Her

Technical leader with a sixth sense for bugs. Avid learner, passionate about translating "dev-speak" to help teams adopt automation and AI-accelerated quality engineering. I believe great software starts with user-focused problem solving, and that automation should surface the bugs that PMs actually care about fixing.

Gary Hawkes
Really enjoyed the session, thank you both so much. I was an automated tester many years ago using the commercial tools. A combination of moving into a management role and automation tech moving on meant I kind of lost my confidence getting back into it. This has at least shed light on how, with AI, I could get myself back in the automation game.

Software Testing Live
More Talks
Ship with confidence: Agentic AI-Driven Quality with Rovo Dev and Xray

1h 3m 27s

Leading quality in a large team MoT Cincinnati

0h 57m 15s

A tester’s guide to AI guardrails

1h 5m 3s
