Emily O'Connor
Principal Quality Engineer
She/Her

Technical leader with a sixth sense for bugs. Avid learner, passionate about translating "dev-speak" to help teams adopt automation and AI-accelerated quality engineering. I believe great software starts with user-focused problem solving, and that automation should surface the bugs that PMs actually care about fixing.

Achievements

TestBash Trailblazer
Bio Builder
Career Champion
Avid Reader
Club Explorer
Article Maven
MoT Community Certificate
MoT Software Testing Essentials Certificate
Scholarship Hero
99 Second Speaker
MoT Streak
In the Loop
404 Talk (Not) Found
Bug Finder
Collection Curator
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee
Cert Shaper
Testing Out Loud
Author Debut
A tester's role in continuous quality
Cognitive biases in software testing
Introduction to Cypress
99 and Counting
Chapter Event Speaker
Pride Supporter
Inclusive Companion
Social Connector
Open to Opportunities
Found at 404
Picture Perfect
Kind Click
Supportive Clicker
Goal Setter
Insights Taster
Chapter Discovery
Call for Insights
Moment Maker
Moment Sharer

Certificates

MoT Software Testing Essentials Certificate
Awarded for: Passing the exam with a score of 95%
MoT Community Certificate
Awarded for: Achieving 5 or more Community Star badges

Activity

Emily O'Connor contributed: Definitions of Magic Values
Emily O'Connor contributed: Definitions of Stub
13.4.0 of MoT Software Testing Essentials Certificate

Contributions

Magic Values
  • Emily O'Connor
Magic values are values hard-coded directly in your tests without any extra comment or context. These values make your code less readable and harder to maintain. Magic strings can confuse the reader of your tests: if a string looks out of the ordinary, they might wonder why a certain value was chosen for a parameter or return value. This type of string value might lead them to take a closer look at the implementation details, rather than focus on the test.
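As a minimal sketch (the project code and function names here are hypothetical), compare a magic string buried in the code with the same value extracted to a named constant:

```javascript
// Magic string: why "PX-042"? The reader has to dig into the
// implementation to find out what this value means.
function buildProjectBad() {
  return { code: "PX-042", name: "Demo" };
}

// Named constant: the intent travels with the name, not the raw value.
const DEFAULT_PROJECT_CODE = "PX-042"; // code assigned to newly created demo projects

function buildProject() {
  return { code: DEFAULT_PROJECT_CODE, name: "Demo" };
}

console.log(buildProject().code); // "PX-042"
```

Both functions behave identically; only the second tells the reader why the value was chosen.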
Stub
  • Emily O'Connor
A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly.
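A minimal sketch of the idea, using a hypothetical currency-conversion dependency: the stub stands in for the real rate provider, so the test stays deterministic and needs no network access:

```javascript
// Production code depends on a collaborator that fetches exchange rates.
function convert(amount, currency, rateProvider) {
  return amount * rateProvider.getRate(currency);
}

// In a test, a stub replaces the real provider with controllable,
// hard-wired behaviour.
const rateProviderStub = {
  getRate: (currency) => (currency === "EUR" ? 0.5 : 1.0),
};

console.log(convert(100, "EUR", rateProviderStub)); // 50
```

Because the stub's responses are fixed, the test exercises only the code under test, not the dependency.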
From manual to automated: a tester’s journey into AI
  • Emily O'Connor
  • Jonathan Cole
  • Shawn Vernier
Master the RICCE framework to transition from manual testing to AI-augmented automation by leveraging structured prompting and the Playwright MCP agent ecosystem.
Software Testing Live: Episode 06 - Don't automate everything, review everything
  • Ben Dowen
  • Emily O'Connor
Evaluate the shift from "automate all" to "review all" by using AI agents for test plans while applying human-led ACE feedback to ensure code quality and business relevance.
Two ways to use the “5 whys” method: Root cause of bugs and identifying continuous improvements
  • Richard Adams
  • Emily O'Connor
Apply the "5 Whys" technique to both technical debugging and organisational workflows to uncover deep-seated root causes and implement sustainable continuous improvements.
Don't automate everything; review everything
  • Emily O'Connor
Don't let AI automate away your thinking; use the tool to create tests that represent features PMs actually care about fixing.
Monitoring
  • Emily O'Connor
What is meant by monitoring?

By textbook definition, monitoring is the process of collecting, analyzing, and using information to track a program’s progress toward reaching its objectives and to guide management decisions. Monitoring focuses on watching specific metrics. Logging provides additional data but is typically viewed in isolation from the broader system context.

What is the difference between observability and monitoring?

Monitoring is capturing and displaying data, whereas observability can discern system health by analyzing inputs and outputs. For example, we can actively watch a single metric for changes that indicate a problem; this is monitoring. A system is observable if it emits useful data about its internal state, which is crucial for determining the root cause.

Monitoring typically provides a limited view of system data focused on individual metrics. This approach is sufficient when a system's failure modes are well understood. Because monitoring tends to focus on key indicators such as utilization rates and throughput, it indicates overall system performance. For example, when monitoring a database, you’ll want to know about any latency when writing data to disk, or the average query response time. Experienced database administrators learn to spot patterns that can lead to common problems, such as a spike in memory utilization, a decrease in cache hit ratio, or an increase in CPU utilization. These issues may indicate a poorly written query that needs to be terminated and investigated.

Conventional database performance analysis is simple compared to diagnosing microservice architectures with multiple components and an array of dependencies. Monitoring is helpful when we understand how systems fail, but as applications become more complex, so do their failure modes. It is often not possible to predict how distributed applications will fail.
By making a system observable, you can understand the internal state of the system and from that, you can determine what is not working correctly and why.
Time to restore service
  • Kat Obring
Time to restore service, also known as time to recovery, measures the duration required to recover from an incident. It reflects the time taken to restore normal operations. This metric starts when an outage begins and ends when the system is fully operational again, capturing the total recovery duration. Understanding mean time to recovery helps teams identify areas for improvement in their recovery processes and work towards minimizing downtime.
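As an illustration (the incident timestamps below are hypothetical), mean time to recovery is simply the average of the restore-minus-start durations across incidents:

```javascript
// Hypothetical incident log: outage start and full-restore timestamps (ISO 8601).
const incidents = [
  { start: "2025-03-01T10:00:00Z", restored: "2025-03-01T10:45:00Z" }, // 45 min
  { start: "2025-03-08T14:00:00Z", restored: "2025-03-08T14:15:00Z" }, // 15 min
];

// Mean time to recovery: average of (restored - start), converted to minutes.
function meanTimeToRestoreMinutes(incidents) {
  const totalMs = incidents.reduce(
    (sum, i) => sum + (Date.parse(i.restored) - Date.parse(i.start)),
    0
  );
  return totalMs / incidents.length / 60000;
}

console.log(meanTimeToRestoreMinutes(incidents)); // 30
```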
How to write automation that represents issues PMs would care about fixing
AGENTS.md: Titles must describe the expected user behaviour, written from the user's perspective: test('As a user, I can create a new project', ...), test('As a user, I can delete an existing client', ...
Write an agents.md file that improves AI-created test output and can be used for review before PR
Writing an agents.md that outlines Playwright standards isn't just documentation for its own sake; it directly improves how reliably and efficiently your automated agents (or test authors) behave. ...