Emily O'Connor
Principal Quality Engineer
She/Her
Technical leader with a sixth sense for bugs. Avid learner, passionate about translating "dev-speak" to help teams adopt automation and AI-accelerated quality engineering. I believe great software starts with user-focused problem solving, and that automation should surface the bugs PMs actually care about fixing.
Achievements
Certificates
Awarded for:
Passing the exam with a score of 95%
Awarded for:
Achieving 5 or more Community Star badges
Activity
earned:
Magic Values
contributed:
Definitions of Magic Values
earned:
Stub
contributed:
Definitions of Stub
earned:
13.4.0 of MoT Software Testing Essentials Certificate
Contributions
Magic values are values hard-coded directly in your tests without any extra comment or context. These values make your code less readable and harder to maintain. Magic strings can confuse the reader of your tests: if a string looks out of the ordinary, they might wonder why a particular value was chosen for a parameter or return value. That kind of value leads them to dig into implementation details rather than focus on the test itself.
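A minimal sketch of the idea, using an invented HTTP status check (the endpoint behaviour and constant name are assumptions for illustration):

```typescript
// With a magic value, the reader must guess why 422 matters here:
//   expect(response.status()).toBe(422);

// Naming the value records the intent directly in the test:
const UNPROCESSABLE_ENTITY = 422; // hypothetical: API rejects an order with no line items

function isRejectedAsInvalid(status: number): boolean {
  return status === UNPROCESSABLE_ENTITY;
}

console.log(isRejectedAsInvalid(422)); // true
console.log(isRejectedAsInvalid(200)); // false
```

The named constant answers "why this value?" without forcing the reader to open the implementation.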
 A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly.
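A small sketch of a stub in TypeScript; the `PricingService` interface and discount logic are invented for illustration:

```typescript
// Hypothetical dependency the code under test normally calls:
interface PricingService {
  getDiscount(customerId: string): number;
}

// Production code depends only on the interface:
function finalPrice(base: number, pricing: PricingService, customerId: string): number {
  return base * (1 - pricing.getDiscount(customerId));
}

// The stub returns a canned value, so the test never touches the real service:
const pricingStub: PricingService = {
  getDiscount: () => 0.25, // always 25% off, regardless of customer
};

console.log(finalPrice(100, pricingStub, "any-customer")); // 75
```

Because the stub is fully controllable, the test exercises the pricing arithmetic in isolation from the real dependency.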
Master the RICCE framework to transition from manual testing to AI-augmented automation by leveraging structured prompting and the Playwright MCP agent ecosystem.
Evaluate the shift from "automate all" to "review all" by using AI agents for test plans while applying human-led ACE feedback to ensure code quality and business relevance.
Apply the "5 Whys" technique to both technical debugging and organisational workflows to uncover deep-seated root causes and implement sustainable continuous improvements.
Don't let AI automate away your thinking; use the tool to create tests that represent the features PMs actually care about fixing.
What is meant by monitoring?

By textbook definition, monitoring is the process of collecting, analyzing, and using information to track a program's progress toward its objectives and to guide management decisions. Monitoring focuses on watching specific metrics. Logging provides additional data, but it is typically viewed in isolation from the broader system context.

What is the difference between observability and monitoring?

Monitoring is capturing and displaying data, whereas observability can discern system health by analyzing a system's inputs and outputs. For example, actively watching a single metric for changes that indicate a problem is monitoring. A system is observable if it emits useful data about its internal state, which is crucial for determining the root cause.

Monitoring typically provides a limited view of system data, focused on individual metrics. This approach is sufficient when a system's failure modes are well understood. Because monitoring tends to focus on key indicators such as utilization rates and throughput, it reflects overall system performance. For example, when monitoring a database, you'll want to know about any latency when writing data to disk, or the average query response time. Experienced database administrators learn to spot patterns that lead to common problems: a spike in memory utilization, a decrease in cache hit ratio, or an increase in CPU utilization may indicate a poorly written query that needs to be terminated and investigated.

Conventional database performance analysis is simple compared to diagnosing microservice architectures with many components and an array of dependencies. Monitoring is helpful when we understand how systems fail, but as applications become more complex, so do their failure modes. It is often impossible to predict how distributed applications will fail.
By making a system observable, you can understand the internal state of the system and from that, you can determine what is not working correctly and why.
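The distinction can be sketched in code. Here a service emits structured events about its internal state; the event shape, field names, and threshold are invented for illustration:

```typescript
// Hypothetical structured event emitted per database query:
interface QueryEvent {
  query: string;
  durationMs: number;
  cacheHit: boolean;
}

// Monitoring view: one aggregate metric (average query response time).
function averageDurationMs(events: QueryEvent[]): number {
  return events.reduce((sum, e) => sum + e.durationMs, 0) / events.length;
}

// Observability view: the raw events let you ask new questions after the fact,
// e.g. "are the slow queries the ones missing the cache?"
function slowCacheMisses(events: QueryEvent[], thresholdMs: number): QueryEvent[] {
  return events.filter((e) => !e.cacheHit && e.durationMs > thresholdMs);
}

const events: QueryEvent[] = [
  { query: "SELECT 1", durationMs: 5, cacheHit: true },
  { query: "SELECT *", durationMs: 120, cacheHit: false },
];

console.log(averageDurationMs(events));           // 62.5
console.log(slowCacheMisses(events, 100).length); // 1
```

The aggregate metric tells you something is slow; the emitted internal state tells you why.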
Time to restore service, also known as time to recovery, measures the duration required to recover from an incident. It reflects the time taken to restore normal operations: the metric starts when an outage begins and ends when the system is fully operational again, capturing the total recovery duration.

Understanding Mean Time to Recovery helps teams identify areas for improvement in their recovery processes and work towards minimizing downtime.
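The calculation is simple: total downtime across incidents divided by the number of incidents. A sketch with invented incident data:

```typescript
// Hypothetical incident record; timestamps are invented for illustration.
interface Incident {
  outageStart: Date;
  serviceRestored: Date;
}

// Mean Time to Recovery = total downtime / number of incidents
function meanTimeToRecoveryMinutes(incidents: Incident[]): number {
  const totalMs = incidents.reduce(
    (sum, i) => sum + (i.serviceRestored.getTime() - i.outageStart.getTime()),
    0,
  );
  return totalMs / incidents.length / 60_000; // milliseconds to minutes
}

const incidents: Incident[] = [
  { outageStart: new Date("2024-01-01T10:00:00Z"), serviceRestored: new Date("2024-01-01T10:30:00Z") }, // 30 min
  { outageStart: new Date("2024-02-01T09:00:00Z"), serviceRestored: new Date("2024-02-01T09:10:00Z") }, // 10 min
];

console.log(meanTimeToRecoveryMinutes(incidents)); // 20
```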
AGENTS.md

Titles must describe the expected user behaviour, written from the user's perspective:
test('As a user, I can create a new project', ...)
test('As a user, I can delete an existing client', ...)
Writing an AGENTS.md that outlines Playwright standards isn't just documentation for its own sake: it directly improves how reliably and efficiently your automated agents (or test authors) behave.
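One way such a standard pays off is that it becomes checkable. A minimal sketch of a lint helper an agent or reviewer could run over test titles; the helper name and regex are assumptions, only the "As a <role>, I can <behaviour>" convention comes from the guideline above:

```typescript
// Hypothetical check: does a test title follow the user-perspective convention?
function followsTitleConvention(title: string): boolean {
  // Expect a user story shape: "As a <role>, I can <behaviour>"
  return /^As an? .+, I can .+/.test(title);
}

console.log(followsTitleConvention("As a user, I can create a new project")); // true
console.log(followsTitleConvention("test project creation"));                 // false
```

Encoding the convention as code means the AGENTS.md rule is enforced mechanically rather than remembered.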
...