Kat Obring
Founder, Director
she/her
Open to writing, teaching, and speaking opportunities
Kat Obring has spent 20+ years in software delivery: DevOps QA engineer, Head of Delivery, and now a quality coach who helps engineering teams build measurable improvement practices. She runs Kato Coaching Ltd and has presented at PeersCon, Agile Testing Days, HUSTEF, and TestBash, among others. She is direct, opinionated, and has strong feelings about the word "quality."
Achievements
Certificates
Awarded for: achieving 5 or more Community Star badges
Activity
Earned: 6.6.0 of MoT Software Testing Essentials Certificate
Earned: 5.4.0 of MoT Software Testing Essentials Certificate
Earned: 5.3.0 of MoT Software Testing Essentials Certificate
Earned: 3.4.0 of MoT Software Testing Essentials Certificate
Earned: Cosmic conversation: the possibilities of Quality Coaching
Contributions
DORA metrics are four metrics used to understand software delivery performance. They focus on flow, stability, and recovery rather than individual practices. This makes them organisationally useful, but limits how directly they can guide local quality decisions.
The four DORA metrics are:
Deployment frequency: shows movement, not confidence. A team can deploy often while still relying on late manual checks and unexamined risk.
Lead time for changes: highlights where work is slowing down, but rarely shows why. Delays may appear in testing stages, yet closer inspection often shows that feedback is delayed because tests are hard to interpret or failures arrive too late to be useful.
Change failure rate: reflects shared system behaviour. Treating it as a testing KPI creates blame rather than learning. Teams can become defensive, and the metric loses its usefulness.
Time to restore service: varies widely depending on context. Where tests encode realistic scenarios and systems are observable, diagnosis is faster. Where test coverage is shallow, incidents are harder to understand.
How DORA metrics are used by Quality professionals
Testers and Quality Engineers use DORA metrics as prompts rather than success criteria. A stable deployment frequency raises questions about where risk is being absorbed. A rising lead time invites investigation into feedback delays. A spike in change failure rate becomes a starting point for exploring escaped defects and test blind spots.
DORA metrics help teams decide where to investigate, but they do not tell them what to change. They are too abstract to guide specific improvements. Teams often need something more local and more closely tied to the decisions they make during delivery.
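To make the four metrics concrete, here is a minimal sketch of how a team might compute them from its own deployment records. The `Deployment` record and field names are assumptions for illustration, not part of any standard DORA tooling; real pipelines would pull these timestamps from CI/CD and incident systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Deployment:
    """Hypothetical record of one production deployment."""
    committed_at: datetime                   # when the change was first committed
    deployed_at: datetime                    # when it reached production
    failed: bool = False                     # did it cause a production failure?
    restored_at: Optional[datetime] = None   # when service was restored, if it failed

def dora_metrics(deployments: List[Deployment], window_days: int = 30) -> dict:
    """Compute the four DORA metrics over a window of deployments."""
    n = len(deployments)
    if n == 0:
        return {}
    # Deployment frequency: deployments per day over the window.
    frequency = n / window_days
    # Lead time for changes: mean commit-to-deploy time, in hours.
    lead_time_h = sum(
        (d.deployed_at - d.committed_at).total_seconds() / 3600 for d in deployments
    ) / n
    # Change failure rate: share of deployments that caused a failure.
    failures = [d for d in deployments if d.failed]
    failure_rate = len(failures) / n
    # Time to restore service: mean deploy-to-restore time for failures, in hours.
    restore_times = [
        (d.restored_at - d.deployed_at).total_seconds() / 3600
        for d in failures
        if d.restored_at is not None
    ]
    mttr_h = sum(restore_times) / len(restore_times) if restore_times else None
    return {
        "deployment_frequency_per_day": frequency,
        "lead_time_hours": lead_time_h,
        "change_failure_rate": failure_rate,
        "time_to_restore_hours": mttr_h,
    }
```

Note that the numbers this returns are exactly the kind of aggregate the post warns about: a failure rate of 0.5 tells you to investigate, not what to change.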
I've just reviewed Module 3, Lesson 3 of the Software Quality Engineering Certificate (SQEC).
...
A family of delivery approaches where work moves through clearly defined phases in a fixed order, such as requirements, design, development, testing, and release. Each phase must be completed before the next one begins, and changes late in the process are costly because they require revisiting earlier stages. These models assume stability of requirements and low variability in the delivery process.
Most conversations about AI in testing are still stuck on the same question:
“Will AI replace testers?”
Meanwhile, the real risks are hiding in plain sight.
Vendors rarely talk about the dee...
Develop the mindset and practical coaching techniques that help teams build shared responsibility for continuous quality