Kat Obring
Founder, Director
she/her
Open to: Write, Teach, Speak
Kat Obring has spent 20+ years in software delivery: DevOps QA engineer, Head of Delivery, and now a quality coach who helps engineering teams build measurable improvement practices. She runs Kato Coaching Ltd and has presented at PeersCon, Agile Testing Days, HUSTEF, and TestBash, among others. She is direct, opinionated, and has strong feelings about the word "quality."
Achievements
Certificates
Awarded for: Achieving 5 or more Community Star badges
Activity
Earned 17.4.0 of MoT Software Testing Essentials Certificate
Earned 6.8.0 of MoT Software Quality Engineering Certificate
Earned 1.3.0 of Quality Coaching Essentials
Earned 14.2.0 of MoT Software Testing Essentials Certificate
Earned 12.1.0 of MoT Software Testing Essentials Certificate
Contributions
A framework for improving quality through short, evidence-based cycles. It follows three steps: Question (define a problem worth solving), Evidence (design targeted metrics to measure it), and Develop (run small, time-boxed experiments to test solutions). Each cycle takes 2 to 4 weeks.
Time to restore service: How long it takes to recover from a failure in production once it has been detected.
Change failure rate: The percentage of deployments that result in a failure requiring remediation, such as a rollback, hotfix, or incident.
Lead time for changes: The time it takes for a code commit to reach production.
Deployment frequency: How often a team successfully releases to production within a given period.
Software as a Service (SaaS): Software delivered over the internet on a subscription basis, hosted and maintained by the provider rather than installed locally. Examples include Slack, Jira, and Salesforce.
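The four delivery measures above reduce to simple arithmetic over deployment and incident records. As a minimal sketch (the record shapes and sample values here are illustrative assumptions, not a real pipeline):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: commit time, deploy time, and whether
# the deployment caused a failure requiring remediation.
deployments = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 15), "failed": False},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 10), "failed": True},
    {"committed": datetime(2024, 5, 4, 8), "deployed": datetime(2024, 5, 4, 20), "failed": False},
    {"committed": datetime(2024, 5, 6, 9), "deployed": datetime(2024, 5, 7, 9), "failed": False},
]

# Hypothetical incident records: detection and restoration timestamps.
incidents = [
    {"detected": datetime(2024, 5, 3, 11), "restored": datetime(2024, 5, 3, 13, 30)},
]

period_days = 7

# Deployment frequency: successful releases per day over the period.
deploy_frequency = len(deployments) / period_days

# Lead time for changes: mean commit-to-production time.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that needed remediation.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to restore service: mean detection-to-recovery time.
restore_times = [i["restored"] - i["detected"] for i in incidents]
mean_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Deployments per day: {deploy_frequency:.2f}")
print(f"Mean lead time: {mean_lead_time}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {mean_restore}")
```

In practice these values come from CI/CD and incident-management tooling rather than hand-built lists, but the calculations are the same.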
Analyse the risks of stagnant pull requests and adopt a Quality Engineering mindset to reduce technical debt and accelerate value delivery through developer-led testing and faster merge cycles.
DORA metrics are four measures used to understand software delivery performance. They focus on flow, stability, and recovery rather than individual practices. This makes them organisationally useful, but limits how directly they can guide local quality decisions.
The four DORA metrics are:
Deployment frequency: Shows movement, not confidence. A team can deploy often while still relying on late manual checks and unexamined risk.
Lead time for changes: Highlights where work is slowing down, but rarely shows why. Delays may appear in testing stages, yet closer inspection often shows that feedback is delayed because tests are hard to interpret or failures arrive too late to be useful.
Change failure rate: Reflects shared system behaviour. Treating it as a testing KPI creates blame rather than learning. Teams can become defensive, and the metric loses its usefulness.
Time to restore service: Varies widely depending on context. Where tests encode realistic scenarios and systems are observable, diagnosis is faster. When test coverage is shallow, incidents are harder to understand.
How DORA metrics are used by Quality professionals
Testers and Quality Engineers use DORA metrics as prompts rather than success criteria. A stable deployment frequency raises questions about where risk is being absorbed. A rising lead time invites investigation into feedback delays. A spike in change failure rate becomes a starting point for exploring escaped defects and test blind spots.
DORA metrics help teams decide where to investigate, but they do not tell them what to change. They are too abstract to guide specific improvements. Teams often need something more local and more closely tied to the decisions they make during delivery.