Kat Obring
Founder, Director
she/her
Open to: Write, Teach, Speak

Kat Obring has spent 20+ years in software delivery: DevOps QA engineer, Head of Delivery, and now a quality coach who helps engineering teams build measurable improvement practices. She runs Kato Coaching Ltd and has presented at PeersCon, Agile Testing Days, HUSTEF, and TestBash, among others. She is direct, opinionated, and has strong feelings about the word "quality."

Achievements

Bio Builder
TestBash Trailblazer
Career Champion
Club Explorer
MoT Community Certificate
TestBash Speaker
The Testing Planet Contributor
Glossary Contributor
Meme Maker
Photo Historian
TestBash Brighton 2025 Attendee
TestBash Brighton 2024 Attendee
TestBash Teacher
Cert Shaper
Course creator
Lead with quality
99 and Counting
Inclusive Companion
Social Connector
Open to Opportunities
Picture Perfect
Kind Click
Chapter Discovery
Call for Insights
Moment Maker

Certificates

MoT Community Certificate
Awarded for: Achieving 5 or more Community Star badges

Activity

Kat Obring earned:
17.4.0 of MoT Software Testing Essentials Certificate
6.8.0 of MoT Software Quality Engineering Certificate
1.3.0 of Quality Coaching essentials
14.2.0 of MoT Software Testing Essentials Certificate
12.1.0 of MoT Software Testing Essentials Certificate

Contributions

QED (Quality-focused Experimentation and Development)
A framework for improving quality through short, evidence-based cycles. It follows three steps: Question (define a problem worth solving), Evidence (design targeted metrics to measure it), and Develop (run small, time-boxed experiments to test solutions). Each cycle takes 2 to 4 weeks.
Time to restore service
How long it takes to recover from a failure in production once it has been detected.
Change failure rate
The percentage of deployments that result in a failure requiring remediation, such as a rollback, hotfix, or incident.
Lead time for changes
The time it takes for a code commit to reach production.
Deployment frequency
How often a team successfully releases to production within a given period.
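The four definitions above are simple arithmetic once you have deployment records. As an illustrative sketch (the record fields and sample data below are assumptions for the example, not any real tool's schema), all four metrics can be computed over an observation window like this:

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records for a 7-day window. Each record notes when
# the change was committed, when it reached production, whether it needed
# remediation, and (if so) how many hours recovery took.
deployments = [
    {"commit": datetime(2024, 5, 1, 9), "deploy": datetime(2024, 5, 1, 15), "failed": False, "restore_hours": None},
    {"commit": datetime(2024, 5, 2, 10), "deploy": datetime(2024, 5, 3, 11), "failed": True, "restore_hours": 2.5},
    {"commit": datetime(2024, 5, 6, 8), "deploy": datetime(2024, 5, 6, 17), "failed": False, "restore_hours": None},
    {"commit": datetime(2024, 5, 7, 9), "deploy": datetime(2024, 5, 8, 9), "failed": True, "restore_hours": 1.0},
]
period_days = 7  # length of the observation window

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / period_days

# Lead time for changes: median hours from code commit to production.
lead_times = [(d["deploy"] - d["commit"]).total_seconds() / 3600 for d in deployments]
lead_time_median = median(lead_times)

# Change failure rate: share of deployments that required remediation.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Time to restore service: median hours to recover from a failed deployment.
time_to_restore = median(d["restore_hours"] for d in failures)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")   # 0.57/day
print(f"Lead time (median): {lead_time_median:.1f} h")       # 16.5 h
print(f"Change failure rate: {change_failure_rate:.0%}")     # 50%
print(f"Time to restore (median): {time_to_restore:.2f} h")  # 1.75 h
```

Medians are used here rather than means because delivery timings are typically skewed by outliers; real pipelines would pull these records from a deployment tracker rather than a hand-written list.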
SaaS (Software as a Service)
Software delivered over the internet on a subscription basis, hosted and maintained by the provider rather than installed locally. Examples include Slack, Jira, and Salesforce.
The high cost of stagnant pull requests: moving towards collaborative Quality
With Ben Dowen and Gary Hawkes
Analyse the risks of stagnant pull requests and adopt a Quality Engineering mindset to reduce technical debt and accelerate value delivery through developer-led testing and faster merge cycles.
DORA metrics
DORA metrics are four metrics used to understand software delivery performance. They focus on flow, stability, and recovery rather than individual practice. This makes them organisationally useful, but limits how directly they can guide local quality decisions. The four DORA metrics are:

Deployment frequency: Shows movement, not confidence. A team can deploy often while still relying on late manual checks and unexamined risk.

Lead time for changes: Highlights where work is slowing down, but rarely shows why. Delays may appear in testing stages, yet closer inspection often shows that feedback is delayed because tests are hard to interpret or failures arrive too late to be useful.

Change failure rate: Reflects shared system behaviour. Treating it as a testing KPI creates blame rather than learning. Teams can become defensive, and the metric loses its usefulness.

Time to restore service: Varies widely depending on context. Where tests encode realistic scenarios and systems are observable, diagnosis is faster. When test coverage is shallow, incidents are harder to understand.

How DORA metrics are used by Quality professionals

Testers and Quality Engineers use DORA metrics as prompts rather than success criteria. A stable deployment frequency raises questions about where risk is being absorbed. A rising lead time invites investigation into feedback delays. A spike in change failure rate becomes a starting point for exploring escaped defects and test blind spots.

DORA metrics help teams decide where to investigate, but they do not tell them what to change. They are too abstract to guide specific improvements. Teams often need something more local and more closely tied to the decisions they make during delivery.