Simon Tomes
Community Lead at MoTaverse
he/him
I am open to: writing, teaching, speaking, mentoring, CV reviews, podcasting, meeting at MoTaCon 2026, and reviewing conference proposals
Hello, I'm Simon. Since 2003 I've had various roles in testing, tech leadership and coaching. I believe in the power of collaboration, creativity and community. 🎓 MoT-STEC qualified.
Achievements
Certificates
Awarded for:
Passing the exam with a score of 100%
Activity
awarded Cameron Browne for:
We're a match because we both wore the same MoTaverse caps during a Call for Insights recording! 🤩
thanked contributors on:
Context Engineering comes with paradoxes and hard choices. This small byte talks about them and why it matters.
earned:
Context Engineering - What & Why?
awarded AI Chapter for:
Context Engineering - What & Why?
awarded Rahul Parwal for:
Context Engineering - What & Why?
Interests
advocacy
all-things-community
asking-questions
bug
bugs
bugs-in-the-wild
charters
coaching
coaching-testers
community
conferences
conference-speaking
continuous-learning
design
diversity
drumming
events
experimentation
exploratory-testing
heuristics
leadership
memes
mentoring
mot-stec
music
oracles
podcasting
process-improvement
product-development
product-management
public-speaking
quality-strategy
space-duck
space-seagull
systems-thinking
team-enablement
testbash
testbash-brighton
testing
writing
Contributions
How do we set up AI agents that are secure, reliable and trustworthy? Start with some principles.
If you didn't write it, do you own it? The role of AI Guardrails in code quality.
Need inspiration on what to write about in the MoTaverse? A growing collection of prompts and examples to get you started.
A pessimist, optimist and realist continue their previous discussion about writing. Last time around they were talking about writing about something they're enjoying. Pessimist: I'm much more comfor...
A hot-takes episode where the crew debates AI skepticism, manual testing’s staying power, whether AI agents really speed delivery, and why quality ultimately lives (or dies) with leadership decisions.
In classic software, you write a function, you write a test, and you know whether it passes or fails. AI, especially LLMs and agents, doesn't play by those rules. Outputs are probabilistic, context-sensitive, and non-deterministic: the same prompt can yield different answers, and "correctness" is often nuanced and qualitative rather than quantitative.

AI evals are structured, repeatable processes for measuring the quality, reliability, and safety of your AI applications. Evals are your compass. They help you navigate the messy, shifting landscape of real-world scenarios for your agents, ambiguous requirements, and evolving user needs. They're not about chasing a single "accuracy" number; they're about asking, "Is this system doing what we need, for our users, in our context?"
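To make that concrete, here is a minimal sketch of what such an eval loop can look like. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for a real LLM call, and the keyword grader is the simplest possible rubric (real evals often use regexes, structured parsing, or an LLM-as-judge).

```python
# A minimal, hypothetical eval harness: run a set of prompts through a
# model, grade each output against a rubric, and report a pass rate.

def ask_model(prompt: str) -> str:
    # Stub model for illustration only; a real eval would call an LLM API.
    canned = {
        "What is 2 + 2?": "2 + 2 equals 4.",
        "Name a primary colour.": "Red is a primary colour.",
    }
    return canned.get(prompt, "I'm not sure.")

def grade(output: str, must_contain: str) -> bool:
    # Rubric: a simple case-insensitive keyword check.
    return must_contain.lower() in output.lower()

def run_evals(cases):
    # Each case is (prompt, expected keyword). Returns (pass_rate, results).
    results = [(prompt, grade(ask_model(prompt), keyword))
               for prompt, keyword in cases]
    passed = sum(ok for _, ok in results)
    return passed / len(results), results

cases = [
    ("What is 2 + 2?", "4"),
    ("Name a primary colour.", "red"),
    ("What is the capital of France?", "Paris"),
]

pass_rate, results = run_evals(cases)
print(f"pass rate: {pass_rate:.0%}")  # the stub model passes 2 of 3 cases
```

The point isn't the keyword check itself; it's that the same cases can be re-run every time the prompt, model, or context changes, turning "does it feel right?" into a repeatable measurement.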
A great moment from Simon Tomes on the Quality Assistance model - https://www.ministryoftesting.com/moments/ama-about-transitioning-to-quality-assistance-model I will start from a story. At my previ...
A pessimist, optimist and realist are talking about writing. Optimist: I love writing. It's the best way to help me articulate my thoughts and ideas. And there's so much to write about given all the...
https://www.ministryoftesting.com/insights/the-soft-skills-shift-everything/ Check out the recent call for insights; it was my first for the Ministry of Testing. This is something that I'm pursuing ...
Are we trading soft skills for AI fluency?