Daria Tsion
Head of QA
she/her
Open to: work, writing, teaching, speaking, mentoring, internships, CV reviews, podcasting, meeting at MoTaCon 2026, and reviewing conference proposals

Head of QA with 11+ years of experience in building QA processes, leading teams, and driving test automation. Passionate about AI in QA, process optimization, and leadership in quality.

Achievements

Career Champion
Club Explorer
Bio Builder
MoT Streak
In the Loop
Collection Curator
Glossary Contributor
Photo Historian
Author Debut
99 and Counting
Inclusive Companion
Social Connector
Open to Opportunities
Picture Perfect
Kind Click
Supportive Clicker
Goal Setter
Moment Maker

Certificates

404 Certificate Not Found

Activity

Daria Tsion achieved: Collection Curator
This badge is awarded to members who create their first collection, organising learning content for themselves or others.

Daria Tsion contributed: Leadership in action (new collection)

Daria Tsion achieved: Career Champion
This badge is awarded to members who have subscribed as Professional Members.

Daria Tsion contributed: Test surface

Contributions

Test surface
The test surface in feature testing represents the total, combined area of all public methods, parameters, and application programming interfaces (APIs) of a component that must be validated to ensure it works correctly. It defines the scope of testing required, where a larger, more complex surface area necessitates more in-depth testing. It includes all possible variations introduced by factors such as feature flags, environment differences (e.g. dev, staging, production), user segments, and rollout strategies. Understanding and managing the test surface is important for effective test planning, as it helps teams identify what needs to be tested, avoid gaps in coverage, and reduce the risk of issues caused by untested combinations of conditions.
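The combinatorial growth described above can be made concrete with a short sketch. The dimensions and values below are purely illustrative, not taken from any real system:

```python
from itertools import product

# Hypothetical test-surface dimensions for a single feature.
# Each list is one source of variation named in the definition above.
feature_flags = ["flag_on", "flag_off"]
environments = ["dev", "staging", "production"]
user_segments = ["free", "paid"]

# Every combination of conditions is one point on the test surface.
surface = list(product(feature_flags, environments, user_segments))

print(len(surface))  # 2 * 3 * 2 = 12 combinations to consider
```

Even three small dimensions already produce a dozen combinations, which is why teams map the surface before deciding what actually needs dedicated coverage.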
Quality narratives and the circles of consequence - Ep 121
With Cassandra H. Leung, Simon Tomes, Judy Mosley, Ujjwal Kumar Singh, Demi Van Malcot, Heleen Van Grootven, and Daria Tsion
Set meaningful goals, communicate quality through risks and real-world consequences, and turn small wins like building a quality narrative into career growth.
My 2026 Goals ✨
The end of 2025 was emotionally one of the hardest periods for me since 2022. Living in a world full of political changes while war is happening in your own country is not easy. Blackouts, massive ...
Five practical ways to use AI as a partner in Quality Engineering
Use these structured prompting techniques to improve the quality and usefulness of AI output in testing workflows.
Prompt chaining
Prompt chaining is a technique where the output of one prompt is used as the input for the next prompt in a sequence. This allows complex tasks to be broken into smaller, more manageable steps, enabling deeper analysis, comparison, and refinement across large or complex problem spaces.
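A minimal sketch of the chaining idea. `call_model` is a hypothetical stand-in for a real LLM API call, stubbed here so the example runs offline; the prompts themselves are illustrative:

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to a model API
    # and return the generated text. Here we return a traceable marker.
    return f"[model output for: {prompt}]"

def chain(task: str) -> str:
    # Step 1: extract requirements from the task description.
    requirements = call_model(f"List the requirements in: {task}")
    # Step 2: feed step 1's output in as the input of the next prompt.
    test_ideas = call_model(f"Suggest test ideas for: {requirements}")
    # Step 3: refine step 2's output into concrete test cases.
    return call_model(f"Turn these ideas into test cases: {test_ideas}")

result = chain("Users can reset their password by email")
```

Each step stays small and inspectable, which is the point of the technique: a failure in step 2 can be debugged without rerunning the whole chain.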
Self-critique prompting
Self-critique prompting describes the practice of asking AI to review its own output against specific criteria, such as readability, coverage, or standards, and then improve the result based on those findings. This mirrors human review processes and helps identify gaps, inconsistencies, or improvement opportunities before the output is used in production.
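The draft/critique/revise loop can be sketched as three calls. `call_model` is again a hypothetical stub for a real LLM call, and the criteria named in the prompts are illustrative:

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"[model output for: {prompt}]"

# 1. Produce a first draft.
draft = call_model("Write a test charter for the checkout flow")

# 2. Ask the model to review its own output against named criteria.
critique = call_model(
    f"Review this draft for readability and coverage gaps:\n{draft}"
)

# 3. Revise the draft using the critique, mirroring a human review cycle.
revised = call_model(
    f"Improve the draft using this critique:\nDraft: {draft}\nCritique: {critique}"
)
```

Naming the criteria explicitly (readability, coverage) is what distinguishes this from simply asking the model to "try again".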
Iterative prompting
Iterative prompting is an approach where prompts are refined over multiple steps based on previous outputs. Instead of expecting a perfect result from a single prompt, the user reviews, adjusts constraints, and asks follow-up questions to gradually improve accuracy, quality, and relevance.
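As a sketch, iteration is just a loop that feeds the previous output back in with one new constraint per round. `call_model` is a hypothetical stub and the constraints are illustrative:

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"[model output for: {prompt}]"

# Start with a rough first attempt rather than expecting perfection.
output = call_model("Draft a regression test plan for the login page")

# Each round reviews the previous output and tightens one constraint.
for constraint in [
    "Keep it under ten test cases",
    "Add expected results for each case",
    "Flag any cases suitable for automation",
]:
    output = call_model(f"Revise this plan. Constraint: {constraint}\n{output}")
```

In practice the constraints come from the user reviewing each round's output, not from a fixed list; the list here just makes the loop structure visible.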
Role-based prompting
Role-based prompting is a prompting technique where the user explicitly defines the role or perspective the AI should take before generating a response. By assigning a role such as a tester, automation engineer, or reviewer, the AI output becomes more focused, context-aware, and aligned with real-world responsibilities and expectations.
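The technique amounts to prefixing the task with an explicit role statement. A minimal sketch, with `call_model` as a hypothetical offline stub and the roles chosen for illustration:

```python
def call_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return f"[model output for: {prompt}]"

def ask_as(role: str, task: str) -> str:
    # State the role before the task so the response adopts
    # that perspective and its real-world responsibilities.
    return call_model(f"You are a {role}. {task}")

tester_view = ask_as("senior exploratory tester", "Assess risks in the signup form")
reviewer_view = ask_as("code reviewer", "Assess risks in the signup form")
```

The same task asked through two roles yields two differently focused answers, which is useful when you want, say, usability risks and code-level risks surfaced separately.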
Test log
With Rosie Sherry
A test log is a record of information generated during test execution. It typically includes details such as test steps, timestamps, system responses, errors, and execution status, and is used to support debugging, failure analysis, and understanding test behaviour.
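The fields in that definition map naturally onto a small record type. A minimal sketch; the field names and sample entries are illustrative, not taken from any specific tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestLogEntry:
    step: str                 # the action performed
    status: str               # execution status, e.g. "pass" or "fail"
    response: str = ""        # system response observed
    error: str = ""           # error details, if any
    timestamp: str = field(   # when the step ran
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A tiny log from a hypothetical run, used here for failure analysis.
log = [
    TestLogEntry("open login page", "pass", response="200 OK"),
    TestLogEntry("submit empty form", "fail", error="no validation message"),
]
failures = [entry for entry in log if entry.status == "fail"]
```

Filtering by status is the simplest form of the failure analysis the definition mentions; real tools add richer querying over the same kind of record.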