Daria Tsion
Head of QA
she/her
Open to: work, writing, teaching, speaking, mentoring, internships, CV reviews, podcasting, meeting at MoTaCon 2026, and reviewing conference proposals.
Head of QA with 11+ years of experience in building QA processes, leading teams, and driving test automation. Passionate about AI in QA, process optimization, and leadership in quality.
Contributions

Test surface
The test surface in feature testing represents the total, combined area of all public methods, parameters, and application programming interfaces (APIs) of a component that must be validated to ensure it works correctly. It defines the scope of testing required: a larger, more complex surface necessitates more in-depth testing. It includes all possible variations introduced by factors such as feature flags, environment differences (e.g. dev, staging, production), user segments, and rollout strategies.

Understanding and managing the test surface is important for effective test planning, as it helps teams identify what needs to be tested, avoid gaps in coverage, and reduce the risk of issues caused by untested combinations of conditions.
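As a rough sketch of how those variation factors multiply the test surface, the combinations can be enumerated with `itertools.product`. The specific flag, environment, and segment values below are illustrative assumptions, not taken from any real system:

```python
from itertools import product

# Illustrative dimensions that expand a component's test surface
# (these specific values are assumptions for the example).
feature_flags = ["checkout_v2_on", "checkout_v2_off"]
environments = ["dev", "staging", "production"]
user_segments = ["free", "premium"]

# Each combination is a distinct condition under which behaviour may differ.
test_surface = list(product(feature_flags, environments, user_segments))

# 2 flags x 3 environments x 2 segments = 12 combinations to consider.
print(len(test_surface))
```

Even this small example yields 12 combinations, which is why teams usually prioritise by risk rather than test every cell exhaustively.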
Set meaningful goals, communicate quality through risks and real-world consequences, and turn small wins like building a quality narrative into career growth.
The end of 2025 was emotionally one of the hardest periods for me since 2022. Living in a world full of political changes while war is happening in your own country is not easy. Blackouts, massive ...
Use these structured prompting techniques to improve the quality and usefulness of AI output in testing workflows.
Prompt chaining is a technique where the output of one prompt is used as the input for the next prompt in a sequence. This allows complex tasks to be broken into smaller, more manageable steps, enabling deeper analysis, comparison, and refinement across large or complex problem spaces.
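The chaining structure can be sketched as below. The `ask_llm` function is a hypothetical stand-in for a real model call, not any particular API; it simply echoes its prompt so the flow of outputs into inputs stays visible:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call: it echoes the
    prompt so the chain of inputs stays visible without dependencies."""
    return f"[model output for: {prompt}]"

# Step 1: extract testable requirements from a feature description.
requirements = ask_llm(
    "List the testable requirements in: 'Users can reset a password via email.'"
)

# Step 2: chain -- the first output becomes the next prompt's input.
test_ideas = ask_llm(f"Propose one test case per requirement:\n{requirements}")

# Step 3: refine the combined result in a final link of the chain.
prioritised = ask_llm(f"Rank these test cases by risk:\n{test_ideas}")
```

Breaking the task into extract, expand, and rank steps tends to give more controllable results than asking for the ranked test cases in a single prompt.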
Self-critique prompting describes the practice of asking AI to review its own output against specific criteria, such as readability, coverage, or standards, and then improve the result based on those findings. This mirrors human review processes and helps identify gaps, inconsistencies, or improvement opportunities before the output is used in production.
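A minimal sketch of that draft-critique-improve loop, again using a hypothetical `ask_llm` stand-in rather than a real model API:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical model-call stand-in; echoes its prompt so the
    review loop's structure is visible without external dependencies."""
    return f"[model output for: {prompt}]"

# 1. Produce a first draft.
draft = ask_llm("Write test cases for a login form.")

# 2. Ask the model to critique its own output against explicit criteria.
critique = ask_llm(
    "Review the test cases below for coverage, readability, and naming. "
    f"List concrete gaps:\n{draft}"
)

# 3. Improve the draft based on the critique before anyone relies on it.
improved = ask_llm(
    f"Rewrite the draft to address this critique:\n{critique}\n\nDraft:\n{draft}"
)
```

The explicit criteria in step 2 matter: "review this" alone invites vague praise, while named criteria steer the critique toward actionable findings.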
Iterative prompting is an approach where prompts are refined over multiple steps based on previous outputs. Instead of expecting a perfect result from a single prompt, the user reviews, adjusts constraints, and asks follow-up questions to gradually improve accuracy, quality, and relevance.
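The refinement loop might look like the sketch below, where each round folds the previous answer and a new constraint back into the prompt (`ask_llm` is again a hypothetical stand-in, and the refinements are illustrative):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical model-call stand-in; echoes its prompt."""
    return f"[model output for: {prompt}]"

# Start broad, then tighten the prompt over several rounds
# based on what the previous answer looked like.
prompt = "List negative test cases for an email input field."
refinements = [
    "Limit the list to the five highest-risk cases.",
    "Add boundary-length values to each case.",
    "Format the result as a numbered list.",
]

answer = ask_llm(prompt)
for refinement in refinements:
    # Review the last answer, then follow up with a new constraint.
    prompt = f"{prompt}\n\nPrevious answer:\n{answer}\n\nRefinement: {refinement}"
    answer = ask_llm(prompt)
```

In practice the refinements come from a human reviewing each output rather than a fixed list, but the loop shape is the same.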
Role-based prompting is a prompting technique where the user explicitly defines the role or perspective the AI should take before generating a response. By assigning a role such as a tester, automation engineer, or reviewer, the AI output becomes more focused, context-aware, and aligned with real-world responsibilities and expectations.
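A sketch of the technique, with the role stated before the task (the `ask_llm` stand-in and the example role and task are assumptions for illustration):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical model-call stand-in; echoes its prompt."""
    return f"[model output for: {prompt}]"

# Define the role first, then attach the task: the same question gets a
# different, more focused answer depending on the assigned perspective.
role = "You are a senior test automation engineer reviewing a pull request."
task = "List the three biggest risks of adding automatic retries to flaky UI tests."

response = ask_llm(f"{role}\n\n{task}")
```

Swapping the role line, say to a security tester or a product owner, is often enough to surface concerns the default perspective would miss.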
A test log is a record of information generated during test execution. It typically includes details such as test steps, timestamps, system responses, errors, and execution status, and is used to support debugging, failure analysis, and understanding test behaviour.
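As a rough illustration of those details, one log entry could be emitted as a JSON line. The field names and values here are assumptions for the example, not a standard schema:

```python
import json
from datetime import datetime, timezone

# One illustrative test-log entry; field names are assumptions,
# not a standard schema.
entry = {
    "timestamp": datetime(2026, 1, 15, 10, 30, tzinfo=timezone.utc).isoformat(),
    "test": "test_login_with_valid_credentials",
    "step": "submit login form",
    "response": "HTTP 200, redirected to /dashboard",
    "status": "PASS",
    "error": None,
}

# Emitting entries as JSON lines keeps them easy to parse later
# during debugging and failure analysis.
print(json.dumps(entry))
```

One entry per executed step, written in a machine-readable format, makes it straightforward to reconstruct what happened when a run fails.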