Daria Tsion
Head of QA / QA Manager
she/her
Open to work, writing, teaching, speaking, mentoring, internships, and podcasting
QA Manager with 11+ years of experience in building QA processes, leading teams, and driving test automation. Passionate about AI in QA, process optimization, and leadership in quality.
Achievements
Certificates
Activity
Earned: Brag Board Template (How to Make Your Work – and Achievements – More Visible)
Earned: 10 Years in Testing: Then and Now
Earned: Five practical ways to use AI as a partner in Quality Engineering
Earned: My 2026 Goals ✨
Earned: AI Testing Isn’t Replacing Deterministic Tests — It’s Replacing Manual Eyeballing
Contributions
Set meaningful goals, communicate quality through risks and real-world consequences, and turn small wins like building a quality narrative into career growth.
The end of 2025 was emotionally one of the hardest periods for me since 2022. Living in a world full of political changes while war is happening in your own country is not easy. Blackouts, massive ...
Use these structured prompting techniques to improve the quality and usefulness of AI output in testing workflows.
Prompt chaining is a technique where the output of one prompt is used as the input for the next prompt in a sequence. This allows complex tasks to be broken into smaller, more manageable steps, enabling deeper analysis, comparison, and refinement across large or complex problem spaces.
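The idea above can be sketched in a few lines of Python. Everything here is illustrative: `ask_model` is a hypothetical placeholder for whatever LLM call you use, stubbed out so the example is self-contained, and the three testing prompts are made up for the sketch.

```python
def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call (assumption: in practice this
    # would be an API client such as an OpenAI or local-model wrapper).
    return f"[model response to: {prompt}]"

def chain(prompts: list[str]) -> str:
    """Run prompts in sequence, feeding each one the previous output."""
    result = ""
    for template in prompts:
        # Each template may reference {previous}, the prior step's output.
        result = ask_model(template.format(previous=result))
    return result

steps = [
    "List the main risk areas for a login feature.",
    "For each risk area below, draft one test case:\n{previous}",
    "Review these test cases for gaps:\n{previous}",
]
final = chain(steps)
```

Each step stays small and reviewable, which is the point of chaining: you can inspect or adjust any intermediate output before it flows into the next prompt.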
Self-critique prompting describes the practice of asking AI to review its own output against specific criteria, such as readability, coverage, or standards, and then improve the result based on those findings. This mirrors human review processes and helps identify gaps, inconsistencies, or improvement opportunities before the output is used in production.
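A minimal self-critique loop might look like the following sketch. Again, `ask_model` is a stand-in stub, and the criteria list is an invented example.

```python
def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return f"[response to: {prompt}]"

def self_critique(task: str, criteria: list[str]) -> str:
    """Draft, critique against explicit criteria, then revise."""
    draft = ask_model(task)
    critique = ask_model(
        "Review the following output against these criteria: "
        + ", ".join(criteria) + "\n\n" + draft
    )
    return ask_model(
        f"Improve the output using this critique:\n{critique}\n\nOriginal:\n{draft}"
    )

improved = self_critique(
    "Write test cases for password reset.", ["coverage", "readability"]
)
```

The second prompt makes the review criteria explicit rather than implicit, which is what keeps the critique step from being a rubber stamp.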
Iterative prompting is an approach where prompts are refined over multiple steps based on previous outputs. Instead of expecting a perfect result from a single prompt, the user reviews, adjusts constraints, and asks follow-up questions to gradually improve accuracy, quality, and relevance.
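Iterative prompting is essentially a loop around a model call, where each pass carries the previous answer plus a human refinement note. A sketch, with `ask_model` stubbed and the refinement notes invented for illustration:

```python
def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return f"[response to: {prompt}]"

def iterate(initial_prompt: str, refinements: list[str]) -> str:
    """Refine an answer over multiple rounds of follow-up instructions."""
    output = ask_model(initial_prompt)
    for note in refinements:
        # Each round sees the previous answer plus one new constraint.
        output = ask_model(f"Previous answer:\n{output}\n\nRefine it: {note}")
    return output

result = iterate(
    "Generate API tests for the login endpoint.",
    ["add negative cases", "use Gherkin format"],
)
```

In real use the refinement notes come from your review of each intermediate output, not from a predefined list; the list here just makes the loop concrete.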
Role-based prompting is a prompting technique where the user explicitly defines the role or perspective the AI should take before generating a response. By assigning a role such as a tester, automation engineer, or reviewer, the AI output becomes more focused, context-aware, and aligned with real-world responsibilities and expectations.
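In its simplest form, role-based prompting just means prefixing the task with an explicit persona (many chat APIs expose this as a system message). A tiny sketch with an invented role and task:

```python
def role_prompt(role: str, task: str) -> str:
    """Prefix a task with an explicit role so the model adopts that perspective."""
    return f"You are {role}. {task}"

prompt = role_prompt(
    "a senior test automation engineer",
    "Review this Selenium test for sources of flakiness.",
)
```

The same role string could instead be passed as the system message of a chat-style API, which is the more common pattern in practice.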
A test log is a record of information generated during test execution. It typically includes details such as test steps, timestamps, system responses, errors, and execution status, and is used to support debugging, failure analysis, and understanding test behaviour.
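A structured test log can be as simple as a list of timestamped entries. The sketch below uses only the Python standard library; the field names and sample steps are illustrative, not a standard schema:

```python
import datetime
import json

def log_step(entries: list, step: str, status: str, response: str) -> None:
    """Append one timestamped execution record to the test log."""
    entries.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step,
        "status": status,
        "response": response,
    })

test_log: list = []
log_step(test_log, "open login page", "PASS", "HTTP 200")
log_step(test_log, "submit invalid password", "FAIL", "expected error banner missing")

# Serialise for attachment to a test report or CI artifact.
report = json.dumps(test_log, indent=2)
```

Keeping entries structured (rather than free-text lines) makes failure analysis and filtering far easier later, e.g. pulling only `FAIL` entries out of a long run.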