After eight and a half years in software development and testing, I’ve seen firsthand how AI has moved from a curiosity to a part of our daily work. I’ve helped drive that adoption myself. But along the way, I started noticing something. While Artificial Intelligence (AI) can accelerate delivery and improve efficiency, it also raises uncomfortable questions about how much thinking we delegate, and what that means for the future of Quality Engineering.
Nowadays, roles seem to prioritise automation and development skills over critical thinking and business knowledge, as though development skills were the more valuable of the two. They aren’t. Although our world is changing and evolving towards an AI era, we should never forget that empathy and human critical thinking are what make the difference. A product developed to meet users’ actual needs and deliver value to them, and one developed to meet the company’s wants or values, can be two very different things.
The products and services we develop aim to serve the majority of humans. AI is great for speeding up repetitive tasks and for rephrasing or improving written content, but the human touch should always be there. Your personality shouldn’t be lost. If it is, we all sound the same. I would no longer be Cristina, but just some generic AI.
In IT development, my experience shows that the problem starts with using AI to generate documentation, then accepting everything it suggests as the single source of truth instead of applying our critical thinking. We should investigate whether the AI’s suggestion for a given topic is accurate, then view it through the lens of our own experience.
The real cost of trusting AI without checks
In my daily work, I usually use AI to polish and improve the documentation I’ve already written. But during a particularly exhausting period of multitasking, I slipped into a different habit. Instead of thinking first and using AI as support, I started asking it to generate documentation from scratch.
I was exhausted from constant context-switching and deadlines, and that’s when I slipped into over-trusting the tool. Until then, AI had been a real ally helping me refine my English, structure documents, and communicate more clearly. But in that moment, I crossed a line. Instead of using AI to support my thoughts, I asked it to replace them. I wrote the following prompt. “Create a Quality Strategy for a telco company based on X technology.”
The output looked polished, structured, and confident. I shared it for review with my Head of Quality Engineering and my Quality Manager, and the cracks showed immediately in their questions:
- “We don’t do that here.”
- “What does this mean in our context?”
- “This doesn’t reflect how our teams work.”
They were right.
I had accepted AI’s response as the single source of truth instead of treating it as a draft to build on. That moment was a wake-up call: their comments were true, and I had simply accepted what the AI returned. I needed a different approach. To fix this, I applied something I already knew from product discovery and personal goal setting: 5W2H.
What is 5W2H?
This management method was developed around the 1950s and famously applied at Toyota, the well-known car manufacturer. It is easy to implement and helps clarify and organise specific actions to achieve an objective or goal.
The 5W2H acronym represents seven fundamental questions: five beginning with the letter W and two beginning with the letter H. Its purpose is to provide a structured and simple approach to problem solving, project management, or action planning. In my case, it served as a guideline for evaluating AI output.
Why 5W2H?
I found this helpful because AI is good at producing answers but weak at understanding context, ownership, constraints, and real-world trade-offs, unless we explicitly provide them. 5W2H forces those questions back onto the table in a structured way. Let’s look at the output before and after applying it.
Before and after the AI output is put through 5W2H
Before (AI-generated, unchallenged)
The prompt: “Create a Quality Strategy for a telco company based on X technology.”
The output contained generic testing principles, drawn from whatever the underlying model had learned from other sources. It differed significantly from our company and the product being developed, because it knew nothing about them. Lacking the right information, its suggestions were overly tool-heavy, recommending tools we didn’t even have, and aligned more closely with previous users’ prompts than with mine. And because I had failed to specify ownership, ownership was left undefined.
There was no timeline and no cost awareness. The AI simply suggested things to get done quickly and easily, without considering the effort and time required. While it sounded “right”, it wasn’t true. English is my second language, and the text seemed polished and well-structured at first, but it didn’t reflect the realities of my company and teams.
Result: It was, in a lot of ways, technically correct, but at the same time, practically useless.
Looking at the framework, the questions it demanded, and how I should proceed, my prompts changed, and so did the output.
After (AI + 5W2H applied)
Here’s how I used 5W2H to reshape the same content. These are the questions I needed to answer before using AI:
| W or H | Question | Goal |
| --- | --- | --- |
| What | What is the real goal? | Document how testing is actually done in the company, not how it “should” be done in theory. |
| Where | Where will this live and evolve? | Confluence, initially restricted; shared only after validation. |
| When | What’s a realistic timeline? | One week for a first draft, five working days for feedback, and clear follow-ups. |
| Who | Who owns what? | Me as author and reviewer; Head of Quality and Quality Manager as validators. |
| Why | Why does this matter? | To create shared understanding and reduce fragmentation across Quality Engineering teams. |
| How | How do we make it real? | Interviews with experienced Quality Engineers, review of scattered existing documentation, and adaptation of AI suggestions, not copying them. |
| How much | What’s the real cost? | 1 hour per day of focused work, plus review effort from senior stakeholders. |
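To show how this checklist can act as a gate, here is a minimal sketch (hypothetical Python, not a tool from my actual workflow; the field names and example answers are illustrative) that lists which 5W2H questions are still unanswered before a prompt is allowed to go out:

```python
# Hypothetical sketch: the 5W2H checklist as a pre-prompt gate.
# Keys and example answers are illustrative, not a real tool.

QUESTIONS = {
    "what": "What is the real goal?",
    "where": "Where will this live and evolve?",
    "when": "What's a realistic timeline?",
    "who": "Who owns what?",
    "why": "Why does this matter?",
    "how": "How do we make it real?",
    "how_much": "What's the real cost?",
}

def missing_answers(answers: dict) -> list:
    """Return the 5W2H questions that still lack a non-empty answer."""
    return [q for key, q in QUESTIONS.items() if not answers.get(key, "").strip()]

# Example: only two of the seven questions answered so far.
answers = {
    "what": "Document how testing is actually done in the company",
    "who": "Me as author; Head of Quality and Quality Manager as validators",
}

gaps = missing_answers(answers)
if gaps:
    print("Answer these before prompting the AI:")
    for question in gaps:
        print(" -", question)
```

The point of the sketch is simply that the gate is mechanical: until every answer is filled in, the prompt does not get written.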
Only after answering these questions did AI become useful again. The revised prompt was:
“Structure a Quality Strategy document that reflects how testing is actually performed in a telco company that is organised by Product and Integration teams, using different test management tools, such as X-ray and Assertthat, and automation using Java, Python, and Playwright. The goal is to create an internal Confluence page to align teams, clarify responsibilities, and document current practices. Please suggest a structure and wording aligned with TMMi level 4 (Measured), drawing on TMMi Professional content and based on the content that I will share… (added the information already collected).”
This time, the output was better, and from there, I could use the AI as:
- A wording assistant: AI helps me structure feedback and offers several options, such as clearer or more professional phrasing, and can adjust the tone if required.
- A structure refiner: AI helps me refine the content so that it is better aligned with the receiver and the intention of the message.
- A clarity booster: AI can summarise complex content, improve readability, and reshape disorganised thoughts into a clear and structured message.
AI was no longer the decision-maker. I created the content based on input from my peers and line managers, and then used AI to rephrase and tidy up.
During the process above, I had made the mistake of taking the AI shortcut, asking: “Based on company x, which works with technology x,y,z, provide a testing strategy to be documented.”
AI returned a lot of content, some of which was accurate and some of which wasn’t. After applying the 5W2H method, I decided to use it as a baseline, adjust it to the company’s testing practices, and incorporate feedback from experienced testers and the existing documentation spread across the teams on Confluence.
When 5W2H is useful with AI prompts and output review
I have found that 5W2H works well when validating AI-generated documentation, because it pushes me to check whether the content reflects how things truly work, not just how they are described in theory.
It also becomes essential when context, ownership, and real-world impact matter. AI can generate structured text, but it rarely captures who is responsible, what constraints exist, or how decisions affect teams in practice. Using 5W2H shifts the exercise from producing polished content to building alignment and shared understanding, which is where quality work actually happens.
In other words: use AI to reduce effort and speed up the process; use 5W2H for the truth.
To sum up
AI is neither an ally nor an enemy to Quality Engineers. It becomes one or the other based on how we ask questions. Quality has never been about producing more content. It’s about producing the right content, for the right context, with human accountability. And that’s something no AI can replace.
What do you think?
AI used as a content assistant can unblock communication that was previously difficult, but how do you make it speed up your work while keeping your own judgment? Share your thoughts, ideas, or experiences.
Reflect on these questions
- Have you ever delivered something AI-generated that sounded right… but wasn’t usable in reality?
- Do you think the market is overvaluing technical and automation skills while undervaluing critical thinking?
- Have you ever had to reverse or amend a decision originally based on an AI suggestion?
- Have you ever used a framework to validate AI-generated content?
- Could 5W2H become a standard practice in development teams when using AI?
- Is AI making Quality Engineers better professionals… or just faster?
- Are companies replacing Quality Engineers with AI, or replacing critical thinking with speed?
- Which would you choose? A highly technical tester with little business vision, or a strategic tester with limited automation skills?
As a final reflection, I’ve realised that AI only helps when you stay engaged in the thinking process. Always question its output, analyse it, and work in smaller steps instead of asking for everything in one go. The goal isn’t to get the answer faster. It’s to make sure the answer actually makes sense.
Thank you for reading. Please leave a comment below.