Rahul Parwal
Test Specialist
I am open to writing, teaching, speaking, mentoring, and CV reviews
Rahul Parwal is a Test Specialist with expertise in testing, automation, and AI in testing. He’s an award-winning tester and international speaker.
Want to know more? Check out testingtitbits.com
Achievements
Certificates
Awarded for: Achieving 5 or more Community Star badges
Activity
Earned 1.0.0 of A tester’s role in continuous quality
Earned 11.9.0 of MoT Software Testing Essentials Certificate
Earned 11.7.0 of MoT Software Testing Essentials Certificate
Earned 2.1.0 of MoT Software Quality Engineering Certificate
Earned Lesson 2 of Advanced prompting for testers
Contributions
Happy New Year, 2026
2025 didn’t happen to me. I showed up and built it.
Spoke across continents. Won Best Tutorial at EuroSTAR 2025. Got a book mention. Became an ambassador. Launched courses. Played where it matte...
Sometimes you drop something on your team’s desk and expect a polite nod.
Then there are moments like this where you bring in The Testing Planet and the entire ifm engineering testing team goes ...
Boost your career in quality engineering with the MoT Software Quality Engineering Certificate.
A framework that helps the AI gather the right context instead of working like an alien (a sample prompt follows this list):
R – Role: Ask the AI system to “act as a role”.
I – Instructions: The specific task or request you give to the AI system.
C – Context: Describe the context about the purpose, feature, application, or system.
E – Example: Provide a sample of your expected output.
Q – Questions: Tell the AI system to ask you clarifying questions before answering or hallucinating.
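To make the framework concrete, here is a minimal sketch of a RICE-Q style prompt written as a Python template. The scenario (an accessibility review of a login page), the role, and the wording are illustrative assumptions, not taken from the original post.

```python
# A minimal, hypothetical RICE-Q prompt template. The scenario is an
# illustrative assumption; adapt each section to your own context.
RICEQ_PROMPT = """\
Role: Act as a senior web accessibility tester.

Instructions: Review the login page described below and list the top
five risks I should test first.

Context: The application is an online banking portal. The login page
was recently redesigned and must meet WCAG 2.1 AA.

Example: Format each item as:
- Risk: <short title>
  Why it matters: <one sentence>
  How to test: <one sentence>

Questions: Before answering, ask me any clarifying questions you need.
Do not guess or invent missing details.
"""

if __name__ == "__main__":
    # Send RICEQ_PROMPT to whichever AI system you use; it is printed
    # here so the template can be inspected on its own.
    print(RICEQ_PROMPT)
```

Note how the Q section comes last: it gives the AI system explicit permission to pause and ask, rather than filling gaps with guesses.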
After so much waiting and anticipation, I am returning to my career after a full-time parenting break.
These two years were full of mixed feelings and self-doubt, but I owe this community to back...
Self-greening is a term used to describe a situation where AI automatically “fixes” or adjusts tests so that they pass, potentially hiding genuine problems that should have caused them to fail. It’s a side effect of AI-driven test maintenance or “self-healing” systems that focus on achieving green (passing) test results, sometimes at the expense of meaningful accuracy or visibility into real issues.

Here’s what it looks like:
You run an automated accessibility scan.
The AI finds two issues.
You tell it to keep fixing until tests pass.
It changes the test to expect “2 issues found.”
The test passes.
Now, if a later version adds two new issues, the AI updates the expected count to 4, and the test still passes. You get a clean dashboard, but the product is breaking quietly.

Self-greening gives you a false sense of stability. Everything looks green, but the tests are no longer testing anything useful. While AI can be guided or configured to restrict healing to safe areas (like element identifiers or path changes), out-of-the-box implementations often risk self-greening, trading genuine test insight for the illusion of stability.

You can reduce self-greening risks by (see the code sketch after this list):
Limiting where AI can apply self-healing (for example, only to locators or paths).
Reviewing AI-made changes before merging them.
Tracking the difference between AI-fixed and human-reviewed test results.
Treating “all-green” reports with healthy skepticism.
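Here is a short, hypothetical sketch of the scenario in test code. run_accessibility_scan() is a stand-in for a real scanner, and the “healed” expected count mirrors the example above; none of this comes from a specific tool.

```python
def run_accessibility_scan():
    # Stand-in for a real automated scanner. Pretend the latest build
    # now has four issues instead of the original two.
    return [
        "missing alt text",
        "low contrast",
        "unlabelled form field",
        "broken focus order",
    ]


def test_accessibility_self_greened():
    # Self-greened check: the expected count was "healed" from 2 to 4,
    # so this still passes while the product quietly gets worse.
    issues = run_accessibility_scan()
    assert len(issues) == 4


def test_accessibility_zero_issues():
    # Honest check: fails whenever any issue exists, so a healing step
    # cannot absorb a regression by updating a number.
    issues = run_accessibility_scan()
    assert issues == [], f"Accessibility issues found: {issues}"
```

Run with pytest: the self-greened test passes and keeps the dashboard green, while the zero-issues test fails loudly, which is exactly the signal the “healed” version hides.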