Self greening is a term used to describe a situation where AI automatically “fixes” or adjusts tests so that they pass, potentially hiding genuine problems that should have caused them to fail. It’s a side effect of AI-driven test maintenance or “self-healing” systems that focus on achieving green (passing) test results, sometimes at the expense of meaningful accuracy or visibility into real issues.
Here’s what it looks like:
Here’s what it looks like:
- You run an automated accessibility scan.
- The AI finds two issues.
- You tell it to keep fixing until tests pass.
- It changes the test to expect “2 issues found.”
- The test passes.
Now, if a later version adds two new issues, the AI updates the expected count to 4, and the test still passes. You get a clean dashboard. But the product is breaking quietly. Self-greening gives you a false sense of stability. Everything looks green, but the tests are no longer testing anything useful.
While AI can be guided or configured to restrict healing to safe areas (like element identifiers or path changes), out-of-the-box implementations often risk self-greening, trading genuine test insight for the illusion of stability.
You can reduce self-greening risks by:
While AI can be guided or configured to restrict healing to safe areas (like element identifiers or path changes), out-of-the-box implementations often risk self-greening, trading genuine test insight for the illusion of stability.
You can reduce self-greening risks by:
- Limiting where AI can apply self-healing (for example, only to locators or paths).
- Reviewing AI-made changes before merging them.
- Tracking the difference between AI-fixed and human-reviewed test results.
- Treating “all-green” reports with healthy skepticism.