Shadow testing lets engineers run a dress rehearsal for new code in production, without any risk to real users.
You take a new version of a service or feature and run it in parallel with the live version. It receives the same real-world traffic or data, but its results are never shown to users and don't affect the system; the shadow data can even be excluded from analytics or specific reporting.
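As a minimal sketch of how the mirroring works, the handler below forwards each request to both the live and the shadow service, returns only the live response, and logs the shadow one for later comparison. The URLs, payload shape, and helper names here are illustrative assumptions, not a specific framework's API:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

import requests

# Illustrative endpoints; in practice these would come from config or service discovery.
LIVE_URL = "https://live.internal/quote"
SHADOW_URL = "https://shadow.internal/quote"

_executor = ThreadPoolExecutor(max_workers=4)
log = logging.getLogger("shadow")


def _call_shadow(payload: dict) -> None:
    """Fire-and-forget: the shadow response is logged, never returned to the user."""
    try:
        resp = requests.post(SHADOW_URL, json=payload, timeout=2)
        log.info("shadow status=%s body=%s", resp.status_code, resp.text[:200])
    except Exception:
        # A shadow failure must never affect the live path.
        log.exception("shadow call failed")


def handle_request(payload: dict) -> dict:
    """Serve the user from the live service; mirror the same payload to the shadow."""
    _executor.submit(_call_shadow, payload)  # mirrored traffic, result discarded
    live = requests.post(LIVE_URL, json=payload, timeout=5)
    live.raise_for_status()
    return live.json()  # only the live result reaches the user
```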
Why it’s useful for testers:
- You can compare outputs between old and new versions (see the comparison sketch after this list)
- It helps catch unexpected bugs or performance issues before a full rollout
- It’s great for testing machine learning models, API changes, or refactored logic
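The first bullet is often automated by capturing both responses and diffing them offline. A minimal sketch of that comparison follows; the field names and the set of fields to ignore are assumptions for illustration:

```python
import json


def diff_responses(
    live: dict,
    shadow: dict,
    ignore: frozenset[str] = frozenset({"request_id", "timestamp"}),
) -> dict:
    """Return fields whose values differ between the live and shadow responses.

    `ignore` holds fields expected to differ on every call (IDs, timestamps);
    the names here are illustrative.
    """
    mismatches = {}
    for key in live.keys() | shadow.keys():
        if key in ignore:
            continue
        if live.get(key) != shadow.get(key):
            mismatches[key] = {"live": live.get(key), "shadow": shadow.get(key)}
    return mismatches


if __name__ == "__main__":
    live = {"apr": 6.9, "monthly": 312.40, "request_id": "a1"}
    shadow = {"apr": 6.9, "monthly": 311.95, "request_id": "b2"}
    print(json.dumps(diff_responses(live, shadow), indent=2))
    # -> {"monthly": {"live": 312.4, "shadow": 311.95}}
```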
Real-world example: shadow testing the purchase of consumer finance quotes against cars on an eCommerce site. The shadow run exercises payments, checks against specified car data, and interactions with third parties, without impacting financial reporting, user PII, or governance standards. It could be triggered from a GUI launched from a comms tool, from a CLI, or via a feature flag, for example.
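To make the feature-flag option concrete, here's a sketch of gating the shadow quote engine behind a flag. The flag name, the environment-variable mechanism, and both pricing functions are hypothetical stand-ins; a real system would query a feature-flag service:

```python
import logging
import os

log = logging.getLogger("shadow")


def shadow_enabled() -> bool:
    """Hypothetical flag check; swap in your feature-flag service here."""
    return os.environ.get("SHADOW_FINANCE_QUOTES", "off") == "on"


def price_with_live_engine(payload: dict) -> dict:
    """Stand-in for the live quote engine."""
    return {"apr": 6.9, "monthly": 312.40}


def price_with_shadow_engine(payload: dict) -> dict:
    """Stand-in for the new engine under test."""
    return {"apr": 6.9, "monthly": 311.95}


def get_quote(payload: dict) -> dict:
    live_quote = price_with_live_engine(payload)  # always the source of truth
    if shadow_enabled():
        # Shadow result is logged for offline comparison, tagged so it can be
        # excluded from financial reporting, and never shown to the user.
        shadow_quote = price_with_shadow_engine(payload)
        log.info("shadow_quote=%s live_quote=%s", shadow_quote, live_quote)
    return live_quote
```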