On 31 January 2018, testers joined the fourth Ministry of Testing #TestChat to discuss all things AI Testing. Our questions were designed by Áine McGovern and Heather Reid.
We had four questions during the live chat:
- What crossover is there with current testing and testing AI? What existing testing methods can be used to test AI?
- Part of testing AI involves validating the output: how would you prepare your test data for an AI application? For example, Bill Matthews discussed using the Monte Carlo method to build test data.
- As AI is a continuously learning system, one concern is how to confirm that the test results are reached in the "right" way. For example, are multiple tests needed as proof? Also, how do we deal with negative tests? Will these impact the AI?
- What skills do you believe are necessary to get a job as an AI tester? Is it a case of building on existing knowledge or exploring new emerging technology? Or both?
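The Monte Carlo idea mentioned above can be illustrated with a minimal sketch: sample many random inputs from an assumed valid range, run them through a model under test, and check what fraction of outputs stays within expected bounds. Everything here is hypothetical for illustration; the `predict` function is a stand-in for a real AI model, and the input range and bounds are assumptions, not anything discussed in the chat.

```python
import random

def predict(x):
    # Hypothetical model under test: a simple deterministic scorer
    # standing in for a real AI system.
    return 0.5 + 0.1 * x

def monte_carlo_test_data(n_samples, seed=42):
    """Draw n_samples inputs uniformly from an assumed valid range [-1, 1]."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]

def run_monte_carlo_check(n_samples=10_000):
    """Run the model on sampled inputs and report the fraction of
    outputs that fall inside the expected range [0, 1]."""
    data = monte_carlo_test_data(n_samples)
    in_bounds = sum(1 for x in data if 0.0 <= predict(x) <= 1.0)
    return in_bounds / n_samples

if __name__ == "__main__":
    print(f"fraction of outputs within expected bounds: {run_monte_carlo_check():.3f}")
```

The fixed seed makes a sampled run repeatable, which matters when a flaky result needs to be reproduced; in practice the sampling distribution would be chosen to match real production inputs rather than a uniform range.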
There was a lot of interesting discussion around how best to tackle AI testing, with many great ideas shared and plenty of food for thought. A must read.
If reading through the chat inspires you, head over to The Club and post some of your own questions on the Exploratory Testing thread.