A Tester's Guide to Testing AI Applications | Bill Matthews
Self-driving cars, intelligent digital assistants, parole decisions, new treatments and drugs, reviewing legal contracts and documents – AI technology is increasingly finding its way into mainstream software development. If you are not currently working on a product or project that includes AI technology, it is highly likely that you will be within the next five years.
While there are a great many resources covering AI from a development perspective, few deal with testing AI-based software, despite the challenges and risks it poses. For example:
How do we test software…
- When the decision logic is not clearly defined?
- When the results may sometimes be wrong, and that's acceptable?
- That learns and adapts based on its interactions?
- To find the problems that matter when the input domain is complex and massive?
- That may need to collaborate or compete with other AI software?
These are just some of the challenges of testing AI-based software. Traditional test design ideas can help, but they are generally not enough to explore the capabilities of AI-based software.
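One practical response to the "results may sometimes be wrong" challenge is to assert on aggregate behaviour and on properties of the output, rather than on exact individual answers. The sketch below is illustrative only and is not from the masterclass: it uses a hypothetical sentiment classifier stub standing in for a real model, checks that accuracy over a small labelled sample stays above an agreed threshold, and adds a simple metamorphic check (padding the input with whitespace should not flip the prediction).

```python
# Sketch: testing a fallible AI component by asserting on aggregate
# accuracy and metamorphic properties, not exact per-case outputs.
# `classify` is a hypothetical stand-in for the real model under test.

def classify(text: str) -> str:
    """Toy sentiment 'model' standing in for a real AI component."""
    return "positive" if "good" in text.lower() else "negative"

# Labelled evaluation sample (in practice: a curated, versioned dataset).
SAMPLE = [
    ("This product is good", "positive"),
    ("Really good service", "positive"),
    ("Terrible experience", "negative"),
    ("Not what I hoped for", "negative"),
]

def accuracy(cases) -> float:
    hits = sum(1 for text, expected in cases if classify(text) == expected)
    return hits / len(cases)

def test_accuracy_threshold():
    # Some wrong answers are acceptable; falling below the agreed bar is not.
    assert accuracy(SAMPLE) >= 0.75

def test_metamorphic_whitespace():
    # Metamorphic relation: padding with whitespace must not flip the result.
    for text, _ in SAMPLE:
        assert classify(text) == classify("  " + text + "  ")

test_accuracy_threshold()
test_metamorphic_whitespace()
print("aggregate and metamorphic checks passed")
```

The threshold (0.75 here) is an assumption for the sketch; in a real project it would be negotiated with stakeholders and tracked over model versions.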
In this masterclass we:
- Look beyond the AI hype to form a pragmatic and useful perspective of what AI is
- Examine how testing an AI-based application differs from testing more traditional applications
- Explore some strategies to help you tackle testing of AI-based applications
No prior experience of AI is required for this masterclass.
Bill has been a freelance test consultant for over 20 years, working mainly on complex migration and integration projects as a Test Architect, Coach and Technical Test Lead.
He is a regular speaker at testing conferences, mainly on technical topics, and delivers workshops and training courses focusing on automation, performance, reliability and security testing in contexts such as web, APIs and mobile applications.