
Software Testing Live: Episode 07 - Testing transparently

Explore the boundaries of AI-generated software by live-testing a word search tool, using collaborative techniques to find gaps in logic in software that passes all its initial automated tests.

In this session, James Lyndsay and Ben Dowen will explore generated software. At the start of the session, we’ll generate a subject (a tool to build word searches) from its tests. We’ll explore it together to show you what we might do, and we’ll invite you to explore, too.

We’ll use a variety of techniques and tools to explore the software and find out more about it – starting from the knowledge that it passes all its tests. We’ll talk about what we’re doing as we do it, exchange ideas, and extrapolate from our discoveries to refine what we know and to identify new ways to explore.

You’ll have access to the software (and its tests) too. We’ll give you time to explore, and we’ll play (with you) with what you’ve found.

You can see the (current) set of tests at WordGrid_F Tests. We’ll build a new subject on the day just for this session.
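To give a flavour of what “a subject generated from its tests” can look like, here is a minimal sketch of the kind of test a word-search builder might start from. Everything in it – the `build_grid` function, its signature, and the placement rules – is a hypothetical illustration, not the session’s actual WordGrid_F tests.

```python
# Hypothetical sketch: a tiny word-search builder and one of the tests
# that might drive its generation. Names and behaviour are assumptions.
import random
import string


def build_grid(words, size):
    """Place each word horizontally on its own row, pad with random letters."""
    if any(len(w) > size for w in words) or len(words) > size:
        raise ValueError("words must fit in the grid")
    rng = random.Random(0)  # fixed seed so tests are repeatable
    grid = [[rng.choice(string.ascii_uppercase) for _ in range(size)]
            for _ in range(size)]
    for row, word in enumerate(words):
        start = rng.randrange(size - len(word) + 1)
        for i, ch in enumerate(word):
            grid[row][start + i] = ch
    return grid


def test_every_word_appears_in_some_row():
    words = ["CAT", "DOG", "BIRD"]
    grid = build_grid(words, size=6)
    rows = ["".join(r) for r in grid]
    for word in words:
        assert any(word in row for row in rows), f"{word} missing"
```

Note that this test passes while saying nothing about diagonal or vertical placement, overlapping words, or accidental duplicate words formed by the random padding – exactly the kind of gap the session sets out to uncover by exploring beyond the tests.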

18:30 - 21:00 BST
Location: Online