TestBash Essentials Brighton 2019

The inaugural TestBash Essentials software testing conference took place in Brighton on 3rd April 2019 as part of the week-long extravaganza that is TestBash Brighton. The aim of this new conference was to provide an introduction to the world of software testing for those who are newer to testing, and the feedback from our 200+ attendees was that it was just what they needed!

All the talks were recorded and have been made available in this series; some are free to watch and others require Pro Membership... so get stuck in!

Join the discussion about TestBash Essentials over at The Club.

We would like to thank our TestBash Essentials 2019 event sponsors, American Express, TAB, Legal & General and IQVIA for supporting this software testing conference and the software testing community.

If you would like to attend a TestBash software testing conference or any of our events then please check our latest schedule on our events page.

Watch all the talks from the event:



Wednesday, 3rd April 2019

There’s a tool to aid our testing that we all have access to all the time. It’s hidden in plain sight; you just have to know the secret code to get to it. This secret tool? The browser’s developer tools, of course! Learn how the developer tools in your browser can give you insight into what your application is really doing, access to artifacts vital to testing (like cookies and cache), and the ability to speak to your application directly, like never before. Unlock a whole host of information about your application, and release your inner super sleuth tester.

Hilary Weaver-Robb is a software quality architect at Detroit-based Quicken Loans. She is a mentor to her fellow testers, makes friends with developers, and helps teams level-up their quality processes, tools, and techniques. Hilary has always been passionate about improving the relationships between developers and testers, and evangelizes software testing as a rewarding, viable career. She runs the Motor City Software Testers user group, working to build a community of quality advocates. Hilary tweets (a lot) as @g33klady, and you can find tweet-by-tweet recaps of conferences she’s attended, as well as her thoughts and experiences in the testing world, at g33klady.com.

Welcome to TestBash... Don't Panic. Seriously... that comes way later.

I will be your guide in this special place. Testers have come here from across the globe (or flat plane... depending on your physics preferences) to exchange ideas, trade in techniques and generally immerse themselves in the world of software testing today. But what are these terms we hear?

"Test Charters"?



"What do you mean, 'that's checking, not testing'?"

Seriously, where's my Babel fish? How are we supposed to understand all these amazing ideas if we are using terms that nobody has ever used back home?

That's where the Guide comes in. This immersive, interactive and friendly book will help you navigate every new challenge you might encounter while exploring the world of TestBash. From the great debates of automation to the dangers of test cases... from the fall of waterfall to the rise of agilefall... the Guide will help establish a safe baseline of where we are, how we got here, and why it might matter. Also, the Guide comes with a nice head called Martin who talks to you, so you don't have to read it.

Now, sit back and enjoy the conference.


With over fifteen years of specialization in software testing and development, Martin Hynie’s attention has gradually focused on embracing uncertainty and redefining testing as a critical research activity. The greatest gains in quality can be found when we emphasize communication, team development, business alignment and organizational learning.

A self-confessed conference junkie, Martin travels the world incorporating ideas introduced by various sources of inspiration (including Cynefin, complexity theory, context-driven testing, the Satir Model, Pragmatic Marketing, trading zones, agile principles, and progressive movement training) to help teams iteratively learn, to embrace failures as opportunities and to simply enjoy working together.

Are you in the proper mode to find your median, or do you just feel mean? Whether you are trying to judge if your application is ready for the Christmas shopping season or validating a machine learning algorithm, an understanding of statistics can help you be a more effective tester. In this talk, Amber will draw on nearly 20 years of experience testing everything from handwriting recognition algorithms to high-volume game APIs to show how a few basic statistical concepts can move your testing beyond just average.
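To give a flavour of the kind of basic statistics the abstract alludes to, here is a minimal sketch (not taken from the talk itself) using Python's standard statistics module. The response times are made-up numbers with a single slow outlier, chosen to show why the median can be a more robust summary than the mean:

```python
import statistics

# Hypothetical API response times in milliseconds, with one slow outlier.
response_times = [110, 120, 115, 125, 118, 122, 3000]

mean_ms = statistics.mean(response_times)      # dragged upwards by the outlier
median_ms = statistics.median(response_times)  # barely affected by the outlier

print(f"mean:   {mean_ms:.0f} ms")   # 530 ms - looks alarming
print(f"median: {median_ms:.0f} ms") # 120 ms - the typical experience
```

One slow request is enough to make the mean suggest a performance problem that most users would never see, which is exactly why a tester reporting on response times should know which statistic they are quoting.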

Amber Race is a Senior SDET at Big Fish Games. After majoring in Asian Studies, teaching in Japan, and travelling the world, she happened into software testing and has been loving it ever since. She has nearly 20 years of testing experience at Big Fish and Microsoft, doing everything from manual application testing to tools development to writing automation frameworks for web services. Amber has worked on a wide variety of products and written automation in C#, C++, Python, and Java. She currently specialises in test automation and performance testing for high volume back-end services supporting iOS and Android games.

Teams working in an agile fashion will usually bring the tester in as early as possible in the development cycle — often during the planning stages — to find potential problems before they turn into rework. But checking for potential technical problems is only a small part of what the QA team can do at this stage.

The QA team has a wide scope to make the product as good as it can be. This allows the tester to use not just their technical knowledge, but their non-technical knowledge, in their quest for quality. 

In this talk, we will outline the non-technical disciplines a tester can draw on, from historian to lawyer, and even spy. Testers will come away from this talk full of ideas for questions to ask of their product, while other members of the team will come away with a greater understanding of the knowledge a good tester can bring to the table.

Items covered will include accessibility, data protection, misuse of a product, and being culturally sensitive.

Daniel is a software tester who enjoys talking about QA and what's wrong with everything to anyone who will listen. Daniel is active in the PHP and testing communities, and when he isn't testing and breaking your hard work, he writes quality code. Daniel enjoys writing code following best practices, and never stops learning about them. He loves sharing his findings with others in the community. Previous employers include Xing and the BBC, where he was the test lead for the BAFTA-nominated CBeebies Storytime and the tester for the 30th anniversary version of the Hitchhiker's Guide To The Galaxy game.

When I started in my new team, I realised that I was the first tester the team had had. Up until that point, the only testers that were involved with their project were the ones that tested in Production after they were finished building the feature.

The thing is, testability had never been an issue for this team - until I joined. My first few weeks were spent asking for things: asking for test data, asking for more test data, and eventually asking to learn how to create test data myself (from a separate team). I asked to learn how to edit the HTML so I could test different scenarios for which test data was missing. I wanted access to GitHub so I could see the pull requests addressing different stories in JIRA.

I wanted it all! I was a demanding tester.

And after a while, things got easier. I was even able to help developers make testability easier on their local machine, so they could debug faster.

I want to share my story of how I learned to ask for increased testability, and how I learned what exactly testability means, after realising that this was what I had been focusing on the whole time.

I'm Nicola - a Test Consultant with House of Test Sweden. As a tester, I'm constantly looking for ways to learn, grow and adapt. In the past, I have worked on projects in various industries including Education, Retail and e-Commerce. I was the founder of the Stockholm Software Testing Talks meet-up and a co-founder of the WeTest Auckland testing meet-up. I was also a co-instructor for the BBST Foundations course. If you want to read my thoughts on software testing, feel free to check out my blog: http://nickytests.blogspot.co.nz

This session would break down some techniques for approaching research into the domain(s) of the software under test.

This session would seek to give the attendee a way to develop simple personas, understand market pressures, and research competitors.

This could be considered a mini workshop or interactive talk. The goal would be to get the audience to do quick research katas which might be shared later with their development groups.

Katas could be, but aren't limited to:

  • Domain identification
  • User persona/Typical User identification
  • Competitive Analysis: Looking for domain competitors or near-domain competitors. Also includes future feature identification.
  • General Domain research: What information is out there about your company? What information is out there about your competitors? How do you find it?
  • Internal research techniques: Who can you talk to about the product outside of development? What information is available internally? What do sales pitches, informational materials, or help guides tell you or your users? Are they correct?

The goal would be to tie these activities back to testing, as an informed tester is a better tester. Likewise, an informed development group is a better development group.

Melissa Eaden has worked for more than a decade with tech companies such as Security Benefit, HomeAway, ThoughtWorks, and now Unity Technologies. Melissa’s previous career in mass media continues to lend itself to her current career endeavors. She enjoys being EditorBoss for Ministry of Testing, supporting their community mission for software testers globally. She can be found on Twitter and Slack @melthetester.

After countless hours spent testing the GUI and backend, numerous bugs are still encountered in production. What’s missing? Under a crunch of resources and time, API testing is generally skipped, and that’s where a lot of bugs reside. Given the increase in the number of APIs used in development over the years, it’s crystal clear that API testing is the new king!

GUI testing revolves around the user’s experience and the look and feel of the product. Can we justify applying the same approach to testing APIs?

If yes, how? If no, then what crucial scenarios should be covered, what prerequisites are needed, and what belongs on a must-do checklist for API tests? If you are less familiar with APIs and lack answers, don’t worry. You are not alone.

Shivani faced a similar situation when she didn’t even know the “A” of API testing. Based on her experiences at Kreditech and XING, she’ll explore what API testing is all about: its core values, test strategy, common API testing mistakes and how to avoid them. Join the talk to hear her tale of shifting perspective from browsers, buttons and textboxes to requests, responses and endpoints.


  • How to perform API testing while keeping the user’s perspective in mind
  • Tools that can make life easier when it comes to APIs
  • The art of applying UI testing techniques to API testing
  • How to motivate your team to contribute to stable APIs
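As a taste of what a must-do checklist for API tests might cover, here is a minimal, illustrative sketch in Python. It is not from the talk: the response body and field names are hypothetical, and the response is stubbed as plain data rather than fetched over HTTP so the example is self-contained:

```python
import json

# Stub of a hypothetical API response (in a real test this would come
# from an HTTP client call to the endpoint under test).
status_code = 200
headers = {"Content-Type": "application/json"}
body = '{"id": 42, "name": "Marvin", "active": true}'

# Typical checklist items: status, content type, schema shape, field types.
assert status_code == 200
assert headers["Content-Type"].startswith("application/json")

payload = json.loads(body)
assert set(payload) == {"id", "name", "active"}  # no missing or extra fields
assert isinstance(payload["id"], int)
assert isinstance(payload["active"], bool)

print("all checks passed")
```

The point of the sketch is that none of these checks involve a button or a textbox: the "user" of an API is another program, so the test asserts on the contract (status, headers, schema) rather than on look and feel.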
Shivani Gaba
Shivani is a passionate QA Engineer who believes that knowledge sharing boosts everyone involved and increases their confidence. It was the summer of 2013 when Shivani and “testing” first met, and they have been best friends ever since. With rich experience in the testing domain, she currently works as a Senior QA Engineer at XING (the largest business network in German-speaking countries). With hands-on experience in all layers of software testing - UI (frontend), API and backend, functional, non-functional and mobile testing - APIs remain her all-time favourite. As a certified Scrum Master, working in an agile manner is always her approach. She believes in sharing her findings about any “new fancy stuff” she learns. She has worked with multiple international teams and champions the idea of the whole team contributing to quality. She’s always up for a conversation over Twitter, email, LinkedIn, XING or the beer table :)

Ever wanted to test all the things but forgot something along the way? 

Ever wanted to look at the vast testing universe and plan what direction you want to go or what you want to learn next?

Ever wondered how much you know or wanted to measure or evidence this to someone? 

Well, now there’s a one-stop shop to help you remember, plan your learning or show someone what you know. A resource that will not only inform your decisions but hopefully inspire them! A source so awesome it will let you look at your project not just from a test perspective, but from manual, technical and personal perspectives too. A visual heuristic that can help shape your learning, show the value testing brings, and assist in identifying the ‘what and how’ of a testing strategy or approach.

Introducing the Periodic Table of Testing: a visual heuristic of the testing universe covering everything from manual to technical testing, from personal drivers to work methodologies.

In the beginning, I was a business user who did some testing. It intrigued me and became my career. I was OK; I tested what it should do. The more I got into testing, the more I read and learned, and the more sophisticated I thought my testing became. But the more things I uncovered led to even more things I didn’t know I didn’t know! The more I read, the more confused I became about what there was, how it all connected and what paths of learning I could or should follow. When to use what? How much of something should you be aware of, particularly specialisms like security? There seemed to be multiple opinions, often contradictory, about what I should and shouldn’t be learning. So I started making my own ‘list’, which grew and grew. I tried various ways to visualise the information until I tried the table, and that seemed to work well.

As well as highlighting how the table can be used, I’ll share some of my experiences. For example: the first time I had to do some testing on user access, I thought I’d done well, thinking of different scenarios... until there was a production issue! I’d covered user access but not all the roles. From the investigation I found there were individual and service account permissions that had not been taken into consideration. That was when the ‘UA – User Access / Permissions / Roles’ element was born. This led me on to thinking about penetration testing and looking at possible attacks. Had the table been around then, I would have been able to at least ask questions that might have helped identify that our test environment was set up quite differently to live.

The long-term goal of the table is severalfold:

  • To make the viewer aware of the possibilities available to new, current and potential testers
  • To help shape learning paths
  • To help people show their current development
  • As a reminder or prompt when creating tests
  • As a support for test advocacy showing the multi-dimensional range of testing
  • To use as a basis to identify the potential scope of testing in projects 
  • To start conversations and provoke thought

In creating the table and developing it to the point where it adds value to the above, I’ve also been able to categorise its elements into three distinct areas: those that really must be considered for every project (fundamentals, accessibility, data, etc.); those that should be considered depending on the project (operating systems, capacity, etc.); and those that could be considered. As part of the session you will also learn that no technique or ‘element’ lives in isolation. Personas, for example, expose user journeys and tours. Accessibility testing exposes poor design and usability.

I’ll work through the table’s sections, explaining their aim and why they are split that way, and, looking forward, what possible inclusions or changes are under consideration. I’ll show how I think the table can be used to scope projects, direct your learning, help you gather evidence for your end-of-year reviews and even create better job descriptions and advertisements.

I’m not looking to make money by selling this. I don’t even have advertisements on my blog; I just want to share my ideas to help people, and to keep improving the table so it can be of more value to more people.

Call me Ady! With an audit and management systems background, I came to testing in 2002 and found both a flair and a passion for it. With a thirst for learning and continual improvement, I’m always looking for ways to share information, from my blog to a monthly newsletter and global quality initiatives at work. I’m a great believer in community and support the great one in Leeds by helping organise the MoT Meetup. I believe accessibility is fundamental to applications and greatly undervalued, causing many problems for lots of users. I’m trying to spread the message that accessibility isn’t about disability, it’s about inclusion.

For a while now, the motto for agile testers has been: ‘acquire more technical skills so you can support the team better’. However, you almost never hear the motto for developers: ‘improve your testing skills so you can support the team better’. Even in an agile context, you still see testers doing the bulk, if not all, of the non-automated testing.

But wouldn’t it be more effective if the testers teach the whole team how to explore? Can you imagine the power of a whole team being able to test an application effectively?

Exploratory Testing is great to teach others because it draws on everybody’s unique point of view, experience, and biases when interacting with the application under test. As a tester, you can take the lead in shifting your team’s mindset with regards to quality and test responsibility, even if you are a junior tester. You can do more than you think!

In this talk, I will share how I started this challenge with my team by organising Exploratory Testing Sessions. What was successful? What was hard? My goal for this session is to inspire you to try this out with your own team. 

Maaike is an independent agile tester. She loves testing because there are so many ways to add value to a team, be it by thinking critically about the product, working on the team dynamics, working to clarify the specs and testability of the product, getting the whole team to test with Exploratory Testing…the options are almost endless! She likes to help teams who are not sure where or what to test. After reading “Thinking, Fast and Slow” by Daniel Kahneman she developed a special interest in the role of psychology in software development. During ‘analogue time’ Maaike likes to practice yoga, go for a run, check out new local beers, play her clarinet and travel with her boyfriend.