TestBash Netherlands 2019

TestBash Netherlands, our only software testing conference in the Netherlands, was back on 23-24 May 2019, and it was our biggest event in the Netherlands to date, with over 200 attendees!

We had a jam-packed two days this year, with nine talks on the TestBash day plus the famous 99-second talks!

We record all our TestBash talks and make them available on The Dojo. Some are free to watch and others require Pro Membership. Here are all the TestBash Netherlands talks. Get stuck in!

Join the discussion about TestBash Netherlands over at The Club.

We would like to thank our TestBash Netherlands 2019 video sponsors Applitools and Valid.

If you would like to attend TestBash or any of our events then please check our latest schedule on our events pages.

Watch all the talks from the event:



Monday, 20th May 2019

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design, and why those assertions. Then there are the tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

To maximise our face-to-face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation to that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable focused automated checks. We’ll do this at many interfaces of the applications.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach it to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and Visual interface
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts
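The "anatomy of an automated check" covered on day two is commonly broken down into arrange, act and assert steps. As a rough illustration only (the course itself works in Java and JavaScript, and this voucher example with its names is entirely invented, not course material), such a check might look like:

```python
def apply_voucher(basket):
    """Toy system under test, so the sketch is runnable on its own."""
    total = sum(item["price"] for item in basket["items"])
    if basket.get("voucher") == "SAVE10":
        total *= 0.9  # 10% discount
    return total

def check_discount_applied():
    # Arrange: put the system into a known state
    basket = {"items": [{"price": 100}], "voucher": "SAVE10"}

    # Act: exercise the behaviour under check
    total = apply_voucher(basket)

    # Assert: compare the outcome against an explicit expectation
    assert total == 90, f"expected 90 after 10% voucher, got {total}"

check_discount_applied()  # a test runner would normally discover and run this
```

The point of dissecting checks this way is that each part carries intent: the arrange step encodes the precondition, the act step names the behaviour under test, and the assertion documents the expectation.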

Day Three
We’ll start day three by concluding our exploration of toolsmithing, creating some new tools for the test app and discussing the potential for tools in the attendees' companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AIT strategy

What You Will Need To Bring

Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; these will be sent to you before the course.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face-to-face class. Full support will then be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based; however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory you can apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JavaScript. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the content of the class.

Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, ranging from broadcast, digital and financial to the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.

Thursday, 23rd May 2019

All Day Sessions | 9:00am - 5:30pm

Is there still a purpose for a Test Strategy? In the fast-paced modern world, do we really need one?

In this tutorial we will guide you through the most important elements of a Test Strategy. And related to this: Test Management. Think of Risk Based Thinking, Stakeholder Management and Test Approach. How will different situations have an effect on the Test Strategy? What is the role of Test Management in this? What is the role of Testers and Test Managers?

During this tutorial we will look at Test Strategy and Test Management in different contexts; one facing innovative impulses but also the more conservative ones. How will they be affected?

And we will find out that, whatever your context, a Test Strategy is still important. Maybe the format in which we describe its elements is different, or the time spent on these elements varies, but we still need them.

This tutorial will be full of practical exercises, and there will be plenty of room for questions and discussions, all to understand the purpose of and need for a relevant Test Strategy.

Greet Burkels

Greet Burkels is an experienced Test Program Manager at Professional Interim & Program Management who has led large programs within various (international) corporations: programs related to mergers and system integration, but also significant organisational changes. She has obtained the Practitioner certificate for Managing Successful Programmes to complement the experience gained over the last twenty-some years.

Iris Pinkster - O'Riordain

Iris Pinkster - O'Riordain is a test advisor at Professional Testing and has experience in testing and test management since 1996. She co-developed Logica's method for structured testing, TestFrame®, their test management approach, and TestGrip, the method on test policy and test organization. She is co-author of the books published on these topics. She often speaks at (inter)national conferences. In 2007 she won the EuroSTAR award for “Best Tutorial”. In 2017 she was Program Chair for EuroSTAR.

Despite test automation having been around for decades now, many teams still seem to have a hard time succeeding with its implementation. Automation projects are abandoned, or even worse, teams spend way too much time trying to keep their automation up and running where abandonment (or at least a trip back to the drawing board) wouldn’t be such a bad idea...

In this workshop, I’ll introduce you to a number of automation projects I have been working on (or that I have been introduced to otherwise) that had, let’s be gentle, plenty of ‘room for improvement’.

You will then be asked to answer a number of questions for each of these projects, such as:

  • Where did it go wrong?
  • What do you think was the root cause of the problem(s)?
  • Was there one single root cause? Or was it a matter of multiple, related issues?
  • How could these problems have been prevented?
  • Could these problems have been prevented?

You will also be asked to score each project (and motivate your score) on several factors, including:

  • Expectations and how they were managed
  • The chosen automation approach
  • The required and actual skill set of the people involved
  • The automation outcomes and their added value

Answering these questions for the given projects will help you to gain insight into common problems with test automation projects, as well as how to mitigate (or better, avoid) them.

You’ll also learn to ask the right questions when it comes to test automation projects, which will help you be more successful in your own test automation journey.

You’ll be investigating the featured projects in several rounds in smaller groups, which will allow you to learn not only from us, but also from your fellow participants. You’ll refine and present your findings and recommendations in several rounds, allowing you to learn as you go.

In the afternoon, after having learned about some of the problems that can cause test automation efforts to fail, you’ll create and pitch proposals on how to do better, sharing your learnings and suggestions with the other attendees and applying them to your own situation.


Takeaways / next steps:
After attending this workshop, you’ll have insight into what makes and breaks test automation projects. You’ll have compiled a list of questions and things to take into account that you can apply to your own current or future test automation efforts, in your specific situation. You’ll walk away with practical ideas and actionable items that you can directly apply to your own automation efforts the moment you return to work.

Target audience:
This workshop is targeted towards testers, developers, team leads and others that rely on test automation (or are looking to do so) as part of their software development and testing efforts and those who want to learn how to recognize and mitigate common pitfalls with regards to test automation implementation.

Bas Dijkstra

I'm an independent test automation consultant with over 10 years of experience helping my clients improve their testing efforts through smart application of tools. A typical work week for me consists of a mixture of coding, consulting, writing and teaching, which is just the way I like it.

I love to unwind by going for a run or sitting down with a book and a glass of wine. I live in Amersfoort, the Netherlands, together with my wife and two sons.

APIs lend themselves extremely well to test automation. There's a simple reason for this: APIs are designed to be consumed by computers, not by people. So where with UI tests you need to rely on testing-specific libraries such as Selenium WebDriver, with API testing you can simply use the same libraries that are used to build applications. This also means that building an API testing framework is significantly easier than building one for UI testing. So why not just do that in a workshop?

You'll be able to experience how easy it is - assuming you have some very basic general programming knowledge and know what an API is. We'll build our framework using Python, pytest and the requests library. More importantly, we'll explore and discuss what your framework needs to be able to do to support the full lifecycle of a test suite, because all too often we see a focus on how easy it is to build tests, with little mention of maintainability or the readability of test results.
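To give a feel for the style of test the workshop starts from: a pytest check against an API is just a function with assertions. This is a sketch only, not workshop material; the endpoint and response fields are hypothetical, and the requests call is shown as a comment so the example stands alone.

```python
def assert_json_has_fields(body, fields):
    """Reusable helper: fail with a readable message if fields are missing.

    Helpers like this are where a 'framework' starts to emerge from
    plain tests - they keep failure output useful as the suite grows.
    """
    missing = [f for f in fields if f not in body]
    assert not missing, f"response body missing fields: {missing}"

def test_get_user():
    # With the requests library this would be something like:
    #   response = requests.get("https://<your-api>/users/1")
    #   assert response.status_code == 200
    #   body = response.json()
    body = {"id": 1, "name": "Ada"}  # stubbed response for illustration
    assert_json_has_fields(body, ["id", "name"])

test_get_user()  # pytest would discover and run test_* functions itself
```

The interesting design questions the abstract hints at start exactly here: where helpers like `assert_json_has_fields` live, how base URLs and test data are managed, and how failures are reported.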


After this workshop you'll have a basic API testing framework you can extend (we will be giving tips for next steps). More importantly, you'll have a better understanding of testing frameworks in general. And that will come in handy when you have to evaluate a testing tool, or when you are discussing a framework with the developers that build it.

Elizabeth Zagroba
Elizabeth Zagroba is a Test Engineer at Mendix in Rotterdam, The Netherlands. She was the keynote speaker at Let’s Test in South Africa in 2018, and she’s spoken at TestBashes and other conferences around North America and Europe. Her article about mind maps became one of the most viewed on Ministry of Testing Dojo in 2017. You can find Elizabeth on the internet on Twitter and Medium, or you can spot her bicycle around town by the Ministry of Testing sticker on the back fender.
Joep Schuurkes
Joep wandered into testing in 2006. Two years later he discovered context-driven and realized that testing is actually not that bad a fit for a philosopher. In 2015 he joined Mendix, one reason being that he wanted to increase his technical skills. Since then he's become more and more involved in automation, while also taking on the roles of scrum master, then team lead, and as of October 2018 tech lead testing. Somewhere along his journey (some say it was 2013) he started speaking at conferences and joined DEWT. Outside of work his thoughts tend to revolve around coffee, fountain pens and bouldering.

Coaching is a powerful tool. It can help us understand how we think; support transitions from current reality to a new reality; even nurture our awareness and emotional intelligence. The very nature of coaching is about problem-solving and helping people manage change. However, in many organisations it still tends to be seen only as a tool for individual personal development.

Every day, we face all kinds of challenges, at all levels of our organisations: individual, team, department, community or company. To make things even more complex, with the number of industries and organisations turning to agile ways of working and aspiring to collaborative, empowered and self-organising teams, we tend to forget the impact and importance of paying attention to how teams work, and more importantly, how teams learn and grow.

Individuals need coaching to help them on their personal journey, and teams are no different. Teams need the space to understand and learn how to adapt and grow in different environments (especially those going through organisational change), aligning their individual skills and experiences while working towards their goals and, by extension, their purpose.

In this workshop our focus will be on experiential coaching techniques and approaches that can help empower teams to adapt to such scenarios. Whether you’re a tester, developer, product owner or in an executive leadership position, this workshop is designed to allow everyone who attends the chance to explore the ideas behind experiential coaching and learning. Throughout the workshop we will dive into a series of exercises and discussions focused on:

  • Understanding why experiential coaching can be an effective approach;
  • Live experiments that explore the nature of experiential coaching focused on team dynamics and how to apply some of these techniques;
  • Incorporating established coaching techniques, such as powerful questions and clean language within a team setting;
  • How to identify and overcome challenging team dynamics and behaviours;

By the end of the session you will have a different outlook on techniques and methods of coaching in team settings, and ways in which you can enable the teams you work in and/or manage to be more effective.

Christina Ohanian

Christina is passionate about building and supporting self-organising teams and individuals from the ground up. Having started her career in software testing, embedding and building communities of practice, she very soon discovered that as much as she loved being a tester, her purpose was destined for a different direction. She is now an Agile Coach and an active member of the Agile community of practice. She loves coaching and learning about people, their passions and what motivates them. She speaks at and runs workshops, and also runs her very own games event, #play14 London. Christina is also a graphics illustrator and enjoys bringing this into the workspace to run meetings and help teams collaborate.

A one-day workshop in which you will learn about the currently most important IT security focus areas from an offensive perspective (pentesting) as well as a defensive one (how we can support security from our agile development practice). You’ll be given a theoretical basis and practice in hands-on exercises, so that you’re prepared for the final assignment of the day: forming a team and cracking the (physical) safe in a capture-the-flag event.



  • Security trends
  • What is penetration testing (offensive security)
  • Secure Applications / Threat Modeling (defensive security)
  • Security in Agile Development (defensive security)
  • Abuser-stories
  • Unit/integration tests & Security
  • Leveraging existing (functional) tests to improve vulnerability scanning


  • Pentesting: Internet reconnaissance
  • Pentesting: infrastructure & web applications
  • Pentesting: web applications
  • Pentesting: OWASP Top 10 awareness check
  • Pentesting: Playtime: "hands-on hacking"

Grand finale:
Final event: "hack the safe", a capture-the-flag event where the attendees will form groups and battle each other, using all the techniques discussed during the day to crack the (physical) safe.

Tom Heintzberger

I'm a test automation consultant, active in the world of automating testing activities since 2005, from a technical as well as a process and people perspective.

I help people understand what issues automation can and cannot solve in their testing endeavors and provide the technological means to automate in the most effective ways. These technological means vary from providing training and coaching on testing and programming, to implementing tools helping with automation, infrastructure and test data in CI/CD processes.

Jeffrey Jansen

Jeffrey Jansen started out around 2007, with a degree in digital forensics and a background as a software developer. After that he started to work as a security specialist and ethical hacker at SecureLabs (formerly known as ISSX).

In 2016 he started his company Access42, an independent cybersecurity company offering consultancy and several managed security services, to make the world (a little bit) more secure. Consultancy, penetration testing, (automated) vulnerability scanning, secure code reviews, training and secure programming are his daily routine.

Jeffrey has a good understanding of people, process and technology, and knows how security fits into the modern and complex IT landscape we see today.

Although processes and tools play an important role in software testing, the most important testing tool is the mind. Like scientists, testers search for new knowledge and share discoveries—hopefully for the betterment of people’s lives. More than sixty years ago, William I.B. Beveridge reframed discussion of scientific research in his classic book The Art of Scientific Investigation. Rather than add to the many texts on the scientific method, he focused on the mind of the scientist. Join Ben Simo as he applies Beveridge’s principles and techniques for scientific investigation to software testing today. Learn to discover and communicate new knowledge that matters; to think—and test—like scientists; and to continually prepare, experiment, exploit chance, imagine productively, apply intuition and reason, tune observation, and overcome resistance. 



See testing in a new light. Take away an appreciation for your most powerful testing tool – your mind.

Ben Simo
Ben Simo, aka QualityFrog, is an amphibious time-traveling context-driven cyborg software investigator. In his nearly 30 years as a professional software tester, Ben has seen technologies and techniques come and go; while one thing remains the same: software is built by, used by, and impacts people. Ben approaches software testing as observational and experimental investigation that enables people to make better decisions that result in better software. Ben currently helps teams build better software at Medidata Solutions. Ben shares wild-caught software problems at IsThereAProblemHere.com.

Friday, 24th May 2019

As software development becomes increasingly automated, and as software becomes increasingly interconnected through the Internet of Things, what will happen to risks associated with human factors? Do they reduce, increase, or transform in ways that we can neither anticipate nor easily comprehend?

We examine the tragic events of flight AF447 to illustrate that as systems become increasingly automated, they reduce risks due to one set of human factors but become vulnerable to an entirely different set of human factors. We show how increased automation of tasks leads to a significant reduction in numerous small errors, but this is coupled with an increased opportunity for creating a catastrophic error.

This increased opportunity for catastrophe typically comes via two routes. Firstly, there is an increase in the extremes of operator load: decreased load during normal conditions, but increased load at times of crisis. Secondly, there is a de-skilling of roles, which leads to three effects:

  • A reduction in the calibre of people required, and hence selected or attracted, into the role
  • Reduced opportunities to practise techniques, as practising itself becomes more dangerous
  • The role becomes a dulled task, with employees often filling their time with non-work activities, such as Internet and mobile surfing, rather than honing job skills

We use lessons and parallels from aviation and nuclear power generation to explore the implications of increased automation through DevOps driven software development, as well as investigating potential mitigation strategies available.


  1. As systems become more automated, there is a significant reduction in small errors.
  2. However, there is an increased opportunity for a catastrophic error.
  3. We should learn from industries, such as aviation and medicine, that have previously wrestled with this issue.
Andrew Brown
Dr Andrew Brown is a principal consultant at SQS. Recently, he has developed an independent line of research into understanding why we humans make the mistakes that lead to software defects and other problems in the software lifecycle. He has 25 years’ experience in the software industry. Previous roles include Head of QA at HMV, Head of QA at a financial software house and a test manager in Japan. He holds a degree in Physics and Maths, an MBA from Warwick Business School and a doctorate from Imperial College.

We as human beings find comfort in control. We crave a sense of balance in the world - if I am competent and capable of completing a task then I should get the result I expect. What goes around comes around, and if I do good things and I am a good person then good things will come to me.

This comfort is challenged by two universal truths - that life isn't fair, and that some things are influenced only by sheer acts of random chance. Because a lack of control makes us uncomfortable, we seek coping mechanisms. We find patterns and correlations that feed our beliefs and biases to wrap ourselves in a nice comfortable blanket of control. We can and will sit and look at large sets of uncorrelated data and find meaning in it, in order to gain control of our situation. Programmers do it, Scrum Masters do it, Product Owners do it. Everyone is going to do it.

As Testers, that’s where we come in…

I'd like to talk about Ellen J. Langer's theory of the Illusion of Control, and how it impacts us in our day-to-day lives as software testers. We are in a unique position in the development team, as we seek confidence in and understanding of the feature being worked on, whilst our stakeholders look to us to base their own confidence and understanding upon. I would like to explore the ways in which it influences our own decisions and behaviours in order to protect ourselves from traps laid by random chance, and also to help spot the occasions where you, the fantastic tester you are, will be involuntarily causing illusions that threaten the rational judgement of your stakeholders.


I want you to take away a set of tools to help you and your colleagues take confidence in those things you can control, and help identify those that you can't. More importantly though, I want you to be able to spot the warning signs that someone is basing their confidence on your ability to control something that you can't - and I want you to know what to do about it.

Drew Pontikis
I'm a Practice Manager based in Cambridgeshire UK, and have been working in software testing for the past eight years. I am passionate about quality and the human aspects of testing - how we feel about the software we build impacts how our customers will feel about it. I work and coach with amazing testers who do wonderful things every day to make sure that our customers will enjoy using our products. In my spare time I'm an organiser for the Ministry of Testing Peterborough meetup to help the community in my area grow, a STEM Ambassador working with schools and children to inspire the next generation of technology lovers, and an avid reader - because it makes me happy.

When we talk about Scrum teams and their interdisciplinary performance, it seems like many of us have this team of experts in mind, who are theoretically able to fulfil every role and could do every job in the team. Unicorns, dragons, chimeras, wizards... we call them.
From our point of view, these team members sound a lot like Nessie or the yeti: they may very well exist; it's just that we have never seen them.

We often have a good mix in our team: some members are more junior than senior and struggle with the software, the framework, themselves, additional tools... others have specialties in one thing and only a shallow understanding of others, and that is ok!

Currently we are working on a test automation project that struggles with multiple steep learning curves, as we tackle understanding the product under test, building up our skillset and learning to function as a team.
Given our peculiar context, we've experimented with applying Vera's deep understanding of the theory of learning, putting it into practice in a team consisting of real people, real challenges and real learning needs, resulting in an honest and valuable experience report.

We’d like to share some insights on how to train people within the project to make the group more homogeneous in terms of knowledge sharing and teaching each other. To that end, we (in the roles of Product Owner and Scrum Master) include learning, and especially learning goals, in the backlog and apply Scrum methods to them.

In our talk we will show how we included learning goals in our backlog and the effects it had on the team. We will discuss how to find the time to learn in projects and to do a proper debriefing on learning goals.



We'd like to show how you can adapt roles to serve your team even more, and how you can include learning in your team process.

Vera Gehlen-Baum

Vera finished her PhD in 'Learning with new media' in 2015 and started as a Requirements Engineer at QualityMinds right after. Her first project was to test a medical software product and to improve the whole testing process, starting from the requirements. In this and other ongoing projects, Vera can pursue several of her passions, combining well-researched learning theories with requirements and testing.

Beren Van Daele

I'm a software tester from Belgium who shapes teams and testers to improve their work and their understanding of testing. I organize BREWT, the Belgian peer conference, and a testing meetup in my hometown of Ghent, and I speak at multiple European conferences. Together with the Ministry of Testing I created TestSphere, a card game that gets people thinking and talking about testing.

My dream is to become a test coach for people that nobody believes in anymore, or who no longer believe in themselves – people who have motivation and will, but no luck. I want to tap into that potential and give them the opportunity to become kick-ass testers.

In this live coding talk I will build an API testing framework in Python. We'll take it step-by-step - starting with the simplest, smallest thing that works. Then we'll identify the biggest impediment of the framework as it is and resolve it. Rinse and repeat a few times and before you know it, you have a powerful API testing framework without even having to write that many lines of code.

During my coding I will be narrating what I'm doing and what I'm thinking. What does this piece of code do? Why do we need it? How does it support my testing? That last question will be the main thread through this talk. How do you build a framework that supports you through all your activities as a tester, and that doesn't get in your way when you want to focus on testing, i.e. on the quality of the product? Because that, to me, is key: it's only at specific times that I'm willing to deal with the tool as a tool; at all other times it should Just Work(TM).
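As a flavour of the "simplest, smallest thing that works" approach the talk describes, here is a minimal sketch of a first iteration of such a framework. All names (`ApiClient`, `check_status`, the stubbed transport) are hypothetical illustrations, not Joep's actual code; a real version would plug in an HTTP library instead of the stub.

```python
# A minimal first step towards an API testing framework:
# a thin client wrapper plus a reusable assertion helper.
# (Hypothetical sketch; names and design are illustrative only.)
import json
from dataclasses import dataclass


@dataclass
class Response:
    """The small slice of an HTTP response the tests care about."""
    status: int
    body: str

    def json(self):
        return json.loads(self.body)


class ApiClient:
    """Wraps a transport so tests read as intent, not HTTP plumbing."""
    def __init__(self, base_url, transport):
        self.base_url = base_url
        self.transport = transport  # callable(method, url) -> Response

    def get(self, path):
        return self.transport("GET", self.base_url + path)


def check_status(response, expected):
    """A named check gives a readable failure message in every test."""
    assert response.status == expected, (
        f"expected status {expected}, got {response.status}")


# Demonstrated with a stubbed transport instead of a real network call:
def fake_transport(method, url):
    return Response(200, '{"id": 1, "name": "widget"}')


client = ApiClient("https://api.example.com", fake_transport)
resp = client.get("/widgets/1")
check_status(resp, 200)
print(resp.json()["name"])
```

The point of the step-by-step approach is visible even at this size: once the plumbing lives in `ApiClient` and the checks have names, each new test is a few lines of intent, and the "biggest impediment" (say, authentication or test data) can be resolved inside the wrapper without touching the tests.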

Joep Schuurkes
Joep wandered into testing in 2006. Two years later he discovered context-driven and realized that testing is actually not that bad a fit for a philosopher. In 2015 he joined Mendix, one reason being that he wanted to increase his technical skills. Since then he's become more and more involved in automation, while also taking on the roles of scrum master, then team lead, and as of October 2018 tech lead testing. Somewhere along his journey (some say it was 2013) he started speaking at conferences and joined DEWT. Outside of work his thoughts tend to revolve around coffee, fountain pens and bouldering.

Even though it has now been 17 years since the agile manifesto was written, many teams still struggle with the transformation. Often the focus is so much on getting the method right (the right ceremonies, the right roles and the right artefacts) that the team forgets the most important thing: WHY they are taking the agile journey.

When the focus is not on building quality in and delivering working software continuously during the iteration – one of the core principles of agile – testing does not become an integrated part of the way of working. These agile transformation failures prevent teams from getting the maximum value out of the agile way of working.

Failing to create a mindset and environment where quality is built in continuously and testing is an integrated part of the development lifecycle is a risk not only to quality in classical terms but also to the core agile principle of delivering working software of value to the customer. Gitte will look at some challenges of an agile transition seen through a tester's eyes and give inspiration for practical initiatives: proactively using the tester's toolbox to gain a better understanding of the users' needs, getting tests integrated into the daily work rather than late in the sprint, and introducing test automation.


  1. Use your testing toolbox to support a better understanding of the users' needs
  2. Ownership of quality – the team, not the tester
  3. Ideas on how to get started with automation when you have a large existing manual regression test suite
Gitte Ottosen
Gitte has 24 years in software testing, working primarily with mission-critical software for defense and healthcare. She is passionate about testing in both traditional and agile contexts, focusing on a value-driven approach to software development. She works as a test manager and test coach, and is a dedicated trainer in agile and testing.

If an approach meant that bugs were caught far earlier, why would you dedicate only one page out of 78 to it in a commentary on best practice? That is what the 2018 State of DevOps report did.

We have always leaned towards bombarding armies of testers with multiple releases in the hope of catching bugs... and you know what? It hasn't always been that successful.

Testers are the eyes of the end users – they get it, they use it as the user would! How about listening to what the testers are saying about the performance of the system you are creating? A different perspective brings a lot more than you would think... it's far more than chasing bugs!

This talk will chart my development as a tester: from "click factories" and the first signs of agile, through attempting to automate everything, to finally understanding what continuous testing really is and where the future of the tester may lie.

From this talk you will:

  • Understand how to test ideas before they’re even implemented

  • Gain insight into ways of helping teams understand what quality means

  • See how testability improves quality

  • Feel motivated to advocate for testing in production

  • Be confident explaining what implementing observability in production means

Jitesh Gosai
Over the course of the last 15 years as a test professional, I've strived to help the teams I've worked with be the best they can be. I've seen first-hand what does and doesn't work in improving the quality of products across the software industry. I now want to take these experiences and help others make their teams the best they can be by improving quality through testability.

As a young woman working as a trainee without an IT background, testing a mainframe at a large bank is hard enough. However, if your colleagues are too busy, the subject matter (Cobol) is tough and when you finally get an assignment it is taken away because they can do it faster, it becomes nearly impossible.

Frustrated because she felt ignored and useless because she could not add the value she wanted to, Anne nearly gave up. She didn't feel like an IT specialist at all and without a proper chance to learn she would never become one. Her luck changed when an external consultant was hired for test automation. Anne was assigned to help him understand the testing process, and thus he became her mentor.

The mentor, on the other hand, gained a lot more from his mentee than he anticipated. While Vincent was continuously challenged to understand his own craft better in order to coach Anne, he also got something completely different: a front seat for the challenges that a young woman in IT can struggle with and the frequently bizarre situations she had to deal with.

To call it an eye-opener would be an understatement, and Vincent slowly understood that Anne needed more than a mentor: she needed an ally. So come and listen to both our stories. Discover how learning, frustration, compassion, failure, success and a lot of patience turned us into an awesome team. We believe that our story as both mentor and mentee can inspire you to either get a mentor or become one.


  • Learn how a toxic environment can impede improvement and why a mentor can help.
  • Discover why it is not only important to ask for help, but also to answer a call for it.
  • Experience the struggle of a young woman in a mainframe environment from her own perspective as well as that of her mentor.
  • Learn how getting a mentor or becoming one might just result in a powerful duo that can initiate change in an entire department.
Anne Colder

Anne Colder is a passionate test automation engineer, who entered the world of testing as a molecular life scientist looking for new challenges. However, testing proved challenging for a young woman in a mainframe environment under the pressure of a DevOps transformation. Anne's sharp and studious mind combined with broad social skills allowed her to face these challenges and to grow from tester to test automation engineer. After several projects she currently works at PGGM, a service provider for pension funds, where she not only tests, but brings business and IT together.

Vincent Wijnen

Vincent Wijnen has over 10 years of experience, growing from tester to test automation consultant through numerous different assignments at various clients of Sogeti Netherlands. His boundless enthusiasm for test automation combined with his unbridled passion for teaching, coaching and organizing social events for both Sogeti and clients has earned him Employee of the Year 2017 within Sogeti.

In recent years we've seen a movement towards test automation replacing manual/exploratory testing, and even replacing dedicated testers altogether. Many programmers don't like to manually test things. But not everything can (or should) be automated. Some things are hard if not impossible to automate; others are simply not worth the effort. I'll illustrate this with a tale of a new system built by my team where we still found bugs in e2e testing even though we had solid unit and integration test coverage.

Great test automation does not absolve you from manual or exploratory testing. We've seen a shift towards more automated testing and less manual testing. I've seen people, companies, and teams move away from having dedicated testers at all, and I've even heard people say you should never do any manual testing at all. I have an opinion on that.

Now, I don't like manual testing, especially manual regression testing; I think it's very boring! So when I got into test automation about five years ago, I was like "automate all the things!" Because automation is fun, you know? I get to write code, listen to Spotify, and get paid to do it; it's awesome! However, I met lots and lots of awesome testers, and I learned that maybe we cannot automate everything. Some things are just too hard or too brittle; it's more work to automate them than it would be to do them manually, especially if you have to maintain those tests (which I have done).

Automation can't tell you what it's like to actually use the thing. To quote my friend Lanette Creamer: "If you don't test it, your users will." And what does that say about how you value your users?

Marit van Dijk
Marit van Dijk has over 15 years of experience in software development in different roles and companies. She loves building awesome software with amazing people, and is an open source contributor to Cucumber. She enjoys learning new things, as well as sharing knowledge on test automation, Cucumber/BDD and software engineering and blogs at https://medium.com/@mlvandijk. Marit is currently employed at bol.com.

Software testing fulfills two often-conflicting purposes: to confirm existing understanding and to seek out new knowledge. Testing includes demonstrating that software can work and seeking out cases in which it may not work as desired.

Searching for new knowledge in software systems is similar to scientific research in the natural world. Both require observation of and experimentation within complex systems in hopes of discovering something new. Both require feeling around in the dark in search of things that may not even be there.

Although feeling around in the dark may appear to be haphazard to outsiders (and even some insiders), it is a cognitively complex activity that requires skill and a prepared mind. Join Ben as he shares principles and techniques from William I.B. Beveridge’s classic book The Art of Scientific Investigation, and applies them to the dark side of testing – the investigative side.

Learn to see testing in a new light – a light in the darkness. Take away an appreciation for your most powerful testing tool: your mind.

Ben Simo
Ben Simo, aka QualityFrog, is an amphibious time-traveling context-driven cyborg software investigator. In his nearly 30 years as a professional software tester, Ben has seen technologies and techniques come and go; while one thing remains the same: software is built by, used by, and impacts people. Ben approaches software testing as observational and experimental investigation that enables people to make better decisions that result in better software. Ben currently helps teams build better software at Medidata Solutions. Ben shares wild-caught software problems at IsThereAProblemHere.com.