TestBash Germany 2019

TestBash Germany was back for the third time on 12th - 13th September 2019. It was a fun-filled and jam-packed two days with plenty to learn, much to take home and many opportunities to make new tester friends!

We hosted over 200 software testers from across the globe, had 6 amazing workshops, 8 great talks, lots of awesome 99-second talks and plenty of socials. Here's a video of some highlights from the event.

We record all our TestBash talks and make them available on The Dojo. Some are free to watch and others require Pro Membership. Here are all the talks from TestBash Germany 2019, so get stuck in!

Join the discussion about TestBash Germany over at The Club.

We would like to thank our TestBash Germany 2019 event sponsors Maiborn Wolff, QualityMinds, eBay Technology, Packlink and XING for supporting this software testing conference and the software testing community.

If you would like to attend TestBash or any of our events then please check our latest schedule on our events pages.

Watch all the talks from the event:
TestBash Germany 2019



Monday, 9th September 2019

This class will be taught in English

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic experienced view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it's showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests and some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are the tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices. Reversing the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation with that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable focused automated checks. We’ll do this at many interfaces of the applications.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach it to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing, creating some new tools for the test app and discussing the potential for tools in the attendees' companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AIT strategy

What You Will Need To Bring

Please bring a laptop running OS X, Linux or Windows, with all the prerequisites installed; these will be sent to you in advance.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based. However, AiT is a mindset, so we believe you will benefit from attending the class and learning theory you can apply to any product or language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years' testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.
Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I also have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester

Thursday, 12th September 2019

Morning Sessions | 9:00am - 12:30pm

There is so much to learn about testing that sometimes it can be overwhelming to me. Whether it's the latest book, podcast, conference session or workshop, information about testing is expanding at an ever increasing rate. Eventually I find myself bogged down with an excess of information and not enough time to put it into practice.

So I started looking into Test Games. These are games either designed to apply testing concepts or ones that by happy coincidence are perfect examples of test mentality. I've enjoyed them very much in the last two years and am working on creating one of my own this year as a hobby.

I want to walk participants through the brainstorm, design, testing and ultimate development of a test game, and show them how they can take their ideas and create a fun learning tool of their own. In this workshop we'll play a few testing games, run some of our own tests on them, and go through all the steps to rapidly bring their ideas to life as a game.

Participants do not need to bring special equipment but bringing an idea or concept you want to develop into a game is highly recommended.


1) Testing Games are fun and educational!

2) It doesn't take too much effort to create a testing game

3) Using games helps explain difficult concepts to teammates and other members of an organization.


Rick is an avid Test Philosopher, always up for a good debate, discussion or exploration of the many facets of Testing and Software Development in general. He works at Rabobank WRR Finance in the Netherlands and has done development, testing, requirements analysis, Agile scrummastering and test coordination there for 5 years. When not testing, discussing, or listening at conferences and events, Rick enjoys writing his (one day to be published!) novel, sword fighting and cuddling his outrageously adorable cats.

Security testing seems to be viewed as an extremely complicated area where only experts can contribute. In this workshop, we’ll demonstrate that in truth, there’s plenty you can do without being an expert in security testing.

We have worked in a number of teams where security testing was seen as something you buy as a service from an external vendor, and then you try to make sense of the report and hopefully figure out what to change. After reading a number of those reports, we realized that not only did the same issues keep coming back; they were also things we should be able to check for ourselves on a regular basis, instead of paying top dollar for someone else to do it once every year. By introducing just a few new checks into the regular testing of most web applications, we can gain confidence in ourselves and the security of our systems. Bringing in a security expert is of course still valuable, but now we can let them focus on the trickier stuff.

The OWASP Juice Shop is an intentionally insecure web application, designed as an exercise and training environment for quality engineers and developers of all skill levels. In this workshop, we will use it as our lab environment as we go over the current OWASP Top 10 list of web application risks. We’ll guide you through some handy tricks and tools for solving some of the Juice Shop challenges and reflect on how these can be used in your everyday situations. The focus will be on “low-hanging fruit”, i.e. things that can be done quickly and are easily applied regardless of your situation. Hopefully this will leave you with a lot of new ideas, a hunger for learning more and an itch to solve all the challenges of the Juice Shop!
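One example of such "low-hanging fruit" is checking that responses carry common security headers. Below is a minimal, self-contained sketch of that kind of check; the header list is an illustrative subset, not an authoritative baseline, and the sample response is hypothetical.

```python
# Illustrative subset of security headers worth checking for on every response.
EXPECTED_SECURITY_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(response_headers):
    """Return the expected security headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_SECURITY_HEADERS if h.lower() not in present]

# Hypothetical response headers that lack a Content-Security-Policy:
headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
}
print(missing_security_headers(headers))  # ['Content-Security-Policy']
```

In a real suite, the `headers` dict would come from an HTTP client's response, and the check could run as part of regular regression testing.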

The format will be a capture-the-flag event where you will try out the practices as we go through them.


  • Things you can introduce into your regular testing process today
  • Introduction to some security testing tools
  • You don’t have to be an expert to start!
  • Ideas on how to delve deeper once you get comfortable with the basics
Lena Wiberg
Lena has been in the IT-industry since 1999 when she got her first job as a developer. Testing and requirements have always been a part of her job but in 2009 she decided to take the step into testing full-time and she has never looked back since. Lena has worked as a single tester, test lead, test manager, senior test manager, test strategist and manager. She is also involved with the software testing education in Sweden, both as chairman for one of the schools and by mentoring interns to give them the best internship possible. Lena lives in a big house filled with gaming stuff, books, sewing machines and fabric. Gaming is a big thing for everyone in the family and something she loves talking about. Biggest achievement: the Dance Dance Revolution-machine taking up half of her living room-space.

Recently I scheduled a Test Automation knowledge-sharing session with newly joined developers at the company I work for, and while preparing the materials for that session I said to myself: “You must think of something catchy and engaging for this session and onboard those developers with excitement, joy, curiosity and fun”. So I started thinking of different ways to do it, and quickly a flow of ideas burst into my mind, including the card game presented below. A common line heard in the past also came to mind: “We will just start developing some tests with Selenium, how hard can it be?”. We all know that it is not about being easy or hard, but about a bit of structure and, mainly, about considering the context where test automation will be placed. For example: microservices architecture or monolith, strong testing culture or “testing is testers' responsibility”. When it comes to test automation, one size does not fit all, and we will confirm this during the workshop.

Goal: During the workshop participants will design, debate and pitch a test automation strategy using a card game, which will facilitate collaboration and knowledge sharing in a safe environment and, hopefully, the courage to challenge the test automation status quo.

Participants: I recommend the workshop to different roles and different levels of experience, as there will be strong knowledge sharing and collaboration between participants. Test Automation Engineers, QA Engineers, Managers and Developers are all welcome.


Participants will take away:

  • test automation solutions
  • how to approach different test automation challenges
  • how to design or kick-off a test automation strategy
Challenging the status quo! This is how I learn, bring value and annoy others. I started my career in software development 7 years ago, and during this time I have tried different roles and assignments: Quality Assurance Engineer, Software Analyst, Scrum Master and Head of Product Quality, in areas like banking, e-procurement and the trust economy. 2 years ago, I started a beautiful journey with Trustpilot, a very well known and respected startup in the Nordics operating in the trust economy. At Trustpilot, together with my team and all our engineers, we challenge the status quo in software quality approaches. A few keywords that describe me: sports, croissants, cheese, good coffee, competitive, positive.
Afternoon Sessions | 1:30pm - 5:30pm

More and more, testers are using exploratory testing and context driven testing techniques in their organizations. However, as many testers start to embrace these testing methodologies they are uncovering questions in their implementation.

In this half-day workshop, we will explore the various aspects of testing including test planning, test design, test execution, and test reporting from the exploratory testing mindset. We will also cover how to prepare your organization for the shift from more traditional methods to exploratory testing methods. Testers who attend this session will leave understanding how to implement exploratory testing concepts through all the phases of test planning, design, execution and reporting and feel confident returning to their organizations to implement their changes.


Learning Objectives:

  • How the various aspects of the testing cycle (Test Planning, Design, Execution and Reporting) are different in Exploratory Testing than they are in traditional testing.
  • How to implement Exploratory Testing concepts through the various aspects of the testing cycle.
  • How to prepare their organizations for the shift from more traditional testing to Exploratory Testing.
  • How to deal with resistance to Exploratory Testing techniques.
Nancy Kelln
A passionate Context Driven Test Manager with 16 years of diverse IT experience, Nancy enjoys working with teams that are implementing or enhancing their testing practices and provides adaptive testing approaches to exploratory, context driven, and traditional testing teams. She has coached test teams in various environments and facilitated numerous local and international workshops and presentations. From small scale to multi-million dollar projects; Nancy has played many roles within testing including Project Test Manager, Test Manager, Test Lead and Tester. Her most recent work has been exclusively with Context Driven Testing implementations at large scale companies. A co-founder of POST, Nancy is an active member of the Calgary Software Quality Discussion Group, Association for Software Testing, and the Software Test Professionals organization. Nancy and her family live in Airdrie, Alberta, Canada. Connect with Nancy on Twitter @nkelln.

Nowadays we build applications following microservice principles to make them easier to maintain, deploy, test and change. These microservices can easily be deployed on cloud platforms, and multiple microservices together form one application. But is that application resilient? What happens if one of the microservices fails? What happens if one microservice gets slower?

So a resilient service is stable and reliable, has high availability, and does not compromise the integrity of the service or the consistency of its data. But how do we test this?

That is what we will do during this workshop. Together with you, we will test the resilience of a cloud application by creating chaos in the form of failures and disruptions, to see what happens to our application.

During this workshop we will tell you more about:

  • What resilience is and how you test it
  • Microservices and cloud platforms
  • How to perform a load test
  • How to create chaos manually
  • How to create chaos automatically


Main statement: Resilience, Stress & Performance test your cloud environment!

Key learning 1: What is Resilience testing

Key learning 2: Executing your own Performance/Stress Tests

Key learning 3: Executing your own Resilience Tests

Key learning 4: Automated Resilience testing with Chaos Monkey
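The idea of creating chaos manually can be sketched in a few lines: inject random failures into a (simulated) service call, then measure whether the client's retry policy keeps the overall success rate acceptable. This is a self-contained toy, not the workshop's actual setup; the failure rate and retry count are illustrative assumptions.

```python
import random

def flaky_service(fail_rate, rng):
    """Simulate a microservice call that fails with probability fail_rate."""
    if rng.random() < fail_rate:
        raise ConnectionError("injected failure")
    return "ok"

def call_with_retries(fail_rate, retries, rng):
    """The client behaviour a resilience test verifies: retry a few times, then give up."""
    for _ in range(retries + 1):
        try:
            return flaky_service(fail_rate, rng)
        except ConnectionError:
            continue
    return "gave up"

rng = random.Random(42)  # fixed seed so the experiment is repeatable
results = [call_with_retries(fail_rate=0.3, retries=2, rng=rng) for _ in range(1000)]
success_rate = results.count("ok") / len(results)
print(f"success rate with retries: {success_rate:.1%}")
```

With a 30% failure rate and three attempts per request, the expected end-to-end failure probability drops to roughly 0.3³ ≈ 2.7%; tools like Chaos Monkey automate this kind of injection against real infrastructure instead of a simulated function.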

Mark Abrahams
Mark works with Geoffrey as a thought leader for Ordina Auto|Q. Mark has a focus on new technical innovations and test automation.
Geoffrey van der Tas
Geoffrey works as a thought leader for Ordina Auto|Q. Geoffrey focuses more on the agile and exploratory sides of testing. Together, Geoffrey and Mark worked on this workshop to combine both their skill sets.

In this workshop, we will look at how Developer Tools can support your testing. We will look at different browsers and the tools that are available in these browsers.

You will work on hands-on tasks to experience the features offered by Developer Tools and how using them can benefit you as a tester.



  • Understand what Developer Tools offer and how to use them as a tester
  • Learn how Developer Tools can give you information about an application
  • Learn how to make changes in the browser to experiment or fix bugs
  • Uncover risks by using Developer Tools
  • Learn how to simulate mobile devices and networks

Attendees should bring a laptop.

Jan Eumann

Jan works as the Test Engineering Lead/Manager at eBay DE in Berlin, Germany. He has been in the software industry for almost 15 years, working in different roles in the software development process. Jan started as a developer and quickly learned to appreciate skilled testers. In recent years, he has worked as a test engineer looking into exploratory testing and how automation can support testing. He works in an agile team performing testing tasks while also writing production code and educating the team and himself about testing.

All Day Sessions | 9:00am - 5:30pm

We’re seeing it as an initiative to get people talking more, and perhaps go a bit deeper on some topics. Those topics could be anything, even what you may have heard at the conference. By deeper, we mean many things, such as discussions and debates. Plus more hands-on things such as tool demos, coding and some actual testing. It could be anything.

So the TestBash Germany open space will essentially take the form of an unconference. There will be no schedule. Instead we, and I really do mean we, all attendees, will create the schedule in the morning. Everyone will have the ability to propose a session; in doing so, though, you take ownership of facilitating that session. Once everyone has pitched their session ideas, we will bring them all together on a big planner and create our very own conference. Depending on the number of attendees we expect to have 5-6 tracks, so lots of variety.

Open Space is the only process that focuses on expanding time and space for the force of self-organisation to do its thing. Although one can’t predict specific outcomes, it’s always highly productive for whatever issue people want to attend to. Some of the inspiring side effects that are regularly noted are laughter, hard work which feels like play, surprising results and fascinating new questions. - Michael M Pannwitz

It really is a fantastic format: it truly allows you to get answers to the problems you are actually facing. With conference talks you are always trying to align the speaker's views and ideas to your context; with this format, you get to bring your context to the forefront.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years' testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.

Friday, 13th September 2019

Many good product and project teams try to understand different things when they’re asked to build a new feature. But how do we know whether our reasons are actually good enough? Are we tracking users' activity and feedback about the feature/product? Do we have analytics in place for all our features? How are we validating that keys and values are set correctly for various analytics events during feature development? How about automating these processes with day-to-day testing?

In this talk, I will share how we did our analysis in order to get better insights for building the right product and adding more value to our customers. I will share my experiences, challenges, etc.

I will cover how we automated analytics and started getting more value out of them after integrating them with functional tests and CI. I will also talk about a utility (Sentiment Analyser) we developed, which helps us generate meaningful reports from users' feedback on a daily basis and figure out issues we introduced into production with the last release.

These two utilities will be in action :)
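The sentiment-analysis idea can be sketched with a toy classifier: score each piece of feedback against small positive/negative word lists and flag the negative ones for triage. The talk's actual utility is not public, so the word lists, threshold and sample feedback below are purely illustrative assumptions.

```python
# Tiny illustrative word lists; a real analyser would use a proper model or lexicon.
POSITIVE = {"great", "love", "fast", "easy"}
NEGATIVE = {"crash", "slow", "broken", "bug"}

def sentiment(comment):
    """Classify a comment by counting positive vs negative keywords."""
    words = set(comment.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

feedback = [
    "Love the new search, really fast",
    "App started to crash after the last release",
]
print([sentiment(c) for c in feedback])  # ['positive', 'negative']
```

Running such a pass over each day's feedback and surfacing the "negative" bucket is one way to spot issues introduced by the last release, as the talk describes.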


  1. Bringing a testing mindset that values analytics and consumer feedback
  2. Understanding the benefits of analytics and how to test them
  3. Understanding the value of customer feedback, and figuring out bugs and crashes from feedback and fixing them quickly
  4. Improving the product based on consumer feedback
Rohit Singhal
I am a value-oriented and hands-on Software Engineer with more than 5 years of experience in the Quality Assurance domain. I always look forward to bringing up new ideas which can make me and my team more effective and productive. I believe in giving back to the community by sharing and open-sourcing my ideas.

Agile transformation is challenging, especially for testers who are accustomed to working in a silo. As a lone tester on a team, it’s difficult to find the time to learn new methodologies and adapt to be able to incorporate them as part of your process.

While it’s apparent that an agile approach can bring forth several benefits such as the faster delivery of new features, it’s not always obvious what a tester should do to be a part of this change.

Although there’s literature covering the tester’s role in an agile environment, this talk will move beyond the theory and provide a journey of a tester’s experience in making this transformation.

Make no mistake... transitioning to agile is not an easy feat, so this talk will not only cover my personal successes but will also highlight my failures and critical lessons learned.


  • How to overcome the challenges faced during transitioning to agile
  • Tips on how to approach and plan in-sprint test activities
  • How to utilize techniques such as exploratory testing to achieve more in short sprint cycles


I'm a Senior Test Consultant at Scott Logic. I am very passionate about testing and very keen on learning new things so I can use them in testing and deliver better quality. I'm always interested in sharing my testing lessons learned and experiences in the form of stories, and I also like brainstorming and bouncing ideas around in deep-diving conversations. I have gone from being a solo test advocate to building up a team of 4 testers. I have been part of a few transitions: waterfall to agile, agile to DevOps, and from testing on a monolith to a microservices architecture. Apart from work, I'm a super mom of two lovely kids.

I work for a large FinTech company in which the higher-ups base their lives on scorecards and eye-grabbing headlines. "The Travel Team now have 100% Automation and have completed their implementation of DevOps!" or "We are currently at 75% automation but we hope to have 100% across all applications by the end of Q2!" are typical phrases you'll hear, but is this culture counterproductive when it comes to testing?

We all know that automation is a massive part of what we do and for the most part it greatly enhances our test coverage, reduces workload and risk and increases confidence in our software. However, there are times when automation isn't required to achieve the greatest test efficiency and we find ourselves implementing it simply because we've been told to.

For example, your director wants to tell his boss that he has 100% automation coverage on all applications, but what is the point in spending 2 weeks creating a regression suite for an application that is updated twice a year on average? Perhaps, rather than creating a large Selenium script base, a different approach such as exploratory testing might actually be more effective.
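The cost/benefit reasoning in that example can be made concrete with back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions, not figures from the talk:

```python
# Rough break-even calculation: hours invested in automation vs hours saved.
build_cost_hours = 80          # ~2 weeks to build the regression suite (assumed)
maintenance_hours_per_run = 2  # keeping scripts green for each release (assumed)
manual_regression_hours = 6    # one manual/exploratory regression pass (assumed)
runs_per_year = 2              # the app only ships twice a year

saved_per_year = runs_per_year * (manual_regression_hours - maintenance_hours_per_run)
years_to_break_even = build_cost_hours / saved_per_year
print(f"break-even after {years_to_break_even:.0f} years")  # break-even after 10 years
```

Under these assumptions the suite takes a decade to pay for itself, which is the kind of evidence that can win over a metrics-focused leadership group.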

This talk will look at the scenarios in which Automation might not be your best option and how you can win over a leadership group typically focused on metrics and milestones.


  • How to assess if automation is the best answer to your test problems
  • How to push back against leadership when they are obsessed with buzzwords and undesirable metrics
  • Different approaches to the standard regression methods
My name's Jack and I am a 29-year-old from London/Brighton, currently working as a Senior Quality Engineer in financial technologies. I studied Multimedia and Digital Systems at Sussex University, and after graduating in 2011 I began my career in tech whilst continuing my studies into postgrad. It took me a while to find where I wanted to go, but once I fell onto the testing path I knew I had found the right track for me. I love engaging with the test community via any medium, whether it be Twitter, the Software Test Clinic, the MoT or the Testers Network (a large 40+ testers community of practice at work that I run). I am always keen to learn and to develop my understanding of our great craft, so please drop me a tweet anytime!

I have noticed that most companies have no UX designers, and products are just created by developers without any clear idea of how they should look and work. I want to remind you that quality is not only accurate numbers in tables or correctly filled forms. Quality is also the overall look and feel of the product, so QA should also work on making the product look and feel good (or at least better).

I will ask my audience some questions, e.g.:

  • How many of you know what UX is? I will give a definition of UX and explain the difference between UI and UX.
  • How many of you have a UX designer on board?

Firstly, I would like to tell my story of how I used to work with a UX designer and how I handled my work while having no UX designer in the company at all. I will share some tricks on how to convince your developers and team leads to start changing the UX and how I managed to get a UX designer on board, including:

  • Learn about good UI/UX;
  • Start writing UI/UX related bugs;
  • Do some usability labs;
  • Use paint (crop and drag!);
  • Question new features;
  • UI/UX hall of shame/fame;
  • Stop talking about business-to-business applications; we all want to use "Facebook" on a daily basis!
  • Don't get used to bad UX!
  • Grow a little UX designer in every developer!
  • Don't stop, even if you have a UX designer, as you are the one who uses the product every day!

Direct UX impact for your product:

  • For apps, UX usually impacts the rating in the app store;
  • For webpages, UX can impact product sales, the likelihood that users choose your site over a competing product, and more;
  • For desktop applications, it might take a lot of time to train your users if the UX is not intuitive, and users are more likely to choose a competitor's product over yours.

All in all, I am very passionate about UX, so I can talk about it a lot :)


  1. UX is important and can affect your product ratings in app store and overall user satisfaction.
  2. If you have no dedicated UX designer, then QA should also cover this role.
  3. Ways to draw your colleagues into UX, so it becomes not only your wish but the whole team's purpose and target.
  4. What you can do to make your product better on the UX side.

I am an experienced manual tester and test automation specialist, and I have just started a new project as a UX designer at Infare. I have a bachelor's degree in bioinformatics, and after I found a mistake in a university task I was offered a job as a tester at an international company. I have been working in the QA field for the past 9 years and have experience in web, mobile, desktop and server testing. The most important part of the product for me is the overall look and feel, and I would like to share this message with you!

Most people working in software development have already heard of user personas. They might also be familiar with the seven dwarfs from Disney’s classic Snow White fairy tale. But is there some way that we can use personas and the seven dwarfs together to help us build better software? Might it even be possible to use them to understand how inclusive, or exclusive, our products are?

In this talk, Cassandra will provide a fresh take on the seven dwarfs and how to use personas. She’ll use the dwarfs as a starting point to demonstrate how you can create user personas that start with users’ mental states, rather than traditional demographics like age or income. We’ll then work outwards to discuss some situational factors and user goals, inspired by real life experiences. Along the way, we’ll practise empathy by exploring the impact that exclusion in software can have on the real people represented by our personas.

With the help of the dwarfs Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey and Doc, watch this talk to learn:

  • How we can use personas not only to picture who uses our products, but who our products might exclude
  • How we can, and should, think about users' mental states when building software
  • How situational factors mean that a single user can’t be represented by just one persona
Cassandra H. Leung

Cassandra describes herself as a tester and UX enthusiast, and is currently working for MaibornWolff in Germany. With previous roles including product owner, business analyst, recruiter and international account manager, she uses her varied knowledge and experience to help her in testing.

Cassandra often shares thoughts about testing and software production on her blog and on Twitter. She is very passionate about diversity and inclusion, and tries to raise awareness of various social issues relevant to technology, with a new blog series, “Identity Stories”, launching in 2019. Cassandra has spoken at various conferences around the world and hopes to inspire others to share their stories too.

Traditional quantitative software testing metrics can lie. They can give us a false sense of security and allow us to make bad decisions about our software under test. On software projects where the quality of the software built is high, or when we have an infinite amount of time to test, we can survive the distraction of quantitative metrics. However, on projects where the quality of the software built is low or there are time pressures, reporting only quantitative metrics can cause more problems. If you have high-risk systems, with lots of issues and little time to test, you need to consider Qualitative Risk-Based Test Reporting.

In this talk, Nancy Kelln will deliver our test reports from the evil lies quantitative metrics can tell, by introducing Risk-Based Qualitative methods for test documentation. Her approach has been successfully implemented on both large and small scale projects in various organizations. This approach can also be applied to Agile, Traditional and Exploratory Context-Driven test approaches.


Learning Objectives:

  • Why making decisions with only quantitative metrics is dangerous.
  • The power of qualitative metrics in testing.
  • How to determine what qualitative metrics matter.
  • How to incorporate qualitative metrics in your test reporting, including templates and examples.
Nancy Kelln
A passionate Context Driven Test Manager with 16 years of diverse IT experience, Nancy enjoys working with teams that are implementing or enhancing their testing practices and provides adaptive testing approaches to exploratory, context driven, and traditional testing teams. She has coached test teams in various environments and facilitated numerous local and international workshops and presentations. From small scale to multi-million dollar projects; Nancy has played many roles within testing including Project Test Manager, Test Manager, Test Lead and Tester. Her most recent work has been exclusively with Context Driven Testing implementations at large scale companies. A co-founder of POST, Nancy is an active member of the Calgary Software Quality Discussion Group, Association for Software Testing, and the Software Test Professionals organization. Nancy and her family live in Airdrie, Alberta, Canada. Connect with Nancy on Twitter @nkelln.

Did you ever abandon a web page that took too much time to load? You are not alone. One in four people will abandon a page if it takes more than 4 seconds to load. Moreover, each additional 0.1 seconds reduces sales by 1 percent! This will, of course, have a huge impact on your business model.

A common risk mitigation strategy for this problem is to do performance testing of your application. So, what do you envision when you hear the term performance testing? You probably think about throwing hundreds or even thousands of emulated virtual users against your system and ramping them up until it finally crashes. But haven't you felt that there is something wrong, not only in your system but in the process of how you approach the problem? You start by attacking the server with JMeter and end up with some messy graphs that don't give much value to you, your team or your stakeholders. Then you leave the script, declare performance testing done and report the results. Don’t you think that something is missing? Probably the process of doing performance testing was not straightforward, consistent or clear in its objectives? At least that was true for me.

So, I will present to you a 7-step methodology for doing performance testing of your web application, which helped me do it in a coherent and structured way, erasing a lot of potential pain. One of the problems I am still struggling with is the time for implementation: each time I seem to spend significantly more time than was initially estimated.

My methodology starts with identifying the test environment and ends with analysing and reporting your results. In between, there are more activities, like figuring out the goal speed, who should define it, and how to design and implement the tests. At the end, we will visit some biases and myths surrounding performance testing.


  • core activities to implement a performance testing methodology
  • myths surrounding performance testing
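The goal-speed idea from the abstract can be sketched as a tiny check: time a page fetch and compare it against a target. This is an illustrative sketch, not the speaker's 7-step methodology; the URL and the 4-second goal (borrowed from the abandonment figure above) are assumptions.

```python
# Minimal load-time check (illustrative only; URL and goal are assumptions).
import time
import urllib.request

GOAL_SECONDS = 4.0  # abandonment threshold cited in the abstract


def measure_load_time(url: str) -> float:
    """Return the wall-clock seconds needed to fetch the page body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start


def within_goal(elapsed: float, goal: float = GOAL_SECONDS) -> bool:
    """True if the measured load time meets the agreed goal speed."""
    return elapsed <= goal


if __name__ == "__main__":
    elapsed = measure_load_time("https://example.com")
    print(f"Loaded in {elapsed:.2f}s - goal met: {within_goal(elapsed)}")
```

A real performance test would of course ramp up many virtual users with a tool such as JMeter; this single-request timing is only the starting point for agreeing what "fast enough" means.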
A professional software test consultant and test lead/analyst with a talent for finding bugs and more than 8 years’ experience in delivering quality. I have implemented test processes, both manual and automated, on many different types of projects. I manage a test speciality track group of 20+ people for test automation and have led several test automation learning courses (fundamentals of programming, testing, and test automation). I hold various certifications, including ISTQB Advanced Level.

After spending too many years in glass buildings looking at the world through a 15-inch display, getting promoted and being an exemplary employee, but not really enjoying it, I decided I had enough of the office life. My turning point was a Google search for “remote software testing jobs”. Two weeks later I quit my job.

In the years that have followed, I’ve travelled to dozens of places, coded on top of mountains, attended meetings from beaches, run daily stand-ups in bars and held project brainstorming sessions on yachts. I also take part in long-term volunteer projects outside of the software industry.

Rebooting your career and your life in this manner takes courage. It’s not something you do on a whim, but it’s worthwhile if you do it the right way. I’ve seen people succeed and thrive, and others fail miserably.

This session will teach you what you need to know and do before venturing freelance. It will describe what to expect when reality crashes in a few days after quitting your cosy job, and how not to go crazy. It will tell you how online job interviews go with strangers whose faces you never see but who radiate greatness through their voices alone. It will teach you how to bond with team-mates you only meet through a screen, and to build such good relationships that they step up for you when you almost get fired. The talk will also give hints on how to handle legal matters, accounting and productivity tools.


  • you will know how to evaluate yourself if you’re ready to go freelance
  • you will understand the risks and benefits of freelancing
  • you will know what type of remote jobs to be looking for
  • you will learn how to handle the transition to the freelancing life
  • you will learn how to manage your work to be productive while working from anywhere
Ciprian Balea
A Certified Scrum Master, proficient in Scrum and other Agile practices. Automation is what I do best, regardless of the nature of the project: desktop, web or mobile. I'm experienced in multiple programming languages, as well as various CI and automation frameworks. I've managed teams of co-located and distributed testers, and overseen and shaped testing processes for small companies and large corporations. A fitness enthusiast, mountain biker, snowboarder, hiker and certified scuba diver.