TestBash Manchester 2019

October 2nd 2019 - October 3rd 2019

TestBash is coming! Our software testing conference is heading back to Manchester, UK in 2019 with a whole 6 days of software testing awesomeness taking place at The Lowry Theatre.

We begin with the three-day 'Automation in Testing' course by Richard Bradshaw and Mark Winteringham starting on Monday 30th September, and making its debut is a new hands-on two-day course by Bill Matthews titled 'Getting started with Machine Learning'.

We then have a workshop day on Wednesday 2nd October with six fantastic half-day workshops to choose from.

Thursday 3rd October is TestBash, our software testing conference: a single-track event consisting of nine talks on all things testing and working in software development.

Then on Friday 4th October, it's the return of Test.bash();, a software testing conference solely focused on automation and technical testing topics.

This 6-day TestBash extravaganza wraps up nicely with an Open Space event on Saturday 5th October. The Open Space gives you a fantastic opportunity to create your own schedule and to talk and learn about topics that interest you.

On all the days, you can expect to find a wonderful community coming together in a friendly, professional and safe environment. We think you'll feel right at home as soon as you arrive!

TestBash Manchester is expected to sell out for the fourth year running, so be sure to get your tickets soon. Conference day tickets start at £349 inc VAT and the workshops at £399 inc VAT.

Training
Monday, 30th September 2019:

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and they show little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore the tests: why those tests exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are the tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation with that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to those needed to build automated checks.
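
To make that concrete, here is a minimal sketch of what a focused, API-level automated check can look like. This is our illustration in Python with the requests library; the course itself works in Java and JavaScript, and the endpoint and payload here are hypothetical:

    import requests

    BASE_URL = "http://localhost:8080"  # hypothetical application under test

    def test_created_room_appears_in_listing():
        # Act: create a room through the API
        created = requests.post(f"{BASE_URL}/rooms", json={"name": "101"})
        assert created.status_code == 201

        # One focused assertion against one observable outcome
        rooms = requests.get(f"{BASE_URL}/rooms").json()
        assert any(room["name"] == "101" for room in rooms["rooms"])

The point of a check like this is its narrow focus: one behaviour, one clear failure meaning.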

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the difference to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in the attendees' companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop running OS X, Linux or Windows, with all the prerequisites installed; the prerequisites will be sent to you in advance.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation lies in risk identification, strategy and test design, and that you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based, however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory to apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JavaScript. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the content of the class.

Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, ranging from broadcast, digital and financial to the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies as well, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years' testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.

AI, and in particular Machine Learning (ML), is hot right now and is disrupting many industries through technologies ranging from simple AI-based APIs and bots to Machine Learning and Deep Neural Network software. In recent years, AI and ML have started to disrupt software development and testing too, providing new ways of working and giving some organisations a competitive advantage.

In this 2-day course we will take you through a range of talks, discussions and hands-on exercises designed to introduce you to the practicalities of teaching machines to learn. We will focus on Deep Neural Networks and introduce participants to the popular, production-ready TensorFlow framework as they build, train and test their own models to solve a range of problems in Image Classification, Text Processing and Data Clustering.

This course is aimed at those who have some familiarity with programming and a desire to understand how they might be able to use Machine Learning (ML) in their context. Aside from the practical development of ML systems this course will cover the limitations of ML, how we approach problems using ML and the challenges around validating and testing ML systems.

This is a highly practical course; by the end of it, you will have a working knowledge of ML and your own set of samples showing how to build Deep Neural Networks to solve problems.

What you will learn on this course

This course provides a hands-on introduction to building, training and testing Machine Learning (ML) Systems; in particular we will focus on using Deep Learning models to solve problems in Image Classification, Text Processing and Data Clustering.

The course will cover:

  • Understanding what we mean by Machine Learning and how machines learn
  • The general approach to solving problems using Machine Learning, including data preparation, model architecture, and training, validating and tuning models
  • An introduction to TensorFlow 2 and Keras, and using these frameworks to build, train and test models
  • Solving practical problems in Image Classification and Text Processing using modern ML architectures
  • Exploring possible uses of Machine Learning in your context
  • Validating and Testing Models

By the end of the course, participants will:

  • Understand the terminology of Machine Learning, enabling them to explore more advanced topics after the course
  • Be able to identify uses of Machine Learning in their current context
  • Have a general approach to solving problems using Deep Learning algorithms, so that they can tackle their own problems
  • Be able to build models using TensorFlow and Keras
  • Understand the technical risks of Machine Learning
  • Think critically about how we test and validate Deep Learning models

What do you need to bring?

Please bring an internet enabled laptop (any Operating System).
We will be using Google Colab to build our Machine Learning Models so you will need a Google Account and a recent Chrome browser installed.

Is this course for you?

I've heard Machine Learning is very Maths heavy and I'm not very good at maths, should I attend?
Absolutely. This course doesn't go into the inner workings of Machine Learning, so no maths is needed to take it. We focus on developing intuition about concepts rather than providing formal mathematical proofs.

I’m an automation engineer, how will this help me?
AI and Machine Learning are increasingly being used to support test activities in various ways, such as scheduling and prioritising tests, predicting the areas most likely to contain issues (based on commits, historical issues and code quality) and triaging issues.

This course will introduce you to the foundational skills that will enable you to build such automated support in your context and to take AI in Testing in new directions.

I’m a manager/lead, can I benefit from this course?
Yes. Even if you are not too interested in the hands-on building, you will benefit from the wider range of topics that introduce the terminology, the development process, how AI/ML can be used, how we validate models and how we test AI/ML systems. With this knowledge you can lead your team into AI/ML adoption and better prepare them for testing new AI/ML systems.

I don't have any programming skills; can I still attend?
We have deliberately kept the level of programming skill needed to a minimum so that participants can focus on building, training and validating Machine Learning models, so the course should be accessible to most people. We do expect some familiarity with the Python programming language, to the level that you can read and understand simple Python code.

We will make some preparatory material available before the course starts to those that need it.

What languages and tools will we be using?
We will be building and training our models in Google Colab (https://colab.research.google.com), so there is nothing to set up on your laptop.

The primary language used will be Python (specifically Python 3), but you don't need to be a Python programmer to benefit from this course; the ability to read and understand Python is about all that is required.

We will be building our models using Google TensorFlow (https://www.tensorflow.org/) and Keras (https://keras.io/); these provide high-level APIs that simplify building Machine Learning systems. We will cover everything you need to know about these during the course.
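
As a small taste of what building a model with these APIs looks like, here is a minimal image-classification sketch. This is our own illustration rather than course material; it uses the standard MNIST digits dataset that ships with Keras:

    import tensorflow as tf
    from tensorflow import keras

    # Load and normalise the MNIST handwritten-digit images
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small feed-forward network: flatten each 28x28 image, one hidden
    # layer, then a 10-way softmax over the digit classes
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)      # train
    model.evaluate(x_test, y_test, verbose=2)  # test on held-out data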

Bill Matthews

Bill Matthews has been a freelance test consultant for over 20 years, working mainly on complex integration and migration projects as a Test Architect and Technical Lead. He champions the use of modern and effective approaches to development and testing.

He is a regular contributor to the testing community at both local and international levels through conference speaking, coaching/mentoring and delivering workshops and training focusing on automation, performance, reliability and security testing and, more recently, artificial intelligence.


Workshops
Wednesday, 2nd October 2019:
Morning Sessions

Software teams often don’t control all the pieces and parts of their development or delivery environments. Especially in large organizations, multiple teams, each with different focuses and goals, are needed to deploy new and updated features. Seemingly unimportant details have the potential to derail a project or a design before you’ve had time to adjust. How can you and your team become aware of external impacts, and find ways to get value out to customers? This workshop will help participants learn to use dependency mapping to identify all the other people, resources, external organizations and more that can impact a software delivery team.

Takeaways

Goals:

  • Learn about dependency maps and the visual and informational value they provide
  • Learn how to collect data about your application’s dependencies, and how to turn that into visual maps to help your team discuss and manage those dependencies
  • Interact with other participants to create maps and see the power of understanding a system at different levels, learning how brainstorming with peers and the power of visual communication can achieve results in a short period of time
Melissa Eaden
Melissa Eaden has worked for more than a decade with tech companies such as Security Benefit, HomeAway, ThoughtWorks, and now Unity Technologies. Melissa’s previous career in mass media continues to lend itself to her current career endeavors. She enjoys being EditorBoss for Ministry of Testing, supporting their community mission for software testers globally. She can be found on Twitter and Slack @melthetester.
End-to-end integration tests play a strong part in testability. Unfortunately, as an application grows, these kinds of tests become a burden: brittleness, slower feedback and an overall poor return on investment in improving quality.
 
Contract testing brings an alternative approach to validating integration points in fast-changing distributed systems. Because contracts don’t need integration environments, they can give very fast feedback, catching breaking API and messaging changes early on.
 
Contracts are also a catalyst for inter-team communications. They help interactions between services become a central attribute in designing solutions, as opposed to an emergency concern when they break at a late integration stage.
 
This workshop covers the core concepts of contract testing and how contracts can play a part in reducing the struggles of integration tests. Attendees will work on practical examples of defining contracts between teams and services, as well as implementing them using the Pact tool-chain.
 
High-level schedule of workshop (half-day)
  • Overview of contract testing concepts and Pact (~45 mins)
  • Hands-on activity: defining contracts between teams (~45 mins)
  • How-to & hands-on activities: using Pact to implement contract generation and validation (~90 mins)
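
For a flavour of what a consumer-side contract test can look like, here is a minimal sketch using the pact-python library. The service names and endpoint are hypothetical, and the workshop's own exercises may differ:

    import atexit
    import requests
    from pact import Consumer, Provider

    # Start a mock provider that records the interaction into a pact file
    pact = Consumer("OrderWebsite").has_pact_with(Provider("OrderService"))
    pact.start_service()
    atexit.register(pact.stop_service)

    expected = {"id": 1, "status": "shipped"}

    (pact
     .given("order 1 exists")
     .upon_receiving("a request for order 1")
     .with_request("get", "/orders/1")
     .will_respond_with(200, body=expected))

    with pact:
        # The consumer code under test calls the mock provider
        response = requests.get(f"{pact.uri}/orders/1")

    assert response.json() == expected

The generated pact file can then be replayed against the real provider to verify that it honours the contract, with no shared integration environment required.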

Takeaways

After this workshop, participants should:
  • Have a good understanding of the consumer-driven contract testing pattern
  • Know when and where to use contract tests instead of integration tests
  • Be able to implement simple Pact tests for both consumers and providers
Pierre Vincent
I am originally from a software development background, and the rise of DevOps drove me to become more involved in how systems actually run in the real world, and in how I could make a difference by helping others care about the applications they release to production. I am currently Infrastructure & Reliability Manager at Poppulo, where I'm responsible for our continuous delivery platform and the operations of our hybrid on-prem/cloud infrastructure.
The world of software development is ever changing. As testers, we need to not just adapt to change but embrace it in order to stay relevant and bring value. Luckily, we have a great tool that will always be useful, regardless of the environment: our mind. However, because we are governed by our biases, the mind can also play tricks on us, and that can be problematic for effective testing. That’s why learning more about how your mind works is important. We can apply what we learn in two ways: to improve ourselves, and to teach and coach others around us. This is a highly interactive workshop with several exercises around cognitive biases, learning techniques, observing and coaching, combined with the theory behind these subjects.

Takeaways

  • Improving yourself
    • Learning to be aware of your cognitive biases and work with them
    • The role of mindfulness and techniques to employ when you feel stressed
    • Observing your personal habits and working with your circle of influence (The 7 Habits of Highly Effective People)
    • Learning how you learn
  • Teaching others & Coaching
    • The power of observing and asking questions
    • Identifying ‘mind busting’ risks in your team
    • Creating an environment around inclusion
       
 
Maaike Brinkhof
Maaike is an independent agile tester. She loves testing because there are so many ways to add value to a team, be it by thinking critically about the product, working on the team dynamics, working to clarify the specs and testability of the product, getting the whole team to test with Exploratory Testing…the options are almost endless! She likes to help teams who are not sure where or what to test. After reading “Thinking, Fast and Slow” by Daniel Kahneman she developed a special interest in the role of psychology in software development. During ‘analogue time’ Maaike likes to practice yoga, go for a run, check out new local beers, play her clarinet and travel with her boyfriend.
Göran Kero

Göran is a context-driven tester and an agile enthusiast who is always curious about learning new things. He is passionate about finding out what customers really need, and about finding ways for customers and developers to have better cooperation and interaction.


Afternoon Sessions

An introduction to how anyone can do some accessibility testing - without any prior knowledge of this area.

Accessibility refers to people with all sorts of disabilities, not just deaf or blind people but people like me too.

My session will cover 10 ways anyone can do some accessibility testing on a Windows or Mac computer. It will include why they are useful, how to set up and use some tools, my mistakes, epiphanies and findings, and why they can make a difference. The tools include Colour Contrast Analyser and Font Face Ninja.

I am also hoping to bust the myths that accessibility testing is too difficult, not relevant, the same as usability testing, or only affects a small number of people/users - by defining accessibility, explaining how people like myself struggle, and showing the simple ways we can test for accessibility, helping to provide concrete evidence to support improvement requests.

Takeaways

  • Confidence in using some of the tools back in your day-to-day lives/jobs
  • An understanding of the issues faced by people with accessibility challenges
  • Encouragement to challenge your ways of working and test in different ways to find accessibility problems
Deborah Lee
I am excited to run this workshop at TestBash Manchester 2019, and I have been loving testing since I started about 8 years ago.

You test for a few hours. You find several bugs.  How would you report these bugs to your developers? Where do you start? What do you focus on? Later that day, the product owner stops by. Do you report these bugs in the same way? And what do you say when a fellow tester asks you how the testing is going?

These people do not care about the same information. And you communicate with more roles than just these. But you want to inform these stakeholders in a way that fits their interests.

In this workshop, finding the bugs won’t be the focus; reporting them will. We’ll start with the fundamentals of a good bug report. We’ll discover which people at your company might be interested in your testing, and how to tailor your reports to them. Starting small with a bug report, we’ll build to a full test report. Finally, with our audience in mind, we’ll circle back to think about how the response to our test reports will impact our test planning in the future.

Takeaways

When you go back to work, you’ll have a better idea of:

  • Who the audience is for your testing at your company
  • What you might tell those people about your testing
  • How your testing might change to provide different information
Joep Schuurkes
Joep wandered into testing in 2006. Two years later he discovered context-driven and realized that testing is actually not that bad a fit for a philosopher. In 2015 he joined Mendix, one reason being that he wanted to increase his technical skills. Since then he's become more and more involved in automation, while also taking on the roles of scrum master, then team lead, and as of October 2018 tech lead testing. Somewhere along his journey (some say it was 2013) he started speaking at conferences and joined DEWT. Outside of work his thoughts tend to revolve around coffee, fountain pens and bouldering.
Elizabeth Zagroba
Elizabeth Zagroba is a Test Engineer at Mendix in Rotterdam, The Netherlands. She was the keynote speaker at Let’s Test in South Africa in 2018, and she’s spoken at TestBashes and other conferences around North America and Europe. Her article about mind maps became one of the most viewed on Ministry of Testing Dojo in 2017. You can find Elizabeth on the internet on Twitter and Medium, or you can spot her bicycle around town by the Ministry of Testing sticker on the back fender.

"There is lots of existing code in the world that lacks unit tests. In this workshop you will learn a technique for quickly adding regression tests, and in what situations you can use it."

First, we will demonstrate some techniques for getting awkward code under test quickly, using a well-known code kata (Gilded Rose). The presenter will show how to create a side-effect-free test function, then use Combination Approvals to get 100% coverage of the code. Then we'll use mutation testing to assess how good our tests are. In the second part of the workshop, we will tackle a different code kata, this time with wider participation.

We will use mob programming, a technique where a group of people collaborate to write code, guided by a facilitator. It helps if the group contains people with diverse backgrounds and viewpoints on development and testing. Everyone in the workshop will see the Combination Approvals technique in action on two different problems, and learn which situations it is best used in.
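
To illustrate the technique, here is a minimal sketch in Python using the ApprovalTests library, with a trivial stand-in for the legacy logic rather than the real Gilded Rose code:

    from approvaltests.combination_approvals import verify_all_combinations

    def update_quality(name, sell_in, quality):
        # Stand-in for awkward legacy logic, refactored into a
        # side-effect-free function that returns new state instead
        # of mutating it
        if name == "Aged Brie":
            quality = min(50, quality + 1)
        else:
            quality = max(0, quality - 1)
        return (name, sell_in - 1, quality)

    def test_update_quality_combinations():
        # Record the output for every combination of inputs in an approved
        # file; any future behaviour change then fails the test, giving
        # quick regression cover over the legacy code
        verify_all_combinations(
            update_quality,
            [
                ["Aged Brie", "Elixir of the Mongoose"],  # item names
                [-1, 0, 5],                               # sell_in values
                [0, 1, 49, 50],                           # quality values
            ],
        )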

Emily Bache
Emily Bache is a Technical Agile Coach with Praqma. She helps teams to improve their coding and testing skills, including Test-Driven Development. Emily lives in Göteborg, Sweden, but is originally from the UK. She is the author of "The Coding Dojo Handbook" and often speaks at international conferences.
Saturday, 5th October 2019:
All-day Sessions

We see the Open Space as an initiative to get people talking more, and perhaps to go a bit deeper on some topics. Those topics could be anything, even what you may have heard at the conference. By deeper, we mean many things: discussions and debates, plus more hands-on activities such as tool demos, coding and some actual testing. It could be anything.

So the TestBash Manchester Open Space will essentially take the form of an unconference. There will be no schedule; instead we, and I really do mean we, all attendees, will create the schedule in the morning. Everyone will have the ability to propose a session; in doing so, though, you take ownership of facilitating that session. Once everyone has pitched their session ideas, we will bring them all together on a big planner and create our very own conference. Depending on the number of attendees we expect to have 5-6 tracks, so lots of variety.

Open Space is the only process that focuses on expanding time and space for the force of self-organisation to do its thing. Although one can’t predict specific outcomes, it’s always highly productive for whatever issue people want to attend to. Some of the inspiring side effects that are regularly noted are laughter, hard work which feels like play, surprising results and fascinating new questions. - Michael M Pannwitz

It really is a fantastic format: it truly allows you to get answers to the problems you are actually facing. Whereas with conference talks you are always trying to align the speaker's views and ideas to your context, with this format you get to bring your context to the forefront.

Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years' testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.
Conference
Thursday, 3rd October 2019

18 months ago I joined a new company as a tester and was part of a new team that was forming. The decision to use Kanban had an enormous influence on how we tested as a team.

My personal story of testing with Kanban includes:

  • No split between development and testing - nothing was considered done until it was tested
  • Everyone became a tester
  • How we supported a quality culture and test coaching
  • How "quality gates" became Kanban stages
  • How production issues were prioritised over feature work
  • How we solved a serious bottleneck in our testing, and how doing so improved overall quality

Takeaways

  • Kanban is more than a visual workflow management tool
  • Kanban can support improved quality
  • Kanban makes everyone a tester
  • Kanban improves collaboration and communication
Conor Fitzgerald
Based in Cork, Ireland. Software tester with over 10 years' experience. I love testing and continuously work on improving as a tester. I've gained experience through a variety of testing roles in industries ranging from embedded systems to financial systems, with companies ranging from startups to large multinationals like Intel. I am the co-founder of Ministry of Testing Cork. My hobbies include kayaking, hill walking, the gym, Toastmasters and yoga.

Testing is an afterthought in far too many teams. By only involving testers after software has already been developed, we lose the opportunity to gain valuable insights into what might go wrong, or to explore areas that developers and product owners have yet to consider.

Moreover, in a Continuous Delivery world, this is simply no longer viable: we need to build quality in upfront and throughout our work. We no longer have time to find bugs later and redevelop.

We need quality beyond testing. We need testers involved throughout the process.

In this talk, I'll show you how to break out of the test column, and provide you with tools to use (like Example Mapping and Feature Toggling) in order to work better with your entire team. Through collaboration, communication, and exploration, I'll show you how to help your team from the very beginning of development, not just at the end.
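To illustrate the feature-toggle idea with a minimal Python sketch (our illustration with hypothetical function names, not the speaker's material): unfinished work can ship behind a guarded branch and stay dark until it is switched on, so it can be deployed and tested long before release.

    import os

    def is_enabled(flag):
        # Stand-in toggle source: an environment variable instead of a
        # real toggle service or configuration store
        return os.environ.get(f"FEATURE_{flag}", "off") == "on"

    def legacy_total(cart):
        return sum(cart)

    def new_pricing_total(cart):
        return round(sum(cart) * 0.95, 2)  # hypothetical new pricing rule

    def checkout_total(cart):
        if is_enabled("NEW_PRICING"):
            return new_pricing_total(cart)  # new path, deployed but dark
        return legacy_total(cart)           # existing behaviour by default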

Takeaways

  • Learn how to change your team's process to get testers involved earlier.
  • Explore Example Mapping as a technique for getting questions of quality asked earlier.
  • Use Feature Toggles to manage testing in a CI/CD world.
  • See how we can go beyond our tools to have meaningful conversations about doing better.
Gary Fleming
Gary is an agile provocateur, software crafter, and lean mercenary. His main hobby is to try and help companies to build better software in better ways. Sometimes this is by helping them with the messy human communication side of agile, and sometimes it's through teaching better software crafting practices - but it's usually at least some of each. Coaching, mentoring, writing, and showing; whatever helps in the context. You'll find him at various local meetups trying to both share what he knows and learn from his peers.

“This is scaring the hell out of me - and that’s exactly why I need to do it!” Can you relate to that? This year, my personal challenge was to become code-confident. As a tester, I felt the need to up my game here. My hypothesis: “I believe that doing many small hands-on coding exercises and challenges, on my own as well as together with other interested people, will result in increased confidence in my programming skills. I’ll know I have succeeded when I have developed a small product from scratch.”

What I did was finally use my GitHub account and create my very first public repository. I called for collaboration and found people willing to review my code and to pair up with me on further challenges. Rinse and repeat! The question to be answered: was jumping in headfirst a viable way to improve my coding skills?

Join me on this walk through my code - along with the stories of my struggles, the solutions found, and the lessons learned.

Takeaways

  • Understand why honing our skills is essential for us as testers
  • Learn how scary challenges can help you grow in a short time when you have people who support you
  • See how pairing speeds up your learning journey and makes more things possible than you would ever have imagined
Elisabeth Hocke
Having graduated in sinology, Lisi fell into agile and testing in 2009 and has been infected with the agile bug ever since. She’s especially passionate about the whole-team approach to testing and quality as well as the continuous learning mindset behind it. Building great products which deliver value together with great people is what motivates her and keeps her going. She received a lot from the community; now she’s giving back by sharing her stories and experience. She tweets as @lisihocke and blogs at www.lisihocke.com. In her free time you can either find her in the gym running after a volleyball, having a good time with her friends or delving into games and stories of any kind.

It is around 16:00 on a Friday afternoon and somebody from the other end of the office says out loud: “Who is working on fixing the API tests?” A silence, as thick as morning fog, covers the entire floor. Nobody has done anything. Slowly, all eyes turn towards the tester. How could she allow this to happen? When the API tests fail, the deployment pipeline is broken. When the pipeline is broken, there can be no deployment to production. The hope of a work-free weekend is slipping away…

This could be the beginning of a fictional, admittedly a bit corny, story. Unfortunately, in my experience, it is not unheard of even in an Agile software development team, where quality is supposed to be a whole-team effort and not just the tester’s obligation.

So, how do we create a clear action plan to avoid such drama scenes?

In the first part of my presentation, we will identify some of the causes of such a situation, for example:

  • the “I thought the tester would inform me if something is wrong” misconception,
  • the “I don’t trust the developers and I should always be responsible for alerting them” mentality, and
  • the thorny problem of not having a unified team perception of the importance of each test.

In the second part, we will explore the solutions that can remove each of the causes and look into ways that they can be applied by an Agile team. Finally, we will discuss how we, as testers, can facilitate and monitor the implementation of the solutions and the benefits of turning the quality of the deployment pipeline indeed into a team task.

Takeaways

  • Address the situation where “everybody is responsible for quality” turns into “nobody is responsible for quality”
  • Identify simple techniques to distribute quality responsibilities within the team
  • Establish a transparent process to keep the test steps of your deployment pipeline in a pristine state
Areti Panou
A mathematician by vocation and a software tester by profession, I am now working as a cloud quality coach, helping development teams within SAP come up with cloud test strategies that best fit their needs. Before that, I was the sole tester in one of the first products at SAP to put a full Continuous Delivery approach into action, and the first in SAP history to deliver daily to its customers.

I wanted to become a better tester, so I asked myself three questions:

  1. Am I prepared to be vulnerable? Yes!
  2. Am I committed to being genuinely curious? Yes!
  3. Am I willing to be empathetic? Yes!

These three skills are the core of “Humble Inquiry”, and by practising them, I have become a better tester.

“Humble Inquiry”, a technique defined by Edgar Schein, is “the fine art of drawing someone out, of asking questions to which you do not know the answer, of building a relationship based on curiosity and interest in the other person”.

In my talk, I’ll share my story of learning this technique, teach people how to use it, and inspire people to use it themselves to become better testers.

More specifically, my talk has three components:

Being Vulnerable

I’ll talk about:

  • Why I can find it so hard to be vulnerable
  • Examples of me failing to achieve my goals because I refused to be vulnerable and ask for help
    • E.g. Refusing other people's help when testing an unfamiliar part of a product, despite not having a clue what was going on
  • How I learned to use the technique of “Here-and-now Humility” (recognising that I am dependent on somebody else at this moment to achieve my goals) to overcome the fear of being vulnerable
  • Worked examples to teach people how to use “Here-and-now Humility” to achieve their goals

Being Curious

I’ll talk about:

  • Why I can find it hard to be genuinely curious
  • Examples of poor exploratory testing I’ve done due to a lack of curiosity
    •  E.g. falling prey to confirmation bias when testing familiar areas of a product
  • How I learned to trigger curiosity by:
    • Learning to truly feel that I don’t know the answer
    • Learning to truly want to know the answer
  • Worked examples to teach people how to trigger curiosity in their exploratory testing

Being Empathetic

I’ll talk about:

  • Why I don’t always respond with empathy when confronted with problems or conflict
  •  Examples of where I’ve been a poor mentor or alienated teammates due to a lack of empathy
    • E.g. upsetting my teammates by trying to force them to use a process that they didn’t want to use
  • How I learned to use “non-violent communication” as a default response to problems
  • Worked examples to teach people how to use “non-violent communication” when dealing with problems

Takeaways

  • Achieve your goals by allowing yourself to be vulnerable
  • Enhance your exploratory testing by learning to cultivate curiosity
  • Build better relationships through practising non-violent communication
Kwesi Peterson
I'm Kwesi, a 24-year-old born and raised in London. Having done a Maths degree at Cambridge, I decided to go into software. I've been working at Metaswitch for just over 2 years now, and I'm currently a Test Lead, testing our Networking Protocol Stack. A typical work week involves a mix of exploratory testing, coaching, managing deliveries, improving processes, and prototyping new ideas. In my spare time, I do a lot of sport - football, boxercise, touch rugby, you name it! I also spend a lot of time reading about rationality, biases and the psychology of people. And I enjoy relaxing with a pint or two with friends.
Being able to observe the state of a running application is key to understanding a system’s behaviour and essential if you want to test and debug problems efficiently. Like a lot of other things, this is harder to do in distributed systems than it is with a monolith.
 
At my company we’ve been running our SaaS product as a distributed system of hundreds of microservices in production for more than 4 years, and we have come to understand how critical this visibility is.
 
If you want to succeed with testing and operating a distributed system, observability should be an integral part of system design. I’ll cover key techniques for building a clearer picture of distributed applications in production, including details on useful health checks and best practices for instrumentation with metrics, logging and tracing.
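
To illustrate one of these techniques: a useful health check reports on the service's own dependencies rather than just confirming the process is up. Here is a minimal sketch (our illustration in Python with Flask, not the speaker's implementation):

    from flask import Flask, jsonify

    app = Flask(__name__)

    def database_reachable():
        # Stand-in for a cheap dependency probe, e.g. a SELECT 1 query
        return True

    @app.route("/health")
    def health():
        checks = {"database": database_reachable()}
        healthy = all(checks.values())
        body = {"status": "ok" if healthy else "degraded", "checks": checks}
        # 200 when healthy, 503 so load balancers and monitors see failures
        return jsonify(body), 200 if healthy else 503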
 

Takeaways

The takeaway of this talk is an understanding of why observability is an important part of system design and how different techniques can improve the operability of complex systems.
 
  • Production and getting comfortable with failure
  • Distributed systems operability challenges
  • Observability techniques: health-checks, metrics, logging, correlation and tracing
 
Pierre Vincent
I am originally from a software development background, and the rise of DevOps drove me to become more involved in how systems actually run in the real world, and in how I could make a difference by helping others care about the applications they release to production. I am currently Infrastructure & Reliability Manager at Poppulo, where I'm responsible for our continuous delivery platform and the operations of our hybrid on-prem/cloud infrastructure.

This talk was inspired by an engineering-related episode of 'The Magic School Bus Rides Again', which I (Yong) watched with my 5-year-old. I realised that if I can explain my job as a Software Test Engineer to a child with this episode, I can do it with anyone!
For the sake of a clear message, we extracted the essence of the episode: how to embrace failure and why it is important in order to succeed.

We constructed the talk, 'The Wheels on the Bus Go Fail, Fail, Fail', using short, relevant clips of the episode, and tested it at an internal test conference in front of a 40-strong test team. It got very positive feedback, and it has been improved each time it has been presented since (at an internal conference for Engineering and Product, at a local meetup, etc.).

The talk is dynamic and light but straight to the point, showing examples of why testing goes hand in hand with engineering and is essential for development, troubleshooting and (as much as possible) future-proofing.

The talk points out how creative testing can lead to unexpected issues, how perfection is not guaranteed by the time and effort invested, and most importantly, how communication and not giving up will lead to eventual success, even if to succeed is to fail.

'The Wheels on the Bus Go Fail, Fail, Fail' shows that failing often, bouncing back and re-testing saves time and funds, and that the value of problem-solving software goes beyond commercial requirements.

Our talk is suitable for beginner and advanced Testers, Developers, UX Designers, Product Owners, Agile Coaches and Scrum Masters - or anyone who is interested.

Our slides (under 20) will have video clips with audio to support our arguments for using failures as ways of learning and discovery, and will have subtitles for added accessibility.

Takeaways

  1. It is okay to fail - it is a valuable experience to build a future on. Failure is not a blocker in development, but a stepping stone to improvement.
  2. Issues can be found later in the development/pipeline - always stay alert for new issues. Think outside the box: validate with issues, not with assumptions.
  3. Owning up to a failure is a team effort.
  4. Failure in communication leads to unnecessary delays and later-rather-than-sooner discoveries.
  5. Manage your own and stakeholders' expectations of fail-proof products.
  6. Replace stagnation in Utopia with continuous retesting, failing and bettering oneself.
Yong Yuen He | Daniel Smart
We are a Software Test Engineer and a Test Engineering Lead at BookingGo, with a focus on Ways of Working, Agile Delivery and Test Advocacy. Yong has been in the tech industry for 2 years, facing new challenges on a daily basis, and Daniel is an experienced tester and manager with a decade of service.

This is the story of how a client lost millions due to a costly oversight that allowed attackers to exploit a devastating vulnerability. Although the client was aware that this weakness existed when the final product was launched, it would have been too expensive to fix and would have required them to miss critical deadlines.

In this talk, we'll discuss how, with version 2, we helped our client by starting with some threat modelling techniques in order to understand which assets an attacker would be after, what weaknesses existed in the design that would allow an attacker to access them, and what protections could be put in place to stop the same level of attack happening again.

Takeaways

  • How we can use threat modelling to think like an attacker
  • How threat modelling can help us secure our applications and how software testers can integrate this technique into the testing process
  • Why thinking about security as early as possible is the safest option
Saskia Coplans

Saskia is a Security Consultant and Director at Digital Interruption. She is a registered Data Protection Officer (DPO) and a privacy specialist with over ten years of experience in information security and governance. Along with standards and policy development, she has developed risk-based defensive security strategies across Europe and Central Asia for governments, NGOs, regulators and the private sector.

Saskia is a founder of the InfoSec Hoppers, a group of women confronting the gender gap in InfoSec by working together to highlight diversity issues in the industry and make conferences, events and meetups more accessible. She sits on the boards of OWASP Manchester and Manchester Grey Hats.

