TestBash Brighton 2020

TestBash Brighton, the home of TestBash, is back for its 9th year in The Clarendon Centre.

TestBash Brighton 2020 is set to be our biggest software testing conference to date, with a jam-packed five days. We’re opening the week with three 3-day courses by Richard Bradshaw and Mark Winteringham, Janet Gregory, and Dan Ashby and Karo Stoltzenburg. We follow that with 10 half-day workshops and a full-day Essentials@TestBash workshop. We’ll conclude the week with our beloved single-track conference day, TestBash, where we’ll have ten thought-provoking talks and our new Community Space.

Pro Ministry of Testing members get £50 off the workshops and conference day! Not Pro? Sign up today to not only save on your TestBash tickets but also get access to every past TestBash talk, online courses and a whole host more.

Quick Look

Speakers

Lena Wiberg
Consultant manager
Elizabeth Zagroba
Test Engineer
Elizabeth Fiennes
Lead Test Engineer
Lindsay Strydom
Senior QA Engineer
Gareth Waterhouse
Lead QA Engineer
Konrad Marszałek
Senior Quality Engineer
Gopinath Langote
Software Engineer
Mark Winteringham
DojoBoss
Richard Bradshaw
BossBoss
Janet Gregory
Agile Testing Coach
Huib Schoots
Tester, Coach, Consultant and Trainer
Ana Maria del Carmen Garcia Oterino
Senior SDET
Ashley Hunsberger
Director of Release Engineering
Jitesh Gosai
Principal Tester
Dan Ashby
Head of Quality Engineering
Karo Stoltzenburg
Senior Test Engineer
Essentials Hosts
Meaghan Thompson
QA Manager
Maryam Umar
Head of Quality
Jesper Ottosen
Senior Test Manager
Kevin Harris
Test Manager
João Rosa Proença
Quality Owner
Shey Crompton
Managing Director
Pradeep Soundararajan
CEO of Moolya Testing
Michaela Greiler
Software Engineer

Sponsors

Schedule

Monday, 23rd March 2020

Training

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view on how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are the tools; we’ll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face-to-face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company’s usage of automation to that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the difference to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts
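
The “anatomy of an automated check” mentioned above generally breaks down into setup, action, assertion and teardown. Here is a minimal sketch of that anatomy in Python; the basket function and its values are hypothetical stand-ins (the course itself works against its own sample product, in Java and JS):

```python
# A minimal sketch of the anatomy of an automated check.
# The system under test and data here are invented for illustration.

def get_basket_total(basket):
    """Stand-in for the system under test: sums price * quantity."""
    return sum(item["price"] * item["qty"] for item in basket)

def test_basket_total():
    # 1. Setup: put the system into a known state with known data.
    basket = [{"price": 250, "qty": 2}, {"price": 100, "qty": 1}]
    # 2. Action: exercise one focused behaviour.
    total = get_basket_total(basket)
    # 3. Assertion: one clear, focused check of the outcome.
    assert total == 600
    # 4. Teardown: restore state (nothing to clean up in this sketch).

test_basket_total()
```

Keeping each check focused on a single behaviour, as in the sketch, is what makes a failure easy to diagnose.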

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in attendees’ companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop running OS X, Linux or Windows, with all the prerequisites installed. The list of prerequisites will be sent to you.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation lies in risk identification, strategy and test design, and that you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face-to-face class. Full support will also be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based, however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory to apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes. One of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.

Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies too, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.

This unique and practical course was developed by Lisa Crispin and Janet Gregory based on their popular books, Agile Testing: A Practical Guide for Testers and Agile Teams and More Agile Testing: Learning Journeys for the Whole Team. Participants learn ways the whole delivery team can collaborate to plan and execute the many different testing activities needed to build quality into their product. Through lecture, discussion and hands-on exercises, the course explains essential principles and practices, including:

  • How testing fits into the short iterations and frequent deliveries of agile development, including adoption of continuous delivery
  • Contributions testers can make to become valued agile team members, and help with adopting approaches such as DevOps
  • Common cultural and logistical obstacles in transitioning to an agile development process
  • Values and principles that help team members adopt an agile testing mindset
  • How the whole team contributes to the success of testing practices, such as acceptance-test driven development (ATDD), test automation, and exploratory testing

The course is filled with real-life examples of how teams collaborate to deliver high-value, high-quality software. A simulation wraps up the whole course giving participants an opportunity to put all the puzzle pieces together, and practice what they have learned. Participants leave with practical skills and techniques that they can start using right away.

Who Should Take This Course?

The course is ideal for testers, developers, iteration facilitators, team leads, managers, and anyone who wants to learn what testing means on an agile team. Everyone will benefit from understanding their contribution and the interaction with testers on the team. Basic agile knowledge is recommended so that participants can actively contribute with questions and shared experiences.

What You Will Learn On This Course

Course Outline
Each module includes small group exercises and discussions in addition to the major exercises listed.

Day 1

Agile: What Is It and How Does Testing Fit In? – Module 1

  • Overview of agile terminology and principles
  • Introduce agile testing activities and approach

Adapting to Agile - Module 2

  • The whole-team approach
    • Roles and responsibilities; collaboration
  • Overcoming common obstacles
    • Cultural Issues; mini-waterfalls
  • Transitioning typical processes
    • Defect tracking, quality models, traceability

Making Test Automation Work – Module 3

  • Using Automation So Testing “Keeps Up”
    • Value of automation
    • Barriers to Automation
  • Developing an Agile Automation Strategy
    • Using the Test Automation Pyramid for maximum benefit
    • What should and shouldn’t be automated
    • A bit about test design
  • Applying agile principles
  • Evaluating tools and managing automated tests

Day 2

Testing Activities at the Release and Feature Level – Module 4

  • Levels of precision / dependencies / multiple levels
  • Slicing stories, with thin slice / steel thread approach
  • How testers contribute to sizing your stories
  • Alternatives to large test plans; release-level test matrix
  • Discussions on test results, metrics, coverage

Testing Approaches for Agile Testing - Module 5

  • Guiding development with tests (ATDD)
  • Using the Agile Testing Quadrants - vocabulary, benefits
  • Exploratory Testing
  • Testing for Quality Attributes

Day 3

Testing Activities during the Iteration – Module 6

  • Story Readiness
  • Iteration Planning - roles, creating tasks
  • During the Iteration - Coding & Testing
    • Collaboration
    • Expanding tests, exploratory testing
    • Customer acceptance, regression tests
  • Wrap-up of the iteration – demo, retrospectives

Iteration Simulation

  • Includes iteration planning, code and test, automation

The End Game - Module 7

  • What is the end game, and what is required for successful delivery

Key Success Factors & Wrap-Up - Module 8

  • Seven Factors for Agile Testing Success, and Confidence Building Practices

Wrap-Up

  • Discussion returning to the original problems participants are experiencing

Janet Gregory

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. She is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), the Live Lessons Agile Testing Essentials video course, and “Agile Testing for the Whole Team” 3-day training course.

Janet specializes in showing agile teams how testing practices are necessary to develop good quality products. She works with teams to transition to agile development and teaches agile testing courses worldwide. She contributes articles to publications and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca or www.agiletester.ca.


A three-day intensive training course designed to help you develop essential knowledge and skills in software testing and grow your testing career.

Why You Should Take This Course

Working in the world of software testing can be challenging. There are so many variables within testing to consider. There are lots of different perspectives on what testing is, how to do testing, and how to define and measure quality. Not to mention working out what direction to take your testing career and what areas to specialise in.

The Testing Essentials Intensive training course has been designed to help you overcome these challenges, give you answers to your questions and grow an amazing career in testing. Taking this course will help you develop a concrete understanding of what testing is and how it works. Our goal is to help you develop your knowledge in the most important areas of testing and gain essential skills that you can start using straight away.

Testing Essentials Intensive is delivered by skilled trainers who are accomplished experts in testing and quality. This course is taught using modern blended learning methods that allow you to build core software testing competencies through active, collaborative sessions with practical hands-on experiences.

What You’ll Learn on This Course

Online

Bespoke online content will be made available to you prior to the start of the course that will give you underpinning knowledge ready to make the most of the 3-day course and hit the ground running!

After completing the online materials, you will be able to:

  • Name some of the most common terminology used within the software development industry
  • Follow your fellow classmates and share relevant testing information
  • Understand the key skills relating to testing
  • Understand the possible career trajectories for testers

Day One

To kick-start the course, we will explore what testing is, reveal common testing misconceptions and begin to examine what test charters and exploratory testing are.

In the second half of day one, we will carry out exploratory testing sessions and deep dive into risks and risk-based testing.

By the end of day one, you will be able to:

  • Explore different definitions of testing
  • Critique misconceptions about testing
  • Describe the different activities a tester does
  • Examine the traits of a tester
  • Produce a personal definition of what testing is
  • Describe what a Test Charter is
  • Describe different templates to use when creating Test Charters
  • Construct Charters based on specific Risks
  • Describe what Exploratory testing is
  • Carry out Exploratory testing sessions
  • Describe what a Risk is
  • Apply different techniques for discovering risks
  • Hypothesise different types of risks that might affect a product or project
  • Construct testing activities from risks

Day Two

In the first part of day two, we’ll examine the concepts of agile, how it affects the Software Development Life Cycle and form an agile test strategy based on your current working context.

In the second part, we’ll move onto analysing, applying and evaluating different approaches to testing notes and reporting.

By the end of day two, you will be able to:

  • Describe the concepts of agile and how it affects software development
  • Define different types of approaches found in agile teams
  • Discuss why requirements testing is important and how to do it
  • Use a range of techniques to test requirements
  • Reflect on your current working context
  • Evaluate your SDLC context to determine opportunities to test requirements
  • Name 6 different approaches to note-taking
  • Explain why note-taking in testing is important
  • Apply different note-taking approaches to different testing scenarios
  • Evaluate the pros and cons of note-taking approaches when testing ideas and testing products

Day Three

The key goals of day three are being able to report on your testing, starting to explain software testing to others and advocate your views on testing, and forming your own test strategy in line with your own context.

By the end of day three, you will be able to:

  • Explain why and how to report your testing
  • Compose reports on your testing and on quality
  • Describe different forms of communication and why communication is important
  • Discuss some of the challenges surrounding communication and how to overcome them
  • Explain software testing fluently to others
  • Apply techniques to help generate conversations about testing
  • Evaluate the pros and cons of different communication methods in general
  • Choose testing activities to form a test strategy based on your current context

Is This Course Right for Me?

Testing Essentials Intensive has been designed for anyone looking to improve their testing by taking a short, thorough, hands-on training course. Whether you’re looking for a career change, a new or aspiring tester, or someone looking to fill in gaps in your knowledge and skills, this course is for you.

Do I need to know or have done any testing beforehand?

Simply put, no. You don’t need to have had any previous experience in software testing. The Testing Essentials Intensive course is focused on building foundation knowledge and skills in essential areas of testing. You will be supported by skilled instructors with professional testing experience who will introduce you to the craft of testing in an accessible and supportive format.

What Do I Need To Bring?

Enthusiasm to learn, a mobile device and a laptop.

Dan Ashby
Dan is a software tester and he likes Porridge! (and whisky!)
Karo Stoltzenburg

Karo currently enjoys working as a Senior Test Engineer at Linguamatics, which provides NLP text-mining software in the life science and healthcare domains. Before joining the test team at Linguamatics, she worked in different industries on e-commerce platforms, web applications and supply chain management solutions, often as the sole tester and in both agile and waterfall environments.

She loves that testing is such a diverse, creative and challenging activity and cherishes the opportunities for collaboration with other roles in the software development life cycle that come with it. Karo channels her urge to discuss and share anything testing as a co-organizer of the Ministry of Testing group in Cambridge, as a regular at the Cambridge Exploratory Workshop on Testing and through her blog (http://putzerfisch.wordpress.com). Having mentored at the London Software Testing Clinic several times, she’s thrilled to see the Clinic now coming to Cambridge. Find and engage with her on Twitter: @karostol.


Thursday, 26th March 2020

Workshops

All Day Sessions | 9:00am - 5:30pm

What is Essentials @ TestBash?

Essentials is a full day, hands-on event that will expose you to a wide range of testing topics that you will have the opportunity to explore.

The day is arranged around the carousel teaching approach, but you can think of it as lots of mini-workshops that focus on different aspects of testing. Topics include a wealth of activities ranging from exploratory testing and automation to testing ideas and testability.

Our mini-workshops are hosted by practising testers who are experienced in teaching and testing. Each workshop will be packed with opportunities to learn, reflect and hone your testing skills to help you go further in your testing and your career.

In addition to the carousel, there will also be awesome talks and the opportunity to take part in TestBash’s world-famous 99-second talks.

Who is Essentials for?

Essentials is designed for anyone looking to improve their testing. Whether you’re a new or aspiring tester or someone looking to fill in gaps in your knowledge and skills, TestBash Essentials is for you.

What will you get out of it?

  • Foundation knowledge and core skills that will enable you to explore the craft of testing further
  • Meeting other like-minded testers in your testing community
  • Access to a rich collection of resources in testing
  • Concrete activities and ideas that can be implemented back at work
  • The opportunity to not just learn theory around testing, but the opportunity to practise testing

Frequently asked questions:

Do I need to know or have done any testing beforehand?
Simply put, no, you don’t. The Essentials day is focused on building foundation knowledge and skills in testing. Throughout the day, you will be supported by numerous experienced testing professionals acting as mentors and facilitators. So even if you’ve never done any testing before, our day is designed to introduce you to the craft of testing in an accessible and supportive format.

What will I need to bring with me?
Enthusiasm to learn.

Essentials Hosts

This hands-on workshop will be led by Dan Ashby and several of the Ministry of Testing Essentials Meetup hosts, formerly known as the Software Testing Clinic.


Workshops

Morning Sessions | 9:00am - 12:30pm

Ever find yourself in the middle of some weird behavior in your software, wondering how you got there and whether anyone will care about what you find? Do you get the “why were you even looking there?” question when you report bugs? In my years of practising pair and mob testing with testers and developers from my product teams, I’ve seen how easy it is to spend time getting lost in the product without looking for the information you need. Get lost no more! Learn to stay on track, while capturing all the roads less travelled you discover while exploratory testing, by using charters.
 
In this workshop, we’ll experiment with testing whatever we want without a focus and compare that to more focused exploratory testing using charters. We’ll see how writing charters affects what information we uncover. We’ll get feedback on the specificity of our charters when we hand them off to other participants and review their test reports. We’ll practice describing our ability to go down multiple levels into thoughtful questions about our product. And we’ll discover how focusing our testing through charters changes the story we tell about our testing to our stakeholders.
 

Takeaways

  • Writing charters affects what information we uncover
  • Writing charters helps us to better prioritize our work based on risks
  • Writing charters uncovers deeper threads in our testing
  • Writing charters allows us to tell a clearer story to our stakeholders
Elizabeth Zagroba
Elizabeth Zagroba is a Test Engineer at Mendix in Rotterdam, The Netherlands. She was the keynote speaker at Let’s Test in South Africa in 2018, and she’s spoken at TestBashes and other conferences around North America and Europe. Her article about mind maps became one of the most viewed on Ministry of Testing Dojo in 2017. You can find Elizabeth on the internet on Twitter and Medium, or you can spot her bicycle around town by the Ministry of Testing sticker on the back fender.
Coaching isn’t about you and it isn’t about advice. Coaching is all about the journey that the coachee goes on. What you do as a coach is listen with wonder and curiosity and without judgement. It’s about unleashing the ability and the skills in everyone to help them be as awesome as they can be. Coaching someone is an amazing feeling. 
 
It can however be hard to find the people and the time to actually practice your coaching skills, especially in high pressure environments where you’re dealing with people in real life situations. 
 
This interactive and fun workshop will explore some of the more common coaching methods, what we mean when we say “powerful questions” (a really useful skill to have as a tester) and how to go about asking them.

Please bring some problems (or decisions) along to the session that you'd like to be coached on! 

Takeaways

Practical and real life examples on:
  • What coaching actually is and why it’s so useful
  • When to coach and when not to coach someone
  • Learning how to actually coach someone to a positive outcome
  • Experience in coaching someone with a real life problem
Gareth Waterhouse
Seasoned QA with a decade of experience, father of two and Sunderland fan (please don't hold that against him). Regularly attends QA/testing meetups and has given numerous presentations at them. Always looking at ways to develop others, and regularly blogs about things he thinks may be of interest.
Lindsay Strydom
Reformed Luddite and accidental QA with years of experience in testing e-commerce native and web applications. Keen gardener and charity shop enthusiast.

Do you test performance when your team develops a new feature? If not, why not? Performance testing is often seen as difficult and is omitted by QA.

I’d like to take you on a pragmatic trip through web application performance testing. We’ll start with low-effort activities based on rough assumptions to get immediate results. Then we’ll learn what drawbacks and trade-offs we’ve accepted and try to improve the accuracy of our measurements.

We’ll exercise various scenarios so that you can grasp a broad, holistic approach to the topic:

  • Performance checks during exploratory testing (e.g. Fiddler, Charles proxy, Chrome dev tools)
  • Load generation (e.g. JMeter, Gatling)
  • Application Performance Management solutions (e.g. New Relic)
  • Utilising staging/dogfooding/demo environments to learn about performance
  • Data volume testing
  • Using Selenium to get client side performance metrics
 

I won’t go into the details of the above aspects. I’ll cut the theory to a bare minimum and concentrate on specific, hands-on examples showing the value of each approach. After each example, you’ll be tasked with executing a similar exercise during our workshop.
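
As one example of the low-effort, immediate results the workshop description mentions, response-time samples are usually summarised with percentiles rather than averages, because a single slow outlier skews the mean. This is an illustrative sketch, not workshop material; the timings are invented, and in practice they would come from a proxy, JMeter/Gatling results or browser dev tools:

```python
# Summarising response-time samples with percentiles.
# The timings below are invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile of a list of numbers (p in 0..100)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

timings_ms = [120, 135, 128, 3100, 140, 122, 131, 119, 150, 127]

print("mean  :", sum(timings_ms) / len(timings_ms))  # 427.2, skewed by the outlier
print("median:", percentile(timings_ms, 50))         # 128, the typical experience
print("p90   :", percentile(timings_ms, 90))         # 150
```

The mean suggests a slow system, while the median and p90 show that most users get fast responses with one pathological request, which is exactly the kind of trade-off in measurement accuracy the workshop explores.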

Takeaways

  • Be convinced that performance testing is not secret knowledge for a chosen few
  • Learn a variety of approaches to performance testing
  • Build a portfolio of tools that will help you test performance
Konrad Marszałek
I have 10 years of experience in software quality assurance, split across two cities (Kraków and Gdańsk) and three companies. I’ve had the pleasure of working for a successful startup, a mid-sized company with a cloud product, and a large company with a product used by 30k customers. I like to implement simple solutions that make a difference. My motto is “Make work productive and enjoyable for myself and others, especially by means of automation.”
Many teams working with microservices need confidence that they won’t break functionality when making changes. System integration tests, functional tests and sometimes manual tests are the older ways to obtain that confidence. These processes may take more than a day, or even longer if different teams or a different company own the services.
 
To ensure the same level of confidence and speed up delivery, we can create contracts for the integrations between consumers and providers. Contracts created by consumer services need to pass with every build going into production to guarantee that the integrations between systems/services work correctly. Checking these contracts in a CI/CD pipeline makes feedback loops even faster.
 
A Contract is a collection of agreements between a Consumer and a Provider that describes the interactions that can take place between them. Consumer-Driven Contracts (CDC) is a pattern that drives the development of the Provider from its Consumers’ point of view. It is TDD for microservices.
 
This workshop covers an end-to-end demo of contract testing between two microservices, showing how to release microservices with confidence, get early feedback and speed up delivery, and comparing the approach with other testing strategies.
 
I am going to use the PACT (https://docs.pact.io/) tool for implementing Consumer-Driven Contract Tests. The workshop will include an exercise for participants to implement the CDC.
 
 
Happy CDC!

Outline/Structure of the Workshop
 
  1. Introduction to common microservices testing strategies and their pitfalls
  2. Introduction to Consumer-Driven Contract tests
  3. How CDC helps speed up Continuous Delivery
  4. Introduction to Pact (https://docs.pact.io), a CDC tool
  5. Exercise: implementing a CDC for an integration between two microservices
  6. Exercise: implementing a test and executing it against the microservices
  7. Putting CDC into a CI/CD workflow
  8. Q&A

Takeaways

  1. A new testing strategy to speed up continuous delivery
  2. Understanding multiple testing approaches
  3. Deploying services autonomously and with confidence
  4. Understanding contract testing through an example and exercise
  5. An introduction to the Pact tool
  6. Automating service dependencies
  7. Hands-on experience of writing a contract test
  8. Putting CDC into a CI/CD workflow
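To give a feel for the pattern before the workshop, here is a rough sketch of consumer-driven contract checking in plain Python. The workshop itself uses Pact; the contract shape, `provider` handler, and `verify` function below are invented for illustration only.

```python
# Illustrative sketch of consumer-driven contract checking, not the Pact API.

# The consumer records the interaction it relies on: the request it will make
# and the minimal response shape it needs back.
contract = {
    "request": {"method": "GET", "path": "/users/42"},
    "response": {"status": 200, "body": {"id": 42, "name": str}},
}

def provider(method, path):
    """A stand-in for the provider service's request handler."""
    if method == "GET" and path == "/users/42":
        return {"status": 200,
                "body": {"id": 42, "name": "Ada", "email": "ada@example.com"}}
    return {"status": 404, "body": {}}

def verify(contract, provider):
    """Replay the contract against the provider and check the response shape."""
    req, expected = contract["request"], contract["response"]
    actual = provider(req["method"], req["path"])
    if actual["status"] != expected["status"]:
        return False
    for key, want in expected["body"].items():
        have = actual["body"].get(key)
        # A type (e.g. str) means "any value of this type"; anything else must match exactly.
        ok = isinstance(have, want) if isinstance(want, type) else have == want
        if not ok:
            return False
    # Extra provider fields (like "email") are fine: consumers state only what they need.
    return True

print(verify(contract, provider))  # prints True
```

Run on every provider build, a check like this fails the pipeline the moment a change breaks a consumer's expectation, which is exactly the fast feedback loop the workshop builds with Pact.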
 
Gopinath Langote
I am a Software Engineer with solid experience in designing and implementing complex web and mobile applications with microservices architecture. I am an open source enthusiast, the creator of https://github.com/gopinath-langote/1build, and a contributor to the JUnit 5 framework and Java Design Patterns. I am a public conference speaker (Agile India, Agile Tour Vienna, VodQA ThoughtWorks). I currently work as a software engineer at @N26 GmbH, Germany, and previously worked at @ThoughtWorks as a consultant (Application Developer). Topics I keep an interest in: functional programming; system design and scaling; microservices architecture; contract testing and test-driven development; agile practices and pair programming; Java, Kotlin, Scala, and the Spring Framework.

Part 1: What is your approach to testing?
Section 1 running time: Approx. 1.5hrs

This hands-on session will be centred around helping teams understand, talk about, and solve problems in their processes and ways of working. The aim of this session is to get the mental models we have in our heads onto paper so we can start discussing what’s really important to us and our teams.

This game introduces delegates to the idea of systems thinking using some simple visualisation techniques. Rather than writing the usual test strategy document detailing their testing approach, they will draw it.

Why draw? Because visual language is the oldest and most transportable form of communication. If you can get people's mental models to match yours then not only do you build a common understanding but also a solid foundation to start your discussions on what your test strategy is.

How do you align people’s mental models of a process? By getting them to visualise it first and then combining the results to create an overall team model. This leads participants to incorporate ideas from the other models into their own, creating a team understanding of that process.

Break: 15 minutes

Part 2: How to run a modelling session
Running time: Approx. 1hr

The second part of the workshop helps participants facilitate their own visualisation sessions with their teams. We will go into the mechanics of how the session works, the theory behind it, and how to actually run one.

Part 3: The Sell - How do you get the time to run the mapping session?
Running time: Approx. 30 minutes

You're sold on the idea of visualising your process and you know how to run a session, but how do you convince not just your boss but your whole team to take part? First, identify which part of your system will benefit most from visualisation. This is your leverage. Then identify your supporters and your holdouts.

We'll work on getting your supporters on side and using them, along with your leverage, to bring the holdouts onboard, all while using the skills you've learned from running a modelling session.

Takeaways

  • How to use visualisation to highlight the testing you do
  • Introduction to the ideas of systems thinking
  • New ways to talk about testing that engage other disciplines  
  • How to facilitate interactive sessions 
  • How to collaboratively create systems models 
 
Jitesh Gosai
Over the course of the last 15 years as a test professional, I've strived to help the teams I've worked with be the best they can be. I've seen first-hand what does and doesn't work when improving the quality of products across the software industry. I now want to take these experiences and help others make their teams the best they can be by improving quality through testability.
Afternoon Sessions | 1:30pm - 5:30pm
Security testing seems to be viewed as an extremely complicated area where only experts can contribute. In this workshop, we’ll demonstrate that in truth, there are plenty of things you can do without being an expert in security testing.

We have worked in a number of teams where security testing was seen as something you buy as a service from an external vendor; you then try to make sense of the report and hopefully figure out what to change. After reading a number of those reports, we realised that not only did the same issues keep coming back; they were also things we should be able to check for ourselves on a regular basis, instead of paying top dollar for someone else to do it once a year. By introducing just a few new checks into the regular testing of most web applications, we can gain confidence in ourselves and in the security of our systems. Bringing in a security expert is of course still valuable, but now we can let them focus on the trickier stuff.

The OWASP Juice Shop is an intentionally insecure web application, designed as an exercise and training environment for quality engineers and developers of all skill levels. In this workshop, we will use it as our lab environment as we go over the current OWASP Top 10 list of web application risks. We’ll guide you through some handy tricks and tools for solving some of the Juice Shop challenges and reflect on how these can be used in your everyday situations. The focus will be on “low-hanging fruit”, i.e. things that can be done quickly and applied easily regardless of your situation. Hopefully this will leave you with a lot of new ideas, a hunger for learning more, and an itch to solve all the challenges of the Juice Shop!

The format will be a capture-the-flag event where you will try out some of the practices, getting you started on a continuous learning journey that can hopefully keep going for years.

Takeaways

  • Things you can introduce into your regular testing process today
  • An introduction to security testing for the web
  • You don’t have to be an expert to start!
  • Ideas on how to delve deeper once you get comfortable with the basics
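As a taste of the “low-hanging fruit” the workshop focuses on, here is a hedged sketch of one such check: flagging missing security headers on an HTTP response. The header list and function below are illustrative, not the workshop's materials.

```python
# Illustrative sketch: flag recommended security headers absent from a response.
# The list is a common starting point, not an exhaustive or authoritative one.
RECOMMENDED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def missing_security_headers(response_headers):
    """Return the recommended headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# Example: a response that sets a content type and a frame policy, nothing else.
headers = {"Content-Type": "text/html", "X-Frame-Options": "DENY"}
print(missing_security_headers(headers))
# ['Content-Security-Policy', 'Strict-Transport-Security', 'X-Content-Type-Options']
```

A check like this takes minutes to wire into regular testing, which is exactly the point: routine findings get caught continuously, and the external expert's time goes on the trickier stuff.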
 
Lena Wiberg
Lena has been in the IT industry since 1999, when she got her first job as a developer. Testing and requirements have always been a part of her job, but in 2009 she decided to take the step into testing full-time and has never looked back. Lena has worked as a lone tester, test lead, test manager, senior test manager, test strategist, and manager. She is also involved with software testing education in Sweden, both as chairman for one of the schools and by mentoring interns to give them the best internship possible. Lena lives in a big house filled with gaming stuff, books, sewing machines, and fabric. Gaming is a big thing for everyone in the family and something she loves talking about. Biggest achievement: the Dance Dance Revolution machine taking up half of her living-room space.
The new accessibility regulations - ‘The Public Sector Bodies (Websites and Mobile Applications) (No.2) Accessibility Regulations 2018’ are now law in the UK. Every new public sector website and app will need to meet defined UK and European accessibility standards. Existing websites will have until 2020 to confirm or build in these capabilities.

This means there will be a lot of demand for testers with accessibility testing skills in the coming years. 
 
Over the last 8 years, I have worked with a lot of companies helping them to build accessibility into their design or retro-fit it into existing websites.
 
I saw a lot of the same challenges in a lot of places – complexity and over-simplification.

The Web Content Accessibility Guidelines (WCAG) are great but they are complex, written in a domain-specific language and there are a lot of them to get through. As a result, myths abound about accessibility design and testing being a simple thing. I’ve heard some memorable ones over the years:
  • “It’s all about contrast”
  • “Having headings means your site is accessible”
  • “Accessibility testing is all about edge cases”
 
The people who held these beliefs were not looking to do the wrong thing but as with a lot of things in IT, a myth will run around the world before the reality can start to set in.
 
With this in mind, I worked with my colleagues to design an accessibility workshop to run for our customers. As a part of that, I designed accessibility postcards based on Beren Van Daele’s risk storming cards. (Taking care to discuss the idea with him and making sure to give him full credit). 
 
The cards have a plain-English explanation of different accessibility ideas as a gentle introduction to a very complex area. They needed to be postcards because explaining accessibility considerations in plain English was going to be quite a wordy task; I needed a bigger canvas that would still be portable.

 
How this will be delivered
Using a mix of interactive exercises, some clear examples of what being inaccessible looks like and a plain English interpretation of the WCAG 2.1 guidelines, this workshop will show you how to consider accessibility close to the code and how to audit what you need to do retrospectively for an existing product.

Takeaways

1) An understanding of what accessibility is
2) An understanding of who benefits from accessible design (it's more of society than you realise)
3) Knowledge of the bad things that happen when accessibility is not considered (and how to avoid them)
4) An understanding of what the WCAG accessibility guidelines are and how to use them in design and testing
5) The skills to carry out an accessibility audit on your own public-facing websites
6) The ability to use a tool like axe and understand the results you see
7) A set of accessibility postcards to take back to your company to help generate accessible design and test ideas
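To show the flavour of the automated checks that tools like axe run, here is a hedged sketch of one of the simplest: finding images with no alt text. This is a toy built on Python's standard library, not axe itself, and real audits cover far more than this single rule.

```python
# Illustrative sketch of one automated accessibility check: <img> elements
# missing an alt attribute (text alternatives, WCAG guideline 1.1).
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # alt="" is valid for decorative images; a missing alt is not.
            if "alt" not in dict(attrs):
                self.violations += 1

page = ('<body>'
        '<img src="logo.png" alt="Company logo">'
        '<img src="chart.png">'
        '</body>')
checker = MissingAltChecker()
checker.feed(page)
print(checker.violations)  # prints 1: the chart image has no alt attribute
```

The point of the workshop's plain-English approach is the same as this sketch: each WCAG idea, taken one at a time, is small and checkable, even though the guidelines as a whole feel overwhelming.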
Elizabeth Fiennes

Elizabeth has been in testing and QA since 1998. Yes, they had computers way back then. No, her first tablet was not made out of stone :)

Since then, she has taken some time off for two people shaped development projects of her own. She describes herself as “cat slave” to a very large and opinionated Tuxedo Tom who likes walking across keyboards and spilling tea.

Doing talks and writing blogs were not something she was comfortable with so she challenged herself to start doing them in 2018. One of the happiest results of this experiment was making new wonderful friends which is one of the best outcomes of breaking out of any comfort zone.


We'll test and explore software and while doing that, we will explain what we are doing and why!
 
Much has been said and written about exploratory testing. Unfortunately, it is still often misunderstood. Some claim that exploratory testing is unstructured and ad hoc: just playing or cruising through the software, clicking stuff without a plan. I think that is because it might look that way. Excellent testing done by a real professional looks easy from the outside. But what is really going on?

A testopsy—to use a word coined by James Bach—is an examination of testing work, performed by watching a testing session in action and evaluating it with the goal of sharpening observation and analysis of testing work. Testopsies can help in training, assessment, and developing testing skill for novices and experienced testers alike.
 
In this workshop we will test software and, while doing that, explain what we are doing and why. We will also give insight into the techniques, skills, and tactics being used. There is a lot going on when we test, and by narrating and framing what is going on, we create understanding. Knowing what skills and tactics we use helps us focus on the right things and enables us to learn them. You will be engaged in this workshop by first creating a checklist of tactics, skills, and techniques, then using this checklist to observe and try to figure out what is really going on while testing. You will get the chance to work in groups or pairs and observe each other while testing, using and improving your checklists.

Takeaways

  • Learn what is really going on while testing
  • Learn to talk better about testing
  • Assess and develop testing skills
Huib Schoots
Huib Schoots is a coach, consultant, tester and people lover. He shares his passion for software development and testing through coaching, training, and giving presentations on a variety of agile and test subjects. Huib believes that working together in the workplace ultimately makes the difference in software development. He therefore helps people in teams to do what they are good at, by empowering them and helping them continuously improve their teamwork. Curious and passionate, he is an agile coach and an exploratory, context-driven tester who attempts to read everything ever published on software development, testing and agile. Huib maintains a blog on magnifiant.com and tweets as @huibschoots. He works for de Agile Testers: an awesome place where passionate and truly agile people try to make the world a better place. He has a huge passion for music and plays trombone in a brass band.
Do you want to make your testing easier, faster, and more reliable? Have you heard of mocking and want to use it, but don't know where to begin? Do you think your unit tests are written in the most effective way they could be? Whether you are a developer or a tester who wants to influence testability, unit tests are a key piece of the test automation pyramid. Being able to understand their value and write them effectively is key to a balanced test process.

This workshop will cover test doubles (fakes, dummies, stubs, mocks, spies) and dive into how to use them effectively in your day-to-day work.

Unit tests will be our main focus, although we'll also see how this concept can be applied to other types of automated tests.

The language used is Java; the test double framework is Mockito.

Takeaways

  • What test doubles are and why we need them.
  • What test doubles exist? Hands-on: i) learn about fakes and dummies, ii) learn how to stub, iii) learn how to mock, iv) learn how to spy.
  • How can we use mocks in other types of tests?
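The workshop itself uses Java and Mockito; as a rough, language-agnostic taste of the same ideas, here is a sketch using Python's standard-library `unittest.mock`. The `greet_user` function and its `mailer` collaborator are invented for illustration.

```python
# Illustrative sketch of mocking and stubbing; not the workshop's Java/Mockito code.
from unittest.mock import Mock

# System under test: greets a user via some mailer collaborator.
def greet_user(mailer, user):
    mailer.send(user, f"Hello, {user}!")
    return True

# A mock stands in for the real mailer, so the unit test needs no mail server.
mailer = Mock()
greet_user(mailer, "ada")

# Verify the interaction happened, in the spirit of Mockito's verify(mailer).send(...).
mailer.send.assert_called_once_with("ada", "Hello, ada!")

# Stubbing: a stub supplies canned answers rather than verifying interactions.
repository = Mock()
repository.count_users.return_value = 42
print(repository.count_users())  # prints 42
```

The design distinction the workshop explores is visible even here: the mock checks *how* the collaborator was used, while the stub just feeds the system under test the data it needs.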
Ana Maria del Carmen Garcia Oterino
I'm a developer passionate about software quality, and a coach specialised in Emotional Intelligence and NLP. My personal motto is that software is written by people, for people, and that's where both of my passions merge. I love to teach and coach others to develop better-quality software. On a day-to-day basis, I use software engineering and coaching skills to identify, influence, and implement technical changes in companies across all stages of software development (from requirements to code, architecture, design, processes, deployment, testing, operations...). In the last couple of years, my work has taken the shape of implementing Continuous Delivery or Deployment, with all the technical and cultural challenges that this implies.

Is your team puzzling over how to feel confident releasing to production frequently with continuous delivery (CD)? Delivering reliable and valuable software frequently, at a sustainable pace (to paraphrase Elisabeth Hendrickson), is a worthy goal. DevOps is a hot buzzword, but many teams struggle with how to fit testing in. Everyone talks about building a quality culture, but how does that work?

In this hands-on workshop, participants will have a chance to practice techniques that can help teams feel confident releasing more frequently. You’ll practice using frameworks and conversation starters together with your team to discuss what questions each step in your delivery pipeline needs to answer, and to understand the value each step provides. All materials used are freely available online so participants can try them with their own teams.

You’ll learn the language of DevOps so you can collaborate with all delivery team members to grow your DevOps culture and infrastructure. You'll work in small groups to come up with new experiments to overcome problems such as how to complete manual testing activities and still do CD, how to shorten feedback cycles, how to make sure all essential types of testing are done continually, and how to fit testing into the continuous world by engaging the whole team. You’ll learn that there IS a “test” in “DevOps”.

Whether your tests take minutes or days, and whether your deploys happen hourly or quarterly, you’ll discover benefits. The tutorial will include an overview of techniques for "testing in production". You’ll participate in a simulation to visualize your team’s current path to production and uncover risks to both your product and your deployment process. No laptops required, just bring your curiosity.

Takeaways

  • Continuous delivery concepts at a high level, and the differences between continuous integration and continuous delivery
  • Common terminology and a generic question list to engage with pipelines as a practice within your team
  • How to use the Test Suite Canvas to design a pipeline that gives your team confidence to release frequently
  • Experience in analyzing pipelines from different perspectives to create a layered diagram of feedback loops, risks mitigated, and questions answered
  • Ways your team can design experiments to address the many challenges of testing in a continuous world
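As a hedged illustration of the "what question does each step answer?" framing above, here is a toy sketch of a pipeline modelled as stages with their questions and feedback times. The stage names, questions, and timings are invented; they are not the workshop's materials or the Test Suite Canvas itself.

```python
# Illustrative sketch: each pipeline step paired with the question it answers
# and roughly how long its feedback takes (minutes). All values are invented.
pipeline = [
    ("unit tests",        "Does each piece behave as its author intended?", 3),
    ("contract tests",    "Do our services still agree on their interfaces?", 5),
    ("deploy to staging", "Can we actually ship this build somewhere?", 10),
    ("smoke tests",       "Is the deployed system alive end to end?", 8),
]

def review(pipeline):
    """Print each step's question and flag the slowest feedback loop."""
    slowest = max(pipeline, key=lambda step: step[2])
    for name, question, minutes in pipeline:
        marker = "  <- longest feedback loop" if (name, question, minutes) == slowest else ""
        print(f"{name:18} ({minutes:2} min): {question}{marker}")

review(pipeline)
```

Even a table this simple starts the conversations the workshop practises: a stage whose question nobody can state is a candidate for removal, and the longest feedback loop is where an experiment might pay off first.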
Ashley Hunsberger

Ashley is the Director of Release Engineering at Blackboard, Inc., a leading provider of educational technology, where she leads efforts to enable teams throughout the organisation to get to production as fast as possible, with as high a quality as possible. She shares her experiences in testing and engineering productivity through writing and speaking around the world.

A proponent of open source, Ashley believes in giving back to the software community and serves as a member of the Selenium Project Steering Committee and co-chair of the Selenium Conference.


Janet Gregory

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. She is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), the Live Lessons Agile Testing Essentials video course, and “Agile Testing for the Whole Team” 3-day training course.

Janet specializes in showing agile teams how testing practices are necessary to develop good quality products. She works with teams to transition to agile development and teaches agile testing courses worldwide. She contributes articles to publications and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca or www.agiletester.ca.


Friday, 27th March 2020

Conference

In my search for good testers to add to my team, I had a lot of grand ideas about what the team of my dreams would look like and I just KNEW curiosity was the key. 
So I went looking for information on how to find curiosity and...

WOW, was I wrong.

My research on curiosity and its corresponding traits led me to:
  • A public lesson about "fit" and "passion", and why it doesn't matter what my motive was if it makes the underrepresented feel uncomfortable.
  • The discovery that the interview process itself makes curiosity evident only in the very privileged or exceptionally brave.
  • The realisation that expecting to see real curiosity in action tends to rule out the talented among the anxious, depressed, visible minorities, new immigrants, autistic...

Then - to my horror - the more I learned, the more I realized my standards were biased, from CV all the way through technical interviews.

Is there evidence of traits that lean toward Modern Testing that we can find in candidates' past experiences? In their stories? How can we create a micro-culture of safety and vulnerability in our interviews and hiring process that allows candidates to shine brightly enough to be seen?

Takeaways

  • The courage to question our own practices and fairness
  • The ability to see potential in those who may not have self-promotion skills
  • Be a hiring leader in your company
  • Expand culture in your team and company
Meaghan Thompson
I'm Meaghan, a QA Manager in Canada. I love startups and all the challenges that come with that career choice. I live in Calgary, Canada with my partner, Trevor, three kids and two frogs. I enjoy cooking, hiking and I'm completely failing at learning to knit!
I come from a household of high achievers. Our parents never asked us to be awesome at what we studied; they always had this unsaid expectation. Luckily, I did really well throughout school, college, and university. I like to think I got my dad’s sense of achievement and my mum’s great way of organising her tasks. From my teens onwards, I started developing a high sense of perfection for everything I took on. However, I did not realise how having this high bar was slowly consuming me.

I like to think that choosing a profession in quality assurance was a result of wanting to use perfect products; what better way than to participate in ensuring their quality? However, I found that trying to achieve this awesomeness slowly engulfed me. The need to constantly produce high-quality deliverables added stress to my life and slowly started to cause burnout. I will talk about how I created a mini Personal Development Life Cycle (PDLC) and how I used my learnings from being in agile teams to be Agile about this PDLC.

Takeaways

  • creating a better balance in an agile team
  • watching out for signs indicating burnout
  • great products are important but so are you
Maryam Umar
I work in London as Head of QA of a Fintech firm. I started my career thirteen years ago as a QA test engineer in the finance and mobile industry. After transitioning to the eCommerce sector, I performed QA in various capacities for online restaurant and travel services. I continue to work in QA as a manager now with special focus on sustainable delivery practices. In addition to this, I have been a keen advocate of creating and sustaining diverse teams. As I have transitioned in my career, I have found that creating teams which work well together is more challenging than the actual project(s) to be delivered by the team. I pay special attention to team dynamics and ensuring engineers are in roles which give them a sense of purpose. I have also been speaking at schools and universities to educate students about what the industry has to offer and what a creative space the technology sector can be.
In recent years I have been working on projects with no dedicated testers but plenty of testing. The testing has primarily been performed by subject matter experts. This is where it gets interesting, as my role on these projects has been to lead testing performed by people with limited experience in testing. They have no desire to become testing specialists; after all, they are already specialists in their own subjects. However, everyone agrees and insists that the testing needs doing. So how do we ensure that the testing being done is done well?
 
After having worked on several very different projects, yet still with subject matter experts doing the testing, I have been able to get both the public process clerks and the technology specialists to perform excellent testing. This talk is about the approaches that I have found work well: 
 
  • One approach is for me to prepare the test cases, and to prepare them only as headlines. Sometimes preparing the tests as open questions helps too.
  • Another approach is to lead the experts as if they are participating in the project voluntarily. They probably are, but it still helps to respect where they are coming from.
 
The lessons (good and bad) are relevant to many testers in other situations, especially to anyone who is the only “tester” on the team. The story applies equally to developers and business end users doing most of the testing, and you will have them contributing great testing in no time!

Takeaways

What you will know after the talk:
  • An understanding of how testing looks when done by subject matter experts
  • How to lead a testing activity with an appreciative and motivating style
  • Examples of how teams can do great testing without dedicated testers
Jesper Ottosen
Jesper usually leads testing of all kinds of IT - from application development to implementing commercial standard applications, deploying infrastructure and operational technologies and large transition programs.  Jesper works as a Senior Test Manager at NNIT A/S, an IT services company that provides IT consultancy and IT services to the Danish public sector, Danish companies and international life science customers.
Five years ago, Henry Marsh published a book - 'Do No Harm' - on what it's like to be a neurosurgeon. In testing, we don't (always!) work in life or death situations, but we do share a wide variety of experiences that Henry Marsh talked about.

If we're good testers, we care about what we're testing. We consider it to be our 'patient': we want to find out if there's anything wrong with it, fix it, and send it into the world fully functional and in the best condition we can.

But we also know that's not always going to happen. There will be times when we can't work out what's wrong, when we miss the obvious (or not so obvious) problem, when not everything can be fixed. And sometimes we'll be handed something that, no matter how hard we work, we're just not capable of saving.

The psychology of testing is fascinating - how much do we care? How long do we remember the bugs we didn't find, or the projects we just couldn't save?

In this talk we'll compare the two jobs and set stories from Marsh alongside tales of our own, considering that even though our role may not be brain surgery, it still has its own merit in this technological world of ours.

Takeaways

  • It doesn't matter how good you are, or how much you know, there will always be days when you get it dreadfully wrong.
  • What are the consequences when you do get it wrong?
  • And when you do get it wrong, how do you put that behind you to try again the next time?
  • And is there any better feeling than when you do get it right - that fierce joy, the warm glow, that sense of self-worth when you realise you've saved the day?
Kevin Harris

I've been testing for 20 years in a variety of roles and a variety of companies/industries. I've been a Tester, a Senior Tester, a Senior Web Tester, a UAT Team Lead, a Developer, a Senior Test Team Lead, a Scrum Master, a Test Manager and a Release Manager. I've worked in start ups and multi-nationals, and I've worked in travel, government, telecoms, high-street, marketing and medical industries.

Being in testing so long, I have seen the positive changes to the role that have been brought about by Agile, so am passionate about speaking about these things.

Outside of work I'm a cinephile and bibliophile, and write comedy in whatever spare time I manage to find.


Have you ever looked at a failing automated test and asked yourself... should I just delete it? I've asked myself this question numerous times; in fact, I've trained myself to, because I understand its importance. However, I'm often left baffled by colleagues and other testers who reject the idea of deleting a test. Why do they find it such a scary concept?

The result is that we continuously look at and review the same tests, not knowing whether they’re mitigating any risks or, worse, not knowing what they are testing anymore. But if you're like me, you've probably looked at some failing tests and thought, "why does this even exist?!". That trigger is one you shouldn't ignore.

In this talk, I'm going to share my experiences of listening to this trigger, but more importantly, I'll explain the actions I take. You'll learn how to analyze the full lifecycle of an automated test to truly understand its value. We'll talk about the total cost of ownership of a test, and how it is a key analysis factor when attempting to reduce feedback loops. I’ll bring some stories from the company I work for to support this. After all, we’ve gone from automated regression environments running thousands of tests overnight to a CI/CD reality. 

I love my delete key, I hope to share the love!

Takeaways

  • Understand all the costs of a test, from its development to when it’s run and maintained.
  • Learn key criteria to apply when deciding to delete a test or refactor it.
  • Acknowledge that as the number of automated tests grows for your system, the more you need to prioritize and understand which are the most valuable ones.
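To make the total-cost-of-ownership idea concrete, here is a rough sketch of weighing what a test has cost against what it has returned. The scoring heuristic, thresholds, and example numbers below are invented for illustration; the talk's actual criteria may differ.

```python
# Illustrative sketch: a crude keep/refactor/delete heuristic for a test,
# based on its total cost of ownership. All thresholds are invented.

def keep_or_delete(runs, minutes_per_run, maintenance_hours, real_failures_caught):
    """Compare hours invested in a test with the real defects it has caught."""
    cost_hours = runs * minutes_per_run / 60 + maintenance_hours
    if real_failures_caught == 0 and cost_hours > 10:
        return "delete"    # expensive and has never caught anything real
    if cost_hours / max(real_failures_caught, 1) > 50:
        return "refactor"  # catches bugs, but at a painful price per catch
    return "keep"

# A flaky UI test: 2,000 runs at 3 minutes each, 20 hours of upkeep, 0 real bugs found.
print(keep_or_delete(2000, 3, 20, 0))    # prints "delete"
# A fast unit test with almost no upkeep that has caught 4 regressions.
print(keep_or_delete(2000, 0.01, 1, 4))  # prints "keep"
```

Even a toy model like this makes the feedback-loop argument visible: the flaky test above has consumed 120 hours for zero information, which is exactly the kind of number that makes reaching for the delete key less scary.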
 
João Rosa Proença
João Proença comes from Lisbon, Portugal, and is Quality Owner in R&D for OutSystems, a company that provides one of the leading low-code development platforms in the world. He has assumed various roles throughout his career in the past 11 years, including quality assurance, development, customer support and marketing. Finding innovative solutions for difficult problems is what drives him the most, so he is always eager to talk about how professionals are overcoming testing challenges around the world. Outside of IT, João is passionate about songwriting, movies and football. You’ll see him tweet about all of these topics using the @jrosaproenca handle.
Believe it or not, leadership and parenting are very similar. I’m not sure if it was while getting my daughter to stop cutting the cat’s whiskers while it slept, or while attempting to get a junior team member to engage in learning a new skill, that I noticed a crossover in many of the techniques I was using from five years as a parent and 20 years as a leader. However, once I started to pay attention, I noticed similarities between parenting and leadership everywhere! It was then that I started to actively apply some parenting techniques to my work life, and vice versa, in areas such as communication, personnel management, team engagement, and personal mindset. The results were quite positive for my team and my family. Although I haven't managed to get the family to use a Kanban board yet!
 
This talk is about my learnings from using leadership techniques with my children and using parenting techniques with my teams.

Takeaways

I will share my learnings and stories around:
  • Growth mindset and how using this can generate grit and determination in a team 
  • Communication from the area of Gentle Parenting and how this can help strengthen understanding with team members
  • Gamification of tasks which help keep teams focused and engaged, even when the pressure is on
 
Shey Crompton

Pre-millennium, Shey broke his testing teeth in computer games; he can be held responsible for such iconic successes as Catwoman, Malice, and the little-known Harry Potter…

Originally from Melbourne, Shey’s career has been as varied as his Anglicised Australian vowel sounds.

Following a decade in Games Design, Project Management and Software Development he has spent the past 10 years working back in Testing - where his passion truly lies.

As a Consultant he's covered various Test roles including a long term placement working with JK Rowling’s Team at Pottermore.

Shey’s broad range of experience enables easy connections to be made with colleagues from all areas of the business spectrum. Shey can regularly be found providing the link or explanation between people from different disciplines.

Shey has a significant presence within the testing community, sharing his perspective on Twitter, LinkedIn, and MoT's The Club forum, among others. In person he co-organises the London Tester Gathering and has volunteered at TestBash.

While this isn't the first time Shey has spoken at TestBash, it will be his first talk longer than 99 seconds, and it follows his usual theme of helping leaders to discover and develop their own leadership style.

When not talking about testing, Shey can be found talking about his next passion, craft beer. Follow him to find where good beer is being served.


I have been in love with testing. I have been a hands-on exploratory tester, so much so that I was never tempted to move out of testing or to look at automation as an exciting alternative to what I was doing. I do understand automation and its value to testing, and I have no bias against anything that adds value to testing. However, it fascinated me that a lot of testers seemed to put their faith in automation as a way to build their future. I wondered why they were moving out of testing. When I started to build products, I realised the joy of building things. I could then see why some people loved automation in testing: people love building things.

If people love building things, what happens to those who don’t know how to code? A lot of testers who can't code think they are stuck. Our industry scares these testers with false narratives of automation and AI replacing hands-on testing.

There are plenty of testing problems that have not been addressed. There is no one (I know) thinking of building a system that could complement the CDT approach to testing. There is no one (I know) thinking of building self-reflection systems for testers. There is no one (I know) thinking that they could open-source “thinking” instead of just code.

With this as a premise, I took up a mission to support testers I know or work with in becoming Product Owners, one way of helping them enjoy building things.

Today I have a few examples of non-coders who have fully or partially transformed into builders (Product Owners), and I would like to share their stories. This talk could help open doors in testers' minds, create opportunities, spark ideas, and help build a support system.

The future of testing is one where testers build the tools they needed but didn’t have. This talk is a small step towards that future.

Takeaways

  • Shaping a career as a non-coding tester
  • Becoming a builder of things in testing
  • Learning to be a Product Owner
Pradeep Soundararajan
Pradeep Soundararajan is the Founder and CEO of Moolya Testing and Product Owner of Bugasura.io. Pradeep is on a mission to build a software testing start-up that solves fundamental, unaddressed pain points in testing. His approach to getting there is helping people build skills and helping people build tools. He has 16+ years of experience as a hands-on tester, independent test consultant and now a businessman. Moolya Testing is a CDT-inspired testing services company with clients across the globe, served by ~200 testers and counting. Pradeep values culture, ethics and professionalism more than money. Pradeep is who he is because of his team and family, who let him take credit for their work. Pradeep publishes his thoughts on Quora and LinkedIn: https://www.linkedin.com/in/testertested/

In this talk, I explain how integration and compliance testing is done at one of the largest software companies. You'll learn how large and complex software systems are developed and what implications this development style has for software testing. I show what a typical integration and compliance process looks like for a large-scale system and explain the challenges software testers face. Finally, I will highlight a novel approach that tackles some of the described testing challenges.

Here is a bit of a teaser:

Large-scale software systems are developed by hundreds or even thousands of software engineers. Those engineers often work on a particular part, feature or piece of code in isolation by using the concept of development branches. The problem for software testing starts when code changes from several engineers are merged together.

Even though the code has been previously tested by the engineer, the new integration must be tested as well. Furthermore, large and complex software systems also require checking that changes do not impact other parts of the system, by running integration tests and tests for backward compatibility, performance and security. Such test suites are time-consuming and expensive, and they require a fair amount of manual work. On the other hand, by design, they often do not find defects; their main function is to be a safety net ensuring the software is compliant.

The problem with these test suites is that their execution times (which can be several hours for large software systems) conflict with software companies' need to shorten release cycles.

In this talk, I highlight a novel approach, developed at one of the largest software companies, that helps to reduce test execution times and manual interventions without sacrificing software quality.

Takeaways

After this talk, participants will know:
  • how large-scale software systems are developed
  • and, more importantly, how such systems are tested in practice
  • how integration and compliance testing is done at one of the largest software companies
  • how test suite execution times and frequent manual intervention can impact release cycles
  • a novel approach to tackling some of the problems discussed
Michaela Greiler

Michaela is a software engineer at Microsoft responsible for improving the software development lifecycle of teams such as Office, Windows, and Visual Studio. She is an expert in helping teams boost their code review, testing and deployment practices. Her main passion is to make software development more enjoyable and productive for developers. To learn more about software development practices all around the world, she started the Software Engineering Unlocked Podcast (https://www.se-unlocked.com).

Before joining Microsoft, she researched how large-scale software systems can be better tested and understood using static and dynamic reverse engineering techniques. Michaela holds a PhD in Software Engineering, and a Master's and a Bachelor's degree in Computer Science. She loves to share her knowledge and experience by writing about it on her blog and in (scientific) publications.


We strive for quality. We want to use high-quality products and, as testers, we want to help build high-quality products... But what does "quality" even mean?

There are so many different, conflicting definitions of the term within the software industry that it has become an endless debate across the software communities. And this appears to be clouding our ability to measure quality successfully. Think about your own experiences here - how do you currently measure quality? Test coverage? Bug counts? Code coverage? What do these things actually tell you about the quality of your product?

In this talk, I'll be presenting a new model for thinking about software quality from 8 different perspectives. Keep an open mind as I present the idea that quality can be seen as a scale of "goodness" rather than a measure of "correctness". And that although "goodness" is subjective, relative and personal, it is definitely measurable and is actually essential when thinking about product and business decision-making activities. All while sharing some personal stories relating to using software and building software with quality in mind.

In line with the 8 perspectives of quality, I'll also talk about many different testing activities, such as exploring a product and exploring an idea, and how they relate to the perspectives of quality, as well as the roles and responsibilities surrounding these testing activities from the perspective of your development team and the other teams within your organisation.

Finally, I'll also present a model for being able to measure the maturity of your team regarding how they think about testing, and what small changes to make in order to get better at building a culture of quality within the team.

Takeaways

  • Identifying 8 perspectives of quality
  • Explaining different testing activities relating to each perspective of quality
  • Understanding the roles and responsibilities surrounding the 8 perspectives
  • Demonstrating the difference between "correctness" and "goodness" and how to measure quality from these views
  • Visualising a quality maturity model to use with your team
  • Recalling relatable stories of experiences and journeys in building and influencing quality.
Dan Ashby
Dan is a software tester and he likes Porridge! (and whisky!)

When an organization transitions from using phased and gated methods to using agile, general training is usually provided for the teams, but very little information is provided to leadership about what that means for testing, whether to the team or to the organization.

Teams often suffer the consequences because the leaders don’t understand their role. In this talk, Janet shares her experiences in what leaders need to know about testing in agile to help their teams succeed and also how teams and individuals can help influence and share that message with their leaders.

For example, testers may struggle because they are treated as “plug and play” resources, assigned to more than one team while being expected to be equally valuable to all the teams. This sets them and their team up for failure since they cannot be in two places at once.

Transitioning to agile is more than a process change, it is a mindset change – not only for the delivery teams but for everyone in the organization, especially the leaders who are expected to set the vision for the company. Learn how to influence your leaders as they shift their mindset to building quality in.

Takeaways

  • What leaders should know to help teams build quality into their products
  • Ideas about how team members can influence leaders
  • Ideas about how individuals can influence their team
Janet Gregory

Janet Gregory is an agile testing coach and process consultant with DragonFire Inc. She is the co-author with Lisa Crispin of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), and More Agile Testing: Learning Journeys for the Whole Team (Addison-Wesley 2014), the Live Lessons Agile Testing Essentials video course, and “Agile Testing for the Whole Team” 3-day training course.

Janet specializes in showing agile teams how testing practices are necessary to develop good quality products. She works with teams to transition to agile development and teaches agile testing courses worldwide. She contributes articles to publications and enjoys sharing her experiences at conferences and user group meetings around the world. For more about Janet’s work and her blog, visit www.janetgregory.ca or www.agiletester.ca.