WHEN

22-23 September 2022

WHAT

TestBash

WHERE

Manchester, GB

TestBash UK, our largest in-person software testing conference, takes the stage at The Lowry Theatre in Manchester, UK, starting on the 22nd of September 2022.

We will have two days packed with all the things we've been missing: networking, activities, challenges, games and plenty of other opportunities to learn and meet your fellow community members in person!

Whilst we plan all the finer details, you can start getting involved by buying a ticket to guarantee your space, sharing the event on social media and with your colleagues and community members, or even speaking with your company about sponsoring.

The lineup has been announced, so keep scrolling down to see what's happening at TestBash UK 2022 👇

 

Purchase a ticket to attend this event.

Become A Sponsor

Want to see your company here?
Check which sponsorship packages are available and download the brochure here.

Meet Your Hosts

Meet Your Speakers


Main Stage
In our main stage area we’ll have Talks, Discussion Panels and our famous 99-Second Talks.

Experiences and lessons learned from developing software since 1999 resulted in the “Would Heu-risk it?” concept. It all started with a workshop with Lisa Crispin but has since evolved into a card deck, blog posts, articles and a book draft in 2020.

“Would Heu-risk it?” is centered around risk analysis, heuristics and patterns/anti-patterns that affect us in designing, building, testing and running software. They are grouped into three distinct categories: Traps I see testers fall into, Tools I see testers use as super-powers, and Weapons testers could use to focus their work (also known as common weak spots in building software).

This talk will be a compilation of my main learnings from 20+ years of building software, seeing the complexity from the perspectives of a developer, a tester, a test lead and a manager.

We will use the 30 cards as a focal point for what I believe are my most important learnings.

Takeaways

  • Testing heuristics as super-powers
  • The importance of bug advocacy
  • Up-skilling yourself
  • Involving yourself in the entire process to shift left, right, up and down
  • Different perspectives to software quality

Imagine a world where testing was properly understood and invited throughout the full lifecycle of building software. A world where quality was not just perceived as “correctness” based on the software meeting the requirements. Imagine the state of the world if this was the case, with people relying more on complex software every day.

People expect high quality software, with quality being an accumulation of the goodness of the experience in using the software, the correctness in its operability, and the value they obtain from the software. In order to evolve the craft of testing to assess quality in line with this, we need to break out of the mindset that testing only relates to assessing the software.

In this talk, I will explain my view on testing and its purpose of assessing quality, in line with my view that quality goes beyond the correctness of the software. Additionally, I will illustrate the value of expanding testing activities throughout the entire software development life cycle:

  • That testing ideas can prevent wasteful development and uncover risks.
  • That testing designs and requirement artefacts can discover ambiguities and risks and prevent assumptions and waste.
  • That testing architecture and code design can be the fastest possible feedback loop (much faster than test automation) on the quality of the code.
  • That much of the output of all the above testing can dramatically improve the quality of the testing we do when we test the software (improving both scripted and exploratory testing), by using the risks we discover to better structure our testing.

There will be examples, there will be models, and there will be stories which I hope will excite and inspire you to reflect on and evolve your own testing towards that imagined world, really showing the true value of testing by assessing quality throughout the entire development lifecycle.

Takeaways

  • Investigate ideas, requirement artefacts, designs, and processes as part of the SDLC
  • Discuss the testing feedback loops across each activity
  • Understand what activities are involved within an SDLC

Where to start and where to stop with test automation?

In my talk, I will share practical steps for starting to define a test automation strategy. By the end of the session, we should have defined a strategy together. The talk is suitable for both individual contributors and managers.

Takeaways

  • Practical steps for setting up a test automation strategy
  • Technique for setting up a vision and a mission for test automation
  • The role of Test Automation Analytics in the success of your strategy
As testers, we talk to a lot of people and we tell people a lot of things, but they don't always listen.

This talk will look at how powerful stories can be when we're trying to get our point across, whether that's raising bugs, talking about challenges we're facing, or even talking about the testing we're doing. It will call on times when I've done this well, times when I've done it not so well, and the effects of each.

It will cover how to structure stories, how to get people to buy into a story, how to keep a story short and succinct (think elevator pitch), and how non-verbal communication can help amplify a story's reach and influence.

Takeaways

  • An understanding of story fundamentals - How telling good stories can help you influence people and get what you want
  • How to apply that to testing and everyday life - In terms of telling people about bugs, or about challenges you may be facing to help them understand
  • How to make sure you get the important points across in the story - We've all been there when someone tells a story that goes on, and on, and on, and on... anyway, you get the point, we'll look at how to break down a story to get the key points across in the story
  • How to amplify your storytelling through your non-verbal communication - When telling a story, only so much is conveyed by the words themselves; you can achieve so much more when you factor in other aspects of communication

How simple habits can develop, drive and support a learning culture for you and your team.

Join Alex Reynolds as he brings to life the teachings of Charles Duhigg and James Clear, and explains how he used them to create his own learning habit.

At the session, you will discover why we are indeed 'creatures of habit' and how you can use this knowledge to introduce a learning habit for yourself and your team. 

Takeaways

  • Learn how habits are formed and why we favour habits (yes even the bad ones) over anything else
  • Develop a greater understanding of your own habits and how to use this to create new ones
  • The problems with setting goals and aiming too big
  • How to apply the laws of Habit formation to create and maintain a learning habit
I do not have any accessibility needs when it comes to using computers. However, for about two years I was reliant on a wheelchair or crutches to get around. This helped me develop an appreciation of how the smallest thing can impact someone's quality of life.

Accessibility testing is often an afterthought. This is not surprising: unless someone has first-hand experience of a disability, they will not be able to fully understand what design issues could limit someone's ability to use a website.

Before accessibility testing, we should first at least try to understand the various accessibility needs a user might have. We should then attempt to use the application the way a user might use it, such as by using screen readers or checking the contrast settings on a website. Most of the tools a user might use are freely available as Chrome extensions.

In this session, I will demonstrate the various tools available that can be used to support accessibility testing. As well as this, we will also analyse the tools that a user might use to access a website. Using these tools, we will carry out a live exploratory accessibility testing session so that we can understand the challenges a user might face and use this to identify accessibility issues.

Takeaways

  • Develop an understanding of the various challenges people with accessibility needs face when attempting to use a website
  • Learn about the different tools that might be used by someone with accessibility needs so they can access a website
  • Run an exploratory testing session on a website using accessibility tools and tools that might be used by users to identify potential accessibility issues that might be preventing someone from using a website
The ability to think deeply is one of the most valuable skills every tester needs, and yet it is rarely taught in universities or even in workplaces. In today's world, problems are becoming more complex with the addition of new technologies, tools and approaches. To deal with these challenges and remain competitive, we should start to think about thinking: we need a new toolset or framework that helps us face any testing challenge thoughtfully. At its core, it must be a framework that helps with problem-solving and provides a structure for our solution process. For this, learning and understanding how to spot gaps in our thinking process plays a significant role.

As explained by Daniel Kahneman in the book "Thinking, Fast and Slow", our brains have two thinking systems: 'System 1', which is fast and intuitive, and the slower, contemplative 'System 2'. The interaction of the two systems sometimes helps us get things right and sometimes leads us to fail. Understanding the way we use these systems to think helps us make better decisions and solve problems. Connecting all the dots around thinking, I have found some hidden logic that we still need to explore and analyse. This talk will get you thinking about how you naturally think, and show you how to unleash its full potential to become a skilled tester by leveraging that hidden logic and those approaches.

Takeaways

  • Understand the importance and role of different thinking types for testers
  • Improve your ability to think, analyse and interpret using those thinking types
  • Learn how to develop a unique art of thinking (incorporating metacognitive skills) as a tester
  • Learn to spot "gaps" in the thinking process
  • Understand the role of design thinking and empathic thinking in the craft of testing
How is your automation journey going? Are you a beginner or have you already started your journey, but are still struggling? In both cases, this talk is for you! I will share my own experiences and struggles, and you’ll learn how to bring your test automation to the next level.

Many people get into test automation without a profound background in programming or without receiving any proper training. They have some idea about which tool to use, have gathered some basic knowledge, and have managed to create some automated tests with it.
At a certain point, though, you suspect that something is not quite right with your automation. Your code feels messy, and maintaining it is hell: it costs a lot of time and frustrates you.

This talk has you covered. You’ll learn about:
  • How to create a proper strategy for your test automation;
  • How to use object-oriented programming principles in your test automation;
  • How to recognize and eliminate code smells

I will present practical solutions to problems I see a lot of other people facing too; I have encountered them myself. You will walk away with practical advice you can use in your daily work. This will reduce the stress and pain of your work by significantly improving the quality of your automated tests and reducing maintenance effort.

Note: The talk is not meant to cover all topics in depth, but to make the audience aware that certain principles and concepts exist. These will be presented and explained, along with the contexts in which they are useful, so the audience has a starting point for further study.

Disclaimer: This talk is not about any specific tool or framework. The principles described are generic, and I've seen them work in different contexts.

 

Takeaways

  • Learn fundamental principles and strategies that can improve your test automation projects
  • Get guidance on where to start improving your test automation code and receive pointers for further studies to continue your test automation journey with less trouble
  • Learn about ways to make test automation code less messy and easier to maintain
Since joining an airline during the pandemic, the QA teams and I have been on a journey.
We have gone from only being considered valuable once development was complete, with PMs telling us they couldn't afford to involve us earlier, to being seen as advocates of quality with a seat at the table for discussions about any technical change.
The transformation has been tough, but we are now reaping the rewards of all our hard work.

This talk will walk through how we embraced the culture shift during the pandemic and used it as a vehicle for a more collaborative way of working: bringing people together to discuss quality and testing, finding more inventive ways to provide feedback on our products, and changing the way we test for the better.

I'll show how I used my passion for quality and my external network to engage the QA teams across the company, empower and nurture the existing teams, and bring new people in to disrupt the status quo and move us forward, ultimately enabling me to advocate for the teams, raise awareness of testing and quality across the organisation, and transform the way we work.

Did I mention our Test Parties? Find out more in the talk.
 

Takeaways

  • Raise the awareness of the testing teams and the work you do
  • Change the culture by building allies across the business who advocate for you
  • Make the day to day work more enjoyable with collaboration and a sense of fun
I will share how I introduced threat modelling to my team and how it is being used within my company.
In the talk, you will hear the basic theory of threat modelling and also look at how we ran our sessions, how we handle the vulnerabilities we find, and how I convinced people to try it, using my Threat Agents card game.
Finally, I will make the case that threat modelling is not only for everyone on your team, but that as testers we can be ideal people to get involved.

Takeaways

  • A basic understanding of threat modelling
  • An idea of what threat modelling might be like "in real life"
  • The knowledge that it is something they can do and not just for security experts

It's not a TestBash without 99-Second Talks!

The 99-Second Talks are the attendees' stage: an opportunity for you to come on stage and talk for, that's right, 99 seconds.

You can talk about anything: a testing topic you want to share, a personal experience, an idea sparked by all the amazing talks you've just listened to... the stage is yours, for 99 seconds!

Our amazing host, Gwen Diagram, will introduce you on stage and start the clock. As soon as the 99 seconds are up, you'll hear a noise and that's it: time's up!

I'm going to talk about the changes we made to our test reports to change their perspective. During my 7-year career working for startups, charities and fashion retailers, I've seen many different styles of test report!

Some of the benefits we gained from this: we got feedback earlier by inviting users to our sprint demos, and we improved the rollout of new features by including operations data.

By the end of my presentation, you will know how to add user based metrics such as user feedback comments to your test reports. I’ll walk through the process we used to come up with the user based metrics to include in our test reports.

Let me give you an example of what I mean.

What does a test report really tell you?

What do you mean the RAG status is amber?!

The most recent example is from a charity. I naively thought people looked beyond the red, amber and green status; it turns out that if it wasn't red, the report wasn't read by key decision makers.

I soon realised there was no clear understanding of the risk relating to the test cases executed and the bugs found during development. It's not the responsibility of the stakeholders to understand the intricacies of your test plan and testing process.

Often, qualitative metrics are more powerful in relation to risk and feelings of confidence.

How to create user-driven reports

User data makes test reports more relatable for the readers. In my current role at a startup, I ran a workshop to understand from my testers what they think we should report on; this is what they came up with.

The user experience team performs user testing on designs, so we include their feedback in our test reports. The users worked on the requirements, so we included their feedback in our test reports too.

Bugs that weren't going to be fixed before release became modifications to requirements by adopting a zero-bug policy. When the code gets deployed to production, the test report isn't important anymore, so we put the production issues in our test report.

All these changes meant the report was user-centred and risk could be interpreted through the feedback and feelings of the users.

  1. Involve everyone who holds information about the users
  2. Use production data such as live issues, support requests and user feedback surveys to outline how your bug fix impacts the users
  3. Remove testing-related terms, such as bugs and test cases, from your reports

Are you going to add more user-related data to your test report?

Takeaways

  • Why stakeholders don't understand the risks of traditional test reports
  • Where you can get user data from for your reports
  • How you can remove bugs from reports with a zero-bug policy
APIs are an essential part of an increasingly large number of the applications we use daily. APIs enable applications to exchange data and functionality easily and securely. As testers, we want to ensure that our APIs do not break and that they provide the expected functionality. We can automate our APIs to speed up the rate at which our checks are done. When automating APIs, having tests to ensure that your API returns the correct message and status is great; however, do you also test for and automate the negative and edge cases? In this talk, I will show you how to get started with automating APIs and share a checklist of things needed to automate an API. I will show how to automate negative tests and how to check that your APIs handle errors appropriately, follow the specified schema, and don't reveal data they shouldn't or have security gaps. I will also share how to decide which tests you should be automating for your API and how to automate workflows for an API.
 
API automation executes faster than UI automation, and I have found it more straightforward to write. Join me to start creating these quick automated tests for the boundaries of your API using Supertest (a JavaScript API testing framework). The tips that I will share can be applied to any framework you use to automate APIs.
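
To give a flavour of what such a check can look like, here is a minimal sketch of a negative test written with Supertest (paired here with Jest as the test runner). The base URL, endpoint, payload and error shape are hypothetical placeholders, not examples from the talk:

  // Minimal negative-test sketch using Supertest; the URL, endpoint and
  // expected error format are illustrative placeholders only.
  const request = require('supertest');

  const api = request('https://api.example.com'); // hypothetical base URL

  describe('POST /users', () => {
    it('rejects a payload with a missing name', async () => {
      const res = await api
        .post('/users')
        .send({ email: 'test@example.com' }) // "name" deliberately omitted
        .set('Accept', 'application/json');

      expect(res.status).toBe(400);             // a client error, not a 2xx or a 500
      expect(res.body).toHaveProperty('error'); // the consumer is told what went wrong
      expect(JSON.stringify(res.body)).not.toMatch(/stack trace|sql/i); // no internal details leaked
    });
  });

The same shape works for schema and security-focused checks: assert on what the response must and must not contain, not just on the happy-path status code.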

Takeaways

  • Understand APIs
  • Negative scenarios to automate for APIs
  • Understanding and using schemas to validate your APIs
  • Testing APIs in Postman
  • Automate APIs in JavaScript
  • API workflows
  • API Testing Checklist

Have you ever heard of the term Performance Testing?

Do you get confused as to what the differences are among load testing, stress testing and soak testing?

Have you ever been asked to perform client side and server side performance testing but you’re unsure how to get started?

If you’ve answered yes to these questions then this talk is for you! As part of this talk, I will cover the following things:

  • Why do we need to test for performance?
  • The difference between client side and server side performance testing.
  • An overview of what metrics to consider when doing client side performance testing.
  • An overview of what metrics to consider when doing server side performance testing.
  • A quick glimpse of how to measure the performance of your favourite website, using Google Lighthouse for client side performance and k6 for server side performance.

After this talk, you should be equipped with the knowledge and tools to use to get started with performance testing.
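
As a taste of the server-side portion, a first k6 script might look something like the sketch below; the target URL, virtual-user count and threshold are placeholder values rather than recommendations from the talk:

  // Minimal k6 load-test sketch; the URL, load profile and threshold are placeholders.
  import http from 'k6/http';
  import { check, sleep } from 'k6';

  export const options = {
    vus: 10,          // 10 virtual users
    duration: '30s',  // run the scenario for 30 seconds
    thresholds: {
      http_req_duration: ['p(95)<500'], // fail the run if the 95th percentile exceeds 500 ms
    },
  };

  export default function () {
    const res = http.get('https://test.k6.io'); // replace with the site you want to test
    check(res, {
      'status is 200': (r) => r.status === 200,
    });
    sleep(1); // think time between iterations
  }

Saved as, say, loadtest.js, this would typically be run with k6 run loadtest.js, and the end-of-run summary reports the metrics against the threshold.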

Takeaways

  • Understand the difference between client side and server side performance testing
  • Know the different metrics when it comes to performance testing
  • A quick overview of how to run a performance test and analyse the results from a Google Lighthouse report
  • A quick overview of how to use k6 for back-end performance testing
A year and a half ago I took over as QA Lead, and after an external QA health check my first task was to define a QA strategy document. Soon after, my company underwent an agile transformation, making our development teams truly agile. Throughout this process, we learnt a lot about team Ways of Working (WoW), communication and collaboration.

This led us on a journey of throwing a traditional top-down QA strategy document out the window and defining our QA Ways of Working.
I'd love to share our experiences of how we went about breaking down all of our QA processes and building them back up from the ground up, as a team, playing to the strengths of all of our QAs (from juniors to seniors), and how it has led us to make really positive changes to how we plan, execute and test work. We've got to the point now where we have no QA strategy document at my company and instead have a lovely visual mind map of our entire QA WoW.

P.S. We used the QA community to help us define our WoW, and you very kindly featured one of our WoW improvements recently, which helped a lot. Thank you!

Takeaways

  • How to define your own visual QA WoW
  • How building your WoW can help to identify and drive process and testing improvements
  • A whole team approach to QA strategies
  • Using your community and ingenuity to shake things up around traditional processes
This is just a fancy name for “Testers”, right?
Maybe, maybe not. Let’s talk about it and find out!
We’ve been leading, mentoring and coaching quality teams for over a decade.
They’ve even been known to do that work together over the years! During that time, they’ve seen attitudes to testing change and evolve.
One recent change is the increasing amount of “Quality Coach” and “Quality Engineer” roles appearing in companies.
Is this a fancy name for the same kind of “Tester” roles that exist already, or is there something significantly different between them?
We believe there are and will highlight the differences during the conference!

Takeaways

  • Define Quality Coaching and Quality Engineering
  • Compare quality coaching and quality engineering roles with traditional testing roles
  • Contrast how your current approach differs from a quality coaching and quality engineering approach
  • Choose from more options to deal with quality and testing challenges that crop up in your work
“Observability”. A word that is quite popular in tech these days, but it's not always easy to understand what it means in the real world, right? “How is it not monitoring?”

I mean, we’re told that it’s about “unknown unknowns” and that you should be able to answer new questions about a system without having to ship any code. We’re also told that there are three pillars for observability, “logs, metrics and traces”, and that it’s much more than just monitoring. But how does that translate to real-life scenarios in a software tester’s world?

Well, one day my team was struggling with a flaky test in our CI/CD pipeline. The way we were able to unravel the mystery surrounding that test illustrates a few key observability concepts on the availability of data and the friction in accessing it.

In this talk I will tell you the story about that painful flaky test and how it showed us how (the lack of) observability could already be present in our daily lives, without us even realizing it! By the end of this tale, even if you know nothing about observability beforehand, you will understand a bit more. You’ll know some questions you should start asking about your own tests, and have some ideas that will let you find the answers quickly!

Takeaways

  • Learn what “unknown unknowns” actually look like in real life and how they relate to observability
  • Find out how observability can be your greatest ally when dealing with flaky tests
  • Acknowledge that you may have a lot of useful information scattered throughout your systems
  • Understand the importance of having data stored in one place and multiple views of that data
Who writes the acceptance criteria for your stories: product owners, business analysts, or the entire team? If the Given-When-Then scenarios are prewritten and presented to the team, the team's thought process is curtailed. It might also lead to preconceived ideas and notions. It also tips the responsibility towards a single stakeholder, which can prove dangerous. What is the alternative, then?

Discover the Power of Example Mapping!

Example Mapping is a great way to motivate the team to adopt Behaviour Driven Development. The whole practice is about encouraging communication and collaboration between the various stakeholders. It is of prime importance in an agile setup that Product, Dev and Test share the same understanding and have equal partnership in the stories and features delivered. Example Mapping facilitates these conversations and goes hand in hand with the shift-left approach. In a nutshell, it is a great team activity which results in substantial gains and increases in productivity.
We will find out what Example Mapping is and how to do it, and we will also practise it in an interactive workshop.

Takeaways

  • Understand the importance of Example Mapping and how the participants can use it in their teams
  • Tips on convincing the entire team to try this approach
  • Practical implementation with use cases for all kinds of stories whether it is a user story or a non-user story
  • What more could be done with this mapping approach
  • Limitations of Example Mapping and when not to use it

Testing is a superpower; don't keep it all to yourself!

I am a beginner quality coach and a self-confessed testing nerd. I started experimenting with quality coaching after reading the modern testing principles. I really love their ultimate goal, which is to accelerate the team. In particular, they talk about expanding testing capabilities across your team as well as leading a quality culture. I had no previous knowledge and zero experience in this area, but thought: what's the worst that could happen?!

Being the only tester in the department I really liked the idea of getting the developers around me to help out with testing. Not only that but as I freed up time by stepping away from physically testing every single feature, I was able to become more of an advocate and evangelist of testing. I stopped physically testing and started talking about testing. To anyone who would listen!

So if you're asking "How can I make the transition to quality coach?", I'll share how little nudges towards a long-term goal have worked well for me. Some lessons that I've learned so far include:

  • Don't wait for permission! Your job title doesn't need to say quality coach.
  • Find your allies. Locate the people who understand the need for testers.
  • Sell the benefits of this approach to your developers.
  • Make your testing work visible.

If like me, you're looking to experiment with coaching I'll talk about what techniques have worked in my team such as:

  • Talking about testing early. Give developers exposure to thinking about quality and testing before they write a line of code.
  • Asking developers about their testing. Continue the testing conversation during development!
  • Sharing testing knowledge with your developers. Pair testing has benefits for both parties.

I learned that knowledge isn't something you lose the more you share it. In fact, it's quite the opposite. I believe anyone can try this out and you'll discover testing nerds in your team that you didn't know existed!

Anyone can experiment with quality coaching as long as you have the right attitude. I hope to demonstrate that quality coaching isn't an area to be intimidated by, and that if you're truly passionate about helping your team improve, it isn't as hard as you might think.

Takeaways

  • You will see my story of how I introduced the idea of quality coaching in my team as a complete novice
  • Learn about some simple experiments towards coaching that you can start tomorrow
  • How to talk about testing early and engage developers in your mission
  • How to make your testing visible to increase knowledge sharing

It's not a TestBash without 99-Second Talks!

The 99-Second Talks are the attendees' stage: an opportunity for you to come on stage and talk for, that's right, 99 seconds.

You can talk about anything: a testing topic you want to share, a personal experience, an idea sparked by all the amazing talks you've just listened to... the stage is yours, for 99 seconds!

Our amazing host, Leigh Rathbone, will introduce you on stage and start the clock. As soon as the 99 seconds are up, you'll hear a noise and that's it: time's up!

99 Minute Workshops
We're bringing 99-minute workshops to the classroom. These workshops are short, focused and targeted.

More details about this workshop will be added soon.

This all sounds wonderful, chaps, but now what?!

The challenge with Quality Coaching and Engineering roles is that they rely on a lot of interpersonal skills. We’re talking about things like:

  • Active listening
  • Team building
  • Leadership

These are easy things to grasp conceptually but difficult to learn how to do during the day job! Ideally, we’d have really safe, consequence-free environments to practice in. Sadly, most of the time we’re in some kind of high-stakes “live rounds” situation where it feels risky to try these things.
This is where our workshop can come in!

Using the world-famous, safe MoT environment as a backdrop, we'd like to give people a taste of the coaching and facilitation skills that underpin quality engineering work. By the end of the workshop, attendees will be able to:

  • See how their current approach differs from a quality coaching and quality engineering approach
  • Have more options at hand to deal with quality and testing challenges that crop up in their work
  • Experience using coaching skills like active listening and open questions
  • Use techniques like 1-2-4-All to get all the perspectives of the team out into the open

Takeaways

  • Experiment with coaching skills like active listening and open questions
  • Use techniques like 1-2-4-All to get all the perspectives of the team out into the open
  • Gather new perspectives on existing situations that they’re dealing with at work

Everybody likes playing! So I bring to your attention a workshop built around a card game!
During the workshop, participants will design, debate and pitch a test automation strategy using a card game that facilitates collaboration and knowledge sharing in a safe environment, and hopefully gives them the courage to challenge the test automation status quo.

The workshop does not address programming challenges, but it does touch on the design and architectural thinking behind a test automation solution.

Takeaways

  • Test automation solutions
  • How to approach different test automation challenges
  • How to design or kick-off a test automation strategy

More details about this workshop will be added soon.

Did you know that software testers don't just test software, but also test ideas, requirement artefacts, design wireframes, architecture designs, code, processes and services?

Takeaways

  • Investigate ideas, requirement artefacts, designs, and processes as part of the SDLC
  • Discuss the testing feedback loops across each activity
  • Understand what activities are involved within an SDLC

More details about this workshop will be added soon.

More details about this workshop will be added soon.

Experiences and lessons learned from developing software since 1999 resulted in the “Would Heu-risk it?” concept. It all started with a workshop with Lisa Crispin but has since evolved into a card deck, blog posts, articles and a book draft in 2020.

“Would Heu-risk it?” is centered around risk analysis, heuristics and patterns/anti-patterns that affect us in designing, building, testing and running software. They are grouped into three distinct categories: Traps I see testers fall into, Tools I see testers use as super-powers, and Weapons testers could use to focus their work (also known as common weak spots in building software).

In this workshop we will combine classic risk analysis with gamification, using the “Would Heu-risk it?” card deck. Gamifying your classic risk analysis helps us overcome unconscious biases and think laterally as we plan our testing. Playing may also entice non-testers to join the party and learn exploratory testing skills. Tried-and-true techniques combined with a new approach mean better outcomes for our customers!

Takeaways

  • How thinking about the three categories can bring new risks to light
  • Techniques to uncover hidden risks while focusing on value for customers
  • How to uncover blind spots in design, solution and testing
  • How you could use games to come up with fresh testing ideas

More details about this workshop will be added soon.

For the workshop, I would like to mix theory with hands-on exercises to give people the opportunity to perform threat modelling.

The theory will cover an introduction to threat modelling, S.T.R.I.D.E. and how your system can defend against threats.

As groups, attendees will put together a data flow diagram for a hypothetical application. They will also perform threat modelling together and discuss together how to build more secure software.

 

Ideally, the space will be set up with several tables of 6 or 7 seats each, and either a whiteboard/flipchart or large sheets of paper on each table, with suitable markers and pens.

I will then provide a deck of Threat Agent cards per table.

Takeaways

  • A practical understanding of what threat modelling is
  • The belief that they can participate in - and even lead - threat modelling in their team going forwards
  • A stronger appreciation of cybersecurity and its importance

More details about this workshop will be added soon.

Like me, you will have experienced the difficulty of putting together quality metrics which are understood by the business and represent user risk and impact. In troubleshooting this with my team, we came up with some quality metrics which are user-centric and reflect user experience.

Are you struggling with feedback like:

  • No one reads the test reports
  • Reports are too technical and not understood by business stakeholders
  • Reports don't reflect user value
  • Quality numbers don't represent user feedback


In this session, I aim to demonstrate how you can gather qualitative and quantitative user metrics which can be used within your test reports. I will highlight the difference between qualitative and quantitative data as well as which sources you can use to gather valuable user information.

By the end of the session, you will be able to come up with your own user centred quality metrics to take back to your team.

Takeaways

  • Understand what user metrics are in relation to quality reporting
  • Describe the difference between Qualitative & Quantitative quality metrics
  • Write your own user quality metrics for your team

Learn how to build risk mitigating strategies in an interactive, visual and fun way.

Takeaways

  • Understand and practice moderating a RiskStorming session
  • Facilitate prioritising quality aspects as a team
  • Identify and define risks
  • Explore different practices to prevent or mitigate risks, or to plan contingencies

More details about this workshop will be added soon.

Have you ever heard of the term Performance Testing? Do you get confused as to what the differences are among load testing, stress testing and soak testing? Have you ever been asked to perform client side and server side performance testing but you’re unsure how to get started? If you’ve answered yes to these questions then this workshop is for you!

This workshop will be a combination of both theory and hands-on learning and will cover the following things:
  • Why do we need to test for performance?
  • The difference between client side and server side performance testing.
  • An overview of what metrics to consider when doing client side performance testing.
  • An overview of what metrics to consider when doing server side performance testing.
  • Hands-on activity: let's measure the performance of your favourite website using Google Lighthouse for client side performance and k6 for server side performance.

After this workshop, you should be equipped with the knowledge and tools to use to get started with performance testing.
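
For the client-side half of the hands-on activity, a programmatic Lighthouse run can look roughly like the sketch below. The target URL is a placeholder, and this is just one reasonable way to wire Lighthouse and chrome-launcher together, not the workshop's exact setup:

  // Sketch of a client-side performance check with Lighthouse; the URL is a placeholder.
  import lighthouse from 'lighthouse';
  import * as chromeLauncher from 'chrome-launcher';

  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

  const result = await lighthouse('https://example.com', {
    port: chrome.port,               // reuse the Chrome instance we just launched
    onlyCategories: ['performance'], // only gather the performance category
    output: 'json',
  });

  // Lighthouse scores each category from 0 to 1
  console.log('Performance score:', result.lhr.categories.performance.score);

  await chrome.kill();

The same result object also exposes the individual metrics (First Contentful Paint, Largest Contentful Paint and so on) under result.lhr.audits, which is where the client-side metrics mentioned above are reported.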

Takeaways

  • Understand the difference between client side and server side performance testing
  • Know the different metrics when it comes to performance testing
  • Learn how to run a performance test and analyse the results from a Google Lighthouse report
  • Learn the technical skills to use k6 so you can write your first load testing script

More details on this workshop will be added soon.

Activities
Activities are hands-on stands with challenges for you to complete.

TestBash

The Most Awesome, Friendly and Affordable Global Software Testing Conferences

Latest Topics and Trends

All our talks are anonymously reviewed by the community, meaning we bring you the best talks from a diverse range of speakers.

Affordable

We work hard to make TestBash affordable for everyone, whether your company is paying or you're self-funding.

Create Your Own TestBash Experience

TestBash offers multiple ways to learn and connect with others. And the new two-day format helps you easily create your own experience: pick and choose from talks, 99-minute workshops, activity stands and themed conversations.

Recorded

We record all the talks at our TestBash software testing conferences and make them available to watch on-demand for Pro Members and attendees.

Pro Discount

Pro Members get exclusive discounts to our TestBash conferences.

Frequently Asked Questions

Can I attend for one day only?

No. TestBash UK has been designed for two full days, so if you decide to attend only one, you'll be missing out. We will not be selling tickets for one day only.

Want to Sponsor TestBash UK 2022?

We have packages available for different types of engagement and budgets. Download the brochure and check all the information today.

Will there be any COVID-19 related measures?

For all of our events, we follow local government guidelines. On top of what local governments advise, venues may also have their own extra measures in place.

It is your responsibility as an attendee to check these and make sure you have everything you need to attend the event. MoT will not be responsible for access being denied by the venue.

For TestBash UK 2022, here's what the government and the venue are requesting: