Test.bash(); 2018

Test.bash(); was a TestBash focused specifically on technical testing and automation. It was our first time doing a themed TestBash, and it was a huge success!

All the talks were recorded and have been made available in this series. Some are free to watch and others require Pro Membership, so get stuck in!

Join the discussion about TestBash Manchester over at The Club.

We would like to thank our TestBash Manchester 2018 event sponsors, Mailosaur, Applitools, CDL Software, LateRooms.com, Bomgar and Equal Experts, for supporting this software testing conference and the software testing community.

If you would like to attend TestBash or any of our events then please check our latest schedule in our events section.

Watch all the talks from the event:
Friday, 28th September 2018

This is a case study of how the Mobile Platform team's test automation has evolved over the last five years: from rounds of manual regression testing, to automated UI testing, and now to isolated code-level tests, plus what's next for the team.

I’ll go into what worked for the team and what didn’t, and how we overcame the problems we faced.

This talk will also detail what we plan to do for the future of testing within the Mobile Platform team.

Key takeaways:

  • Testing: What has and hasn’t worked with our testing, from manual to automated testing and everything in between
  • Pairing: How we pair testers and developers to collaboratively write code-level tests
  • Team structure: How our teams are organised and why
  • Risk vs value: How we balance the risk of not testing against the value of testing
  • Feedback: What new feedback mechanisms we are looking at to understand the quality of our products, and how we feed this back into development

Jitesh Gosai
Over the course of the last 15 years as a test professional I've strived to help the teams I've worked with be the best they can. I've seen first-hand what does and doesn't work in improving product quality across the software industry. I now want to take these experiences and help others make their teams the best they can be by improving quality through testability.

The journey your code change goes on, from idea to benefiting the end user, depends on a lot of things that are technology dependent, like investment in automation and the application tech stack, but also business dependent, like organisational structure and risk profile. Technologists around the world point to their deployment pipelines to allay any fears their business stakeholders may have about the risks of changing the software. The thing is, just having an automated pipeline does not guarantee full confidence in releasing changes to your software, in large part because not enough teams are looking at the implementation and architecture of their pipeline.

This talk will look at the complexities that arise from different software architectures. Do you have a monolith? Or maybe your application is deployed as a monolith but is actually spread across multiple repositories? How does an independently deployable service architecture impact your delivery pipeline? For all the job titles, working groups, and decision making that go into software architecture, this talk will explore the implications and support system required for your deployment pipeline, balancing your contextual needs, the pros and cons of different choices, and good practices in the wider industry.

Three main takeaways will be:

  1. Examples of how to apply the same architectural awareness and evolution used for software to delivery pipelines
  2. Commonly used patterns for building confidence in software that interacts with other systems, such as third-party applications and custom libraries
  3. How to apply clean code methodologies and practices to the creation of delivery pipelines

Abby Bangser

Abby Bangser is a software tester with a keen interest in working on products where fellow engineers are the users. Abby brings the techniques of analysing and testing customer facing products to tools like delivery pipelines and logging so as to generate clearer feedback and greater value. Currently Abby is a Test Engineer on the Platform Engineering team at MOO which supports the shared infrastructure and tooling needs of the organisation.

Outside of work Abby is active in the community by co-leading Speak Easy which mentors new and diverse speakers, co-hosting the London free meetup Software Testing Clinic which brings together mentors and new joiners to the software testing industry, and co-organising European Testing Conference 2019. You can get in touch easiest on Twitter at @a_bangser.

We are often reminded by those experienced in writing test automation that code is code. The sentiment being conveyed is that test code should be written with the same care and rigor that production code is written with.

However, many people who write test code may not have experience writing production code, so it’s not exactly clear what is meant by this sentiment. And even those who write production code find that there are unique design patterns and code smells specific to test code that they may not be aware of.

Given a smelly test automation codebase littered with bad coding practices, we will walk through each of the smells, discuss why it is considered a violation, and demonstrate a cleaner approach.

Key takeaways include how to:

  • Identify code smells within test code
  • Understand the reasons why an approach is considered problematic
  • Implement clean coding practices within test automation
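To make the idea concrete (this example is invented here, not taken from the talk), here is one classic smell, magic values with no stated intent, alongside a cleaner version:

```python
def apply_discount(price, percent):
    """Toy domain function, included only so the tests have something to check."""
    return round(price * (1 - percent / 100), 2)


# Smelly: magic numbers force the reader to guess what 200.0 and 25 mean,
# and the test name says nothing about the behaviour under test.
def test_discount():
    assert apply_discount(200.0, 25) == 150.0


# Cleaner: named values and a descriptive test name state the intent.
STANDARD_PRICE = 200.0
LOYALTY_DISCOUNT_PERCENT = 25


def test_loyalty_discount_reduces_price_by_a_quarter():
    discounted = apply_discount(STANDARD_PRICE, LOYALTY_DISCOUNT_PERCENT)
    assert discounted == 150.0


# Both tests pass; only the second tells the next reader *why* it should.
test_discount()
test_loyalty_discount_reduces_price_by_a_quarter()
```

The point is not the arithmetic but the reading experience: a failing cleaner test explains itself, while the smelly one sends you back to the production code.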

Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.

With testers becoming embedded in development teams and those teams adopting practices such as DevOps, Continuous Delivery and Lean Agile, the need to create tools to assist testing becomes ever more important. Anything you can do to speed up your testing and build a greater understanding of code, architecture and systems can be very beneficial. While developers are sometimes better placed to help build tools, they are not always particularly motivated to learn the technologies that are most beneficial to testing. It's therefore very handy to build your own skills in using more technical tools and coding, so you can build your own tools or bend existing ones to your needs.

In this talk I hope to share my experiences coming into testing as a competent programmer, the challenging testing situations I have faced and how I've created or used tools to assist my testing. These range from the common tools such as Postman, Selenium, browser dev tools and server logs to the more bespoke or specific examples such as data generators for message queues, automating SIP phone calls and complex data queries against tech such as Elasticsearch. From my experiences teaching and mentoring, I also hope to share my observations of the challenges of learning programming, the common stumbling blocks and tips and tricks for getting started.


Attendees will leave with:

  • Plenty of ideas for tools they could create or aspects of testing they could automate in future
  • A feeling that programming is something they can learn and isn't as scary as it may seem
  • Encouragement that they absolutely have a place in a very technical environment and there are plenty of ways they can be useful and adapt
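As one small illustration of the sort of bespoke tool the talk mentions (the "order" message shape below is invented; the talk's real generators fed message queues and telephony systems), a tiny seeded test-data generator might look like this:

```python
import json
import random
import string


def random_order_message(rng):
    """Generate one plausible 'order' payload to push onto a message queue."""
    order_id = "".join(rng.choices(string.ascii_uppercase + string.digits, k=8))
    return {
        "orderId": order_id,
        "quantity": rng.randint(1, 100),
        "priority": rng.choice(["low", "normal", "high"]),
    }


def generate_messages(count, seed=None):
    """Return `count` JSON-encoded messages; a fixed seed makes a run repeatable."""
    rng = random.Random(seed)
    return [json.dumps(random_order_message(rng)) for _ in range(count)]


for msg in generate_messages(3, seed=42):
    print(msg)
```

Seeding the generator is the important design choice: a bug found with `seed=42` can be reproduced exactly, which fixed hand-written test data cannot offer at volume.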

Matthew Bretten

Matthew has been testing software for seven years, starting out as a video games tester; he is currently a Test Team Lead. Having graduated in Computer Games Technology, he originally wanted to become a developer but quickly discovered a deep passion for testing. His career has followed the trend of the software industry, going from testing a long distance away from developers and code to pairing with developers and helping them test as they write code. Along the way he has gained a great variety of experience testing telephony exchanges, analytics systems, websites, video games (including motion controls, 3D TVs and augmented reality) and mobile apps.

Through this background in computer science and his experience as a tester, Matthew is keen to help break down technical subjects and jargon for testers and expand their arsenal of test techniques!

You’ve started writing unit tests for your applications but aren't quite sure what mocks and spies are about? You sometimes run into trouble because you have so many dependencies in your tests? You don’t know how to test your code that calls an API? Well, this session could help you out! Find out how test doubles come in handy when you’re test driving your code.

In this talk you’ll learn about the different types of test doubles and their purpose. I’ll demonstrate how they can help you make your test driven life a lot easier. There will be code examples for rolling your own test doubles and also for using doubles provided by one of the popular testing frameworks.

Attendees will learn how test doubles can simplify their tests and make the untestable testable. They will also learn to distinguish between the different types of test doubles, which ones to use in which situation and why what we colloquially call a “mock” isn’t always a mock.
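As a taster of the distinctions the talk draws (the currency-conversion code here is a made-up stand-in, and `unittest.mock` is just one popular Python doubles library), a stub supplies canned answers while a mock also verifies the interaction:

```python
from unittest.mock import Mock


# A hypothetical unit under test: it depends on a rate-fetching collaborator
# that would normally call a remote API.
def price_in_gbp(amount_usd, rate_source):
    return round(amount_usd * rate_source.usd_to_gbp(), 2)


# Hand-rolled stub: canned data, no framework needed for simple cases.
class FixedRateSource:
    def usd_to_gbp(self):
        return 0.8


assert price_in_gbp(25.0, FixedRateSource()) == 20.0

# Framework double: same canned answer, but afterwards we can also verify
# *how* the collaborator was used, which is what makes it a mock, not a stub.
mock_source = Mock()
mock_source.usd_to_gbp.return_value = 0.8
assert price_in_gbp(10.0, mock_source) == 8.0
mock_source.usd_to_gbp.assert_called_once()
```

The stub keeps the test independent of a real API; the interaction check is what people usually mean when they say "mock", even though the word gets applied to all doubles colloquially.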

Rabea Gleissner

Rabea works as a software developer at 8th Light, a software consulting company that follows the Software Craftsmanship principles. She has worked as a software developer for almost four years now, after changing careers from digital marketing - one of the best decisions she has ever made. Rabea is passionate about encouraging women to join the tech industry and is a voluntary instructor at Code First:Girls.

Maybe you’ve been testing the same application for a while, and your rate of finding new bugs has slowed. Or you’re trying to find more ways to figure out what your devs are doing day to day. You have the tools at your disposal, you just need to dig in!

In this talk, Hilary Weaver-Robb shares tools and techniques you can use to take your testing to the next level. See everything the developers are changing, and learn to find the most vulnerable parts of the code. These tools and techniques can help you focus your testing, and track down those pesky bugs!


Key takeaways:

  • Tools to do static analysis on the code
  • Using those tools to find potential bugs
  • Using commit logs to figure out what's being changed
  • That it's helpful to dig into the code of the application under test
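As a sketch of the commit-log idea (the file names below are invented; in a real repository you would capture `git log --name-only --pretty=format:` output with `subprocess`), counting how often each file changes surfaces the churn hotspots worth extra testing:

```python
from collections import Counter

# Sample output of `git log --name-only --pretty=format:`: one file path per
# line, commits separated by blank lines. Hard-coded here so the sketch runs
# anywhere; swap in real subprocess output for actual use.
GIT_LOG = """src/payments/processor.py
src/payments/validator.py

src/payments/processor.py

src/ui/checkout.js
src/payments/processor.py
"""


def change_hotspots(log_text, top=3):
    """Count how often each file appears in the log; frequent churn hints at risk."""
    files = [line.strip() for line in log_text.splitlines() if line.strip()]
    return Counter(files).most_common(top)


for path, churn in change_hotspots(GIT_LOG):
    print(f"{churn:3d}  {path}")
```

Files that change in almost every commit are where regressions tend to cluster, so a ranking like this is a cheap way to focus exploratory testing.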

Hilary Weaver-Robb
Hilary Weaver-Robb is a software quality architect at Detroit-based Quicken Loans. She is a mentor to her fellow testers, makes friends with developers, and helps teams level-up their quality processes, tools, and techniques. Hilary has always been passionate about improving the relationships between developers and testers, and evangelizes software testing as a rewarding, viable career. She runs the Motor City Software Testers user group, working to build a community of quality advocates. Hilary tweets (a lot) as @g33klady, and you can find tweet-by-tweet recaps of conferences she’s attended, as well as her thoughts and experiences in the testing world, at g33klady.com.

When it comes to iOS app development, Swift is becoming the top choice among iOS developers because of the speed, type safety and simplicity of the Swift programming language. Apple also launched the Xcode UI testing framework, a.k.a. XCUITest, to test apps written in Swift. XCUITest is an extension of the XCTest framework, which is Apple's unit, network and performance testing framework. Using XCUITest, we can write UI tests in Swift and put the UI test code in the same repository as the application code, which makes collaboration with developers and CI/CD practices much smoother. Traditional tools like Appium and Calabash don't fit well into native app development with Swift. Although XCUITest has a recorder to get started with UI testing, we need to apply some patterns to make XCUITest more scalable. Unlike web testing, with its Page Object and Screenplay patterns, we need to organise XCUITests using a test design pattern suited to Swift to make them scalable.

Swift is designed to be a protocol-oriented programming language and has some awesome features like protocols, extensions and enumerations. Patterns like Page Objects or Screenplay may work to some extent, but they don't fit the protocol-oriented style of Swift. We can use a protocol-oriented approach to architect XCUITests so that they scale easily within iOS CI/CD pipelines. In this talk, we will discuss:

  • A protocol-oriented architecture for XCUITest, using Swift features like protocols, extensions and enumerations to write XCUITests
  • How to organise XCUIElements using Swift extensions for better reuse
  • How to architect XCUITests for both iPhones and iPads without code duplication
  • Setting up XCUITests within iOS CI/CD pipelines
  • Tips for writing CI-friendly XCUITests, e.g. stubs, accessibility identifiers, real-device tests, Xcode scheme strategy for UI tests, etc.
Shashikant Jagtap

Shashikant is passionate about DevOps, CI/CD and test automation practices for iOS apps. He uses native Apple developer tools to automate iOS release pipelines with solid test automation. Currently his toolbox includes Swift, XCTest, Xcode Server, Fastlane and multiple native Apple developer tools. He blogs regularly on iOS DevOps and test automation on his personal blog (XCBlog), Medium and DZone.

Like many power tools, Selenium has a host of features that are designed to be used one way and end up being used another. In this technically focused talk, we'll cover the intended use and the observed abuse of some of these features, from the proper way to wait for, find, and interact with elements, to how to make your test runs fast and stable.


Key takeaways:

  • A better understanding of how Selenium works
  • Knowledge of how the waiting strategies interact
  • How to use the Actions APIs
  • Extending Selenium to better support your testing
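Selenium's real explicit waits live in `WebDriverWait`; to show the waiting strategy itself without needing a browser, here is a framework-free sketch of the same poll-until-condition idea (all names below are invented for illustration):

```python
import time


def wait_until(condition, timeout=5.0, poll_frequency=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors the shape of Selenium's WebDriverWait(driver, timeout).until(...):
    an explicit wait retries a condition rather than sleeping a fixed amount.
    """
    end = time.monotonic() + timeout
    while True:
        value = condition()
        if value:
            return value  # the condition's truthy result, not just True
        if time.monotonic() > end:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll_frequency)


# Usage: a simulated "element" that only appears on the third poll.
attempts = {"count": 0}


def element_present():
    attempts["count"] += 1
    return "<button>" if attempts["count"] >= 3 else None


result = wait_until(element_present, timeout=2.0, poll_frequency=0.05)
assert result == "<button>"
```

This is why explicit waits beat fixed sleeps: the happy path returns as soon as the condition holds, and only the failure case pays the full timeout.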

Simon Stewart
Simon Stewart is the creator of WebDriver, the Open Source browser automation tool, and is the Selenium project lead. He previously led the build tool team at Facebook, developing the graph-based build tool Buck, and he's a strong advocate of monorepos. Before joining Facebook, he spent almost five years at Google, and three at ThoughtWorks. He’s seen a lot of code and is undeniably hairy.