Test.bash(); 2019

The second Test.bash();, our conference focused on technical testing and automation, took place in Manchester on 4th October 2019. It was bigger and better than last year, with more people, more things to do and nine brilliant talks.

All the awesome talks were recorded and have been made available in this series; some are free to watch, while others require Pro Membership... so get stuck in!

Join the discussion about TestBash Manchester over at The Club.

We would like to thank our TestBash Manchester 2019 event sponsors, Lloyds Banking Group, Applitools, Equal Experts, Scott Logic and Capital One, for supporting this software testing conference and the software testing community.

If you would like to attend TestBash or any of our events then please check our latest schedule in our events section.


Schedule

Monday, 30th September 2019

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view of how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore the tests: why those tests exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face-to-face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses, attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important, and the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reversing the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company’s usage of automation with that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many of the application’s interfaces.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to those for building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the difference to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing, creating some new tools for the test app and discussing the potential for tools in the attendees’ companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; well, the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AiT strategy

What You Will Need To Bring

Please bring a laptop running OS X, Linux or Windows, with all the prerequisites installed; the prerequisites will be sent to you in advance.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation lies in risk identification, strategy and test design, and that you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face-to-face class. Full support will then be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based, however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory to apply to any product/language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the content of the class.

Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, ranging from broadcast and digital to finance and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years’ testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.

Friday, 4th October 2019

Consumer Driven Contracts is a testing paradigm that lets API consumers communicate to API providers how they are using their services. This talk discusses software testing, how and when to use Consumer Driven Contracts, and how Consumer Driven Contracts can make developers more confident. It also includes live coding to show how to implement Consumer Driven Contracts using the Pact framework.

To increase the velocity and reduce the cost of microservices development, it is key to be able to build and deploy new versions with confidence that you won't break any dependencies. Microservices are easy to build and run, but they quickly become a tangled web of dependencies that slows down development and results in broken dependencies. Organisations that transition from a traditional monolithic design to a microservice architecture will soon realise that it is hard to keep track of all dependencies. Consumer Driven Contracts is a testing paradigm that helps developers keep control of all the dependencies in a distributed system; this talk will explain how to use it and show how in a live demo.
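To make the idea concrete, here is a minimal sketch of a consumer-side contract test, assuming the JavaScript Pact library (@pact-foundation/pact, v9-style API) with Mocha and Node 18+; the service names, port and endpoint are invented for illustration, not taken from the talk.

```js
// Minimal consumer-driven contract sketch with Pact (names are hypothetical).
const path = require('path');
const assert = require('assert');
const { Pact, Matchers } = require('@pact-foundation/pact');

// The mock provider records the interactions into a pact file that the
// real provider can later verify in its own build.
const provider = new Pact({
  consumer: 'WebShop',                       // hypothetical consumer
  provider: 'OrderService',                  // hypothetical provider
  port: 1234,
  dir: path.resolve(process.cwd(), 'pacts'),
});

describe('OrderService contract', () => {
  before(() => provider.setup());
  after(() => provider.finalize());          // writes the pact file

  it('returns an order by id', async () => {
    await provider.addInteraction({
      state: 'order 42 exists',
      uponReceiving: 'a request for order 42',
      withRequest: { method: 'GET', path: '/orders/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 42, status: Matchers.like('SHIPPED') },
      },
    });

    // Exercise the consumer code against the mock provider.
    const res = await fetch('http://localhost:1234/orders/42');
    assert.strictEqual(res.status, 200);

    await provider.verify();                 // all expected interactions occurred
  });
});
```

The generated pact file is then verified against the real provider, which is what enables the continuous-deployment workflow mentioned in the takeaways below.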

Takeaways

You will learn the following:

  • Why we want to create consumer contracts
  • How to create consumer contracts using Pact
  • How to use this to implement continuous deployment using consumer contract verification
Henrik Stene
Henrik is a manager and consultant with the Nordic consultancy firm Knowit, and is currently assigned to help build a new sales and ticketing solution for rail operators in Norway. He is passionate about exploring microservice technology and is pondering how to solve its biggest flaws, like the "Integration Test Hell".

In the last few months, while helping people in various automation communities, I've seen many of them overestimate the "ease" of using Appium with native apps. Most seemed to think they could use their Selenium and HTML knowledge to automate their native app, but in almost all cases they found out they had made the wrong assumption.

In this talk I'll discuss some key differences between using Selenium and Appium when automating a hybrid and/or native app. We'll dive into some of the most commonly made assumptions by exploring the differences with a native app. I'll give examples and tips on how to deal with those differences and how to avoid these assumptions in the future.
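As a hedged illustration of one such difference (a sketch, not code from the talk): a native app has no DOM, so web-style CSS selectors don't apply and the locator strategy has to change. The snippet below uses WebdriverIO selector syntax, assumes an already configured Appium session, and all element names are invented.

```js
// WebdriverIO snippet contrasting web and native locator strategies.
describe('checkout button', () => {
  it('is located differently depending on the context', async () => {
    // Web context: CSS selectors work because a DOM exists.
    const webButton = await $('button.checkout');

    // Native context: no DOM, only a UI hierarchy. A cross-platform
    // accessibility id (iOS accessibilityIdentifier / Android
    // content-description) is the preferred strategy:
    const nativeButton = await $('~checkout-button');

    // Platform-specific fallbacks when no accessibility id is set:
    const iosButton = await $(
      '-ios predicate string:type == "XCUIElementTypeButton" AND name == "Checkout"');
    const androidButton = await $(
      'android=new UiSelector().description("Checkout")');

    await nativeButton.click();
  });
});
```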

Takeaways

Gather more knowledge about the differences between web and native apps by

  • exploring the differences between the DOM for web apps and the UI hierarchy for native apps (iOS and Android)
  • determining the best locator strategy for automating native apps
  • discussing some useful approaches, for example:
    • cross-platform gestures
    • validating texts
    • and much more
Wim Selles
Wim Selles is a Solutions Architect for Sauce Labs based in the Netherlands. During the day, he assists customers with solving automation challenges in their organisations. By night, he practises his passion for front-end test automation with JavaScript. He likes to create his own Node.js modules to help and support automation engineers and is also a contributor to multiple open source projects that involve testing, such as WebdriverIO, Protractor, ng-apimock and many more. Wim also has extensive experience using Appium for automating hybrid and (React) Native apps. He enjoys sharing his automation experience as a speaker at conferences like AppiumConf in London and SeleniumConf India, on his blog, and during meetups and webinars.
Record and playback features are a really useful tool for automated test development. They allow the code required to run automated tests to be generated quickly. However, there is an unfortunate misconception that they are only used by those with poor programming skills.
 
I do not believe this is the case, as even those with excellent programming skills may choose to use record and playback as an initial code generator. 
 
I also believe that some may feel excluded from test automation because of their lack of programming skills. Record and playback features are an excellent way to introduce people to automated test development. 
 
Record and playback features do have their limits. There are some actions that cannot be recorded and so need to be coded instead. Tests made up entirely of recordings can be awkward to maintain and have difficulty recovering from failures. Therefore, any code generated from recordings needs to be changed to improve the maintainability and robustness of the tests. In this talk I will demonstrate how recordings, and the code generated from them, can be adapted when developing automated test cases.
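As a hedged sketch of the kind of adaptation the talk describes (the recorded lines, selectors and page are invented for illustration, using the selenium-webdriver Node library): a raw recording replays blind actions, while the adapted version adds explicit waits and a named helper.

```js
// Raw export from a hypothetical record-and-playback session:
//   await driver.findElement(By.id('name')).sendKeys('Louise');
//   await driver.findElement(By.css('.btn-primary')).click();
//
// Adapted version: explicit waits make it robust against slow pages,
// and a named helper makes the intent readable and reusable.
const { Builder, By, until } = require('selenium-webdriver');

async function submitNameForm(driver, name) {
  // Wait for each element instead of assuming the page is ready.
  const nameField = await driver.wait(until.elementLocated(By.id('name')), 5000);
  await nameField.sendKeys(name);
  const submit = await driver.wait(until.elementLocated(By.css('.btn-primary')), 5000);
  await submit.click();
}

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/form'); // hypothetical page
    await submitNameForm(driver, 'Louise');
  } finally {
    await driver.quit();
  }
})();
```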

Takeaways

  • The benefits of record and playback for mixed skill test teams
  • The benefits of record and playback as an initial code generator
  • The limitations of record and playback features and why they should not be relied upon as the sole method for developing automated tests. 
  • How to adapt code generated using record and playback to develop automated test cases that are useful and more maintainable. 
Louise Gibbs
Louise recently started work as a Senior QA Analyst at MandM Direct, an online sportswear retailer. Before this, she worked at Malvern Panalytical, a company that develops scientific instruments for a variety of industries, most notably pharmaceuticals. She was involved in testing the software used to carry out automated particle imaging and Raman spectroscopy to identify particles in a mixed sample. Louise graduated from Bangor University with a degree in Computer Science for Business. Her first job after university was as a software tester for Loyalty Logistix, a company that produced web, mobile and desktop applications allowing members of the automotive industry to run loyalty schemes for their customers.
Removing WebDriver from the picture is a process somewhat similar to peeling an onion. You keep revealing layers of responsibilities, integrations that WebDriver tests cover indirectly, which need to be replaced by specialised tests. Looking at the onion as a whole, and not as a collection of layers bound to each other, is easy at first, and WebDriver tests help keep us blind to all the integrations taking place between the web browser, the frontend and the backend of our applications. But over the long term this costs us a lot: we pay in maintenance cost, in execution time and in troubleshooting the problems spotted by such tests.

Separation of concerns has always been a problem in frontend development, but modern frameworks are proving to get better at it. With the new wave of frontend frameworks, can we replace WebDriver tests with something else? I'm going to present an automated test strategy that does exactly that: it replaces WebDriver tests with specialised tests covering every single concern, starting with testing backend and frontend communication using contract tests and mocking, and finishing with the integration between web browser and frontend, covered by component tests implemented with Cypress running against lightweight Storybook stories.
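To illustrate the frontend side of such a strategy, here is a hedged sketch of a Cypress test that exercises the browser/frontend integration against a stubbed backend, so no WebDriver session or real server is needed; the route, page and selectors are invented for illustration.

```js
// Cypress test covering the frontend concern with a stubbed API response.
describe('order list', () => {
  it('renders orders returned by the API', () => {
    // Stub the backend: contract tests elsewhere in the pipeline
    // guarantee the real service actually responds in this shape.
    cy.intercept('GET', '/api/orders', {
      statusCode: 200,
      body: [{ id: 42, status: 'SHIPPED' }],
    }).as('getOrders');

    cy.visit('/orders');            // hypothetical page under test
    cy.wait('@getOrders');

    cy.get('[data-test=order-row]')
      .should('have.length', 1)
      .and('contain', 'SHIPPED');
  });
});
```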

Takeaways

  • Learn what your usual WebDriver test covers
  • How you can replace it with Cypress: contract tests, frontend DOM tests and frontend unit tests
  • Why you may still need WebDriver, but not in your pipeline
Bart Szulc
Bart Szulc is a tester at heart; one could say, born to test. He has been keeping his hands dirty with automation and scripting since the start of his professional career, designing strategies and delivering frameworks and test environments for web and mobile applications. He is actively involved in local testing communities and presents at the most popular testing and development conferences in Poland and Europe, helping developers become better explorers and teaching software development and testing. In love with big data and statistical analysis, he strongly believes in unicorns, and is currently on trial to become one as a full-cycle engineer.

Martin Fowler said he thought this was the best book to come out in 2018*, which is high praise indeed since he also published his own book that year! The authors are Nicole Forsgren, Jez Humble and Gene Kim, and the book is about the research they've spent several years on, investigating what works in software, and in DevOps in particular. I think what they've discovered is really important and deserves a wide audience.

They've identified a causal relationship between business success and the technical practices that software teams use in their daily work. Their analysis is based on thousands of data points from organizations around the world. According to the research, DevOps means (amongst other things) a healthy information-sharing culture, lots of automation, and that employees are less likely to suffer burnout.

In this talk, you will learn some of the main conclusions of the research and in particular what it means for testers.

* https://martinfowler.com/articles/agile-aus-2018.html
Emily Bache
Emily Bache is a Technical Agile Coach with Praqma. She helps teams to improve their coding and testing skills, including Test-Driven Development. Emily lives in Göteborg, Sweden, but is originally from the UK. She is the author of "The Coding Dojo Handbook" and often speaks at international conferences.

Quite often we struggle to test all scenarios due to the limitations of test data, especially when apps rely on third-party services and testing in different territories is required. Depending on the testability of your product, it can be hard to check all corner cases if the production back-end must be used and cannot be tricked. With a proxy, you can change what will be displayed.

Charles Proxy is a man-in-the-middle proxy that is loaded with features. Charles not only allows us to learn more about why our app functions the way it does, but also opens our eyes to an entirely new array of testing possibilities to explore. Charles Proxy can help identify the cause of bugs relating to the network calls your app makes and the responses it receives!

Takeaways

  • How to use Charles on iOS and Android devices.
  • Observing and modifying HTTP Requests and Responses
  • How to add a breakpoint and when not to use breakpoints
  • Rewriting using map local and remote
  • Network throttling
  • Charles for iOS, enabling you to capture traffic directly on an iOS device.
Suman Bala

Suman Bala is a Test Lead at Sky. She strives to mature testing practice as a key enabler of business change benefits and is a strong believer in test automation. In the past 12 years, she has designed and developed automation frameworks from scratch for a variety of products, from a middleware graphics library to e-commerce and mobile apps. She is a quality evangelist who is passionate about continuously adding value through leadership, problem solving and encouraging efficiency. She is proud of how people’s perspective on testing has changed throughout her career.


Testers work best being creative and inquisitive. We’re creatures who thrive on the new and on the question “What if?”. But setting things up to see “what if”, and examining them to see what has actually happened, is often long and tedious work. It doesn’t have to be.

Many of us have access to the Bash terminal emulator and the suite of Unix commands it offers. Whether it’s through the Windows Subsystem for Linux, natively on macOS and desktop Linux, or on Linux machines in the cloud, many testers have access to a powerful, flexible and free suite of tools that they might not know about. This talk will explain a few of the more common and useful Unix tools, like ls, grep and xargs, and show how you can use them in concert to automate find-and-replace across gigabytes of files, or to extract and log only the relevant error messages from an application’s output in real time.

We’ll explain how to chain commands together using the mythical pipe operator, what you can do with punctuation to make your life easier and give you a taste of the scope of what’s possible with Unix commands.
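As a flavour of what that chaining looks like, here is a hedged sketch; the script name, pattern and paths are invented for illustration, and the in-place sed flag assumes GNU sed.

```bash
# Pipe operator: each command's output becomes the next command's input.
# Show only ERROR lines from an app's live output, on screen AND in a log:
./run-app.sh | grep --line-buffered 'ERROR' | tee -a errors.log

# Find-and-replace across many files: grep -rl lists matching files,
# xargs hands each one to sed for the in-place substitution:
grep -rl 'old-hostname' ./configs | xargs sed -i 's/old-hostname/new-hostname/g'

# Help is built in: the manual page, or a quick summary via --help:
man grep
grep --help
```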

Takeaways

  • Learn how to use curl, grep, ls, cat, xargs, echo and tee.
  • How to use the pipe operator to pass data between these tools.
  • How to get help via man pages and the -h argument.
  • How to use punctuation to expand, recover and augment your command chains
  • What the next steps are: Bash scripting and execution.
Dominic Kua
A software tester, gardener, cook, father, gamer, general reader and scientist from Manchester, living in the East of England. Python enthusiast.

Hardly a week goes by without a report of some demonstration of the Unintelligence of AI systems; some are funny and harmless, but others are more serious and harmful. Most testers are already aware that AI systems can be challenging to test, and the evidence of Unintelligence in such systems suggests that we don't yet have a clear model of how to approach testing AI systems.

In this talk I will share a framework I have been developing for examining the Risks (Business, Ethical and Technical) related to AI systems, as a way for testers to think critically about what is important to test. In particular, this talk will focus on the Technical Risks that come from the AI Architecture, Data and Learning.

Takeaways

  • Understand the breadth of the Risks from the Business, Ethical and Technical viewpoints
  • Understand in more detail the Technical Risks
  • Learn strategies and tactics for testing Technical Risks in AI systems
Bill Matthews

Bill Matthews has been a freelance test consultant for over 20 years, working mainly on complex integration and migration projects as a Test Architect and Technical Lead. He champions the use of modern and effective approaches to development and testing.

He is a regular contributor to the testing community at both local and international levels through conference speaking, coaching/mentoring and delivering workshops and training focusing on automation, performance, reliability, security testing and more recently artificial intelligence.


First proposed by Google in 2015, Progressive web apps (PWAs) are now in the infant stage of their development. As with any child, people are curious about which developmental direction the technology will take.

What are they, though? Are they a thick client/native app that runs in the browser, or a web app with some thick client/native functions? In this talk I will help define this, and draw out how that definition impacts the testing that needs to be done as a result.

At a simplistic level, a PWA is a layering of technology types, almost a miniature integrated system in itself. It has its own rules on how it will behave depending on how it is built, how it is accessed, the options the end user chooses and any hardware available. As testers, we don’t have the luxury of waiting until an app is all grown up before we start to test it. So how can we test PWAs whilst remaining faithful to shift-left principles?

A couple of years ago, we entered a programme of work intended to produce multiple PWAs for the financial industry. Never having worked with them before, we did a lot of research and asked industry colleagues for help in understanding the tech. However, there was little to no help forthcoming on how to test this type of app, as the tech was so new. This meant we had to come up with our own approach! Drawing on comparable experience, trial, error (lots of error) and some tools, we formulated a series of heuristics, automation, methods and processes for testing, which we are here to share.

Takeaways

  • An understanding of what a PWA is
  • The differentiators between PWAs and native/hybrid/web apps
  • A dissection of the possible tech layers and how to test them in isolation and together
  • How to work out the risks in a PWA
  • How to test for platform-specific issues
  • Getting started - how to find the low-hanging-fruit issues using Lighthouse (see the sketch after this list)
  • More in-depth - testing a PWA with Selenium and Appium
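As a hedged sketch of that Lighthouse starting point (the URL is a placeholder; this uses the lighthouse and chrome-launcher npm packages, and the dedicated PWA category assumes a Lighthouse version from that era):

```js
// Run a Lighthouse PWA audit programmatically and print the score.
const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://example.com', {
    port: chrome.port,          // reuse the launched Chrome instance
    onlyCategories: ['pwa'],    // just the PWA checks for a quick pass
  });

  // Scores run from 0 to 1; failed audits point at the low-hanging fruit.
  console.log('PWA score:', result.lhr.categories.pwa.score);
  await chrome.kill();
})();
```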
Elizabeth Fiennes

Elizabeth has been in testing and QA since 1998. Yes, they had computers way back then. No, her first tablet was not made out of stone :)

Since then, she has taken some time off for two people-shaped development projects of her own. She describes herself as a “cat slave” to a very large and opinionated Tuxedo Tom who likes walking across keyboards and spilling tea.

Doing talks and writing blogs were not things she was comfortable with, so she challenged herself to start doing them in 2018. One of the happiest results of this experiment was making wonderful new friends, which is one of the best outcomes of breaking out of any comfort zone.


Callum Akehurst-Ryan

Callum is a Senior Test Engineer at Scott Logic with 11 years of experience across multiple domains, from finance to public safety. His technical skills and keen interest in exploratory testing techniques are backed up by a passion for team engagement and advocacy for the integration of testers into agile teams. Using his background in psychology, Callum has recently been engaging teams with quality narratives around human factors, providing insights into how individual and diverse users will engage with and experience their products in different ways.

In his spare time he’s also a kick-ass Dungeon Master and blogs about testing.