Registration is now closed.
Sign up to our mailing list and we will send you updates on things we are working on (including the announcement of any new events).
A one day affordable software testing conference in Brighton on Friday March 28th 2014.
Photos and Blogs
- TestBash 2014 Part 1 – The Fuzzy Tester
- TestBash 2014 Part 2 – The Fuzzy Tester
- TestBash 2014 Part 3 – The Fuzzy Tester
- TestBash 2014 The Conclusion – The Fuzzy Tester
- Live From TestBash – Stephen Janaway
- The TestBash 3 Story – neiltest.com
- My TestBash Hangover Remedy – A Recap – Scott Barber
- Anyone Can Test Software – MoreThanFunctional (Jonathan Roe)
- TestBash Retrospective #2 – Don’t Be Vague, Respect the Bug Report
- TestBash 3 – Rikke Simonsen
- Reflections on TestBash 2014
- Lean coffee at TestBash
- TestBash 3 – Jokin Aspiazu
- Inspiring Times – Chris George
- TestBash 3 – Mike Salsbury
- Reports from TestBash and CukeUp 2014 – Kaisa Piipari
99 Second Talks
Managing Application Performance Throughout the LifeCycle – Scott Barber
Most people think of testing and managing performance as difficult, time-consuming and expensive (at best). In fact, even I used to think the same way. But over the past few years I’ve come to realize that it doesn’t need to be any of these things. Of course, there is a trade-off. To deliver reliable application performance consistently, efficiently and without the expense you might be imagining, everyone needs to be involved. Performance needs to be considered as part of every story. Performance needs to be tested at unit, component, integration, acceptance, regression and production levels. But testing alone isn’t enough; the magic that makes this approach really valuable lies in the trends in the test-results data.
Intrigued? During this session, I will share with you the basics of my T4APM™ approach to performant application delivery. I’ll give you everything you need to get started along the way – with no purchase required. I’ll share stories, examples and case studies from actual projects and clients. I’ll even bait you with the promise of support tools, currently under development, that will serve to make the entire process even easier. If you are interested in helping your company or team advance to the next generation of delivering application performance, this is a session you won’t want to miss.
How to Talk to a CIO About Software Testing (If You Really Have to…) – Keith Klain
The question I get asked more than any other, apart from “How did you get your job?”, is “How do you talk to your CIO about software testing?”. As software testers are typically not in positions of authority in organisations, this question seems natural, but more important is to ask why there aren’t more testers in management positions. Why do CIOs and senior IT management put people with non-testing backgrounds in charge of such an important function? In this talk, I will attempt to answer those questions through profiles of CIOs I have worked with and the approach I have taken to tell the testing story. I will also offer my opinion on why more testers aren’t in management positions and what we can do about it. So put your bias aside and join me as we take a hard look at what’s working (and what’s not) in software testing management, the culture of skilled testing, and how to join the ranks of management without losing your soul.
Contextual Decision-making in Testing – Apathy or Indifference? – Mark Tomlinson
To test, or not to test, that is the choice: whether ‘tis nobler in the mind to suffer the rigors of requirements-based testing, or to freshly apply contexts to our thinking and by awareness prevail? Sure, we make choices in our testing practices by leveraging our prior experience and training in the discipline. But how we understand ourselves in the act of making choices about testing is essential to fully developing our ninja skills as testers. Beyond your typical learning about exploratory, risk-based and session-based testing techniques, this session will help you take three steps toward a more complete understanding of your decision-making as a tester:
- Step one: test choices guided by externally defined influences like models, techniques, tools
- Step two: test choices based on our conscious, internally defined influences and intentions
- Step three: test choices based on our awareness of subconscious, intuitively defined motivation
In this session, participants will engage in exercises to practice these three different perspectives on how we make our choices in testing, and explore how we might use apathy (or indifference) in the sequence of our logic, as opposed to positive, outcomes-based test choices. We will share contemporary experiences and explore how we make choices in our focus and attention while designing, improvising and conducting tests. Attendees will learn an alternative way to help manage the deluge of decisions we must make in real-time, exploratory testing: identifying those items we absolutely do not care about, and why we don’t care about them.
Helping the New Tester to Get a Running Start – Joep Schuurkes
When a new tester joins your team – or if you are the new tester joining – the question is how to get this new tester up to speed as effectively as possible. He or she needs to learn about the application, the way of working within the team, the project, etc. In a way it’s quite similar to learning how to navigate a new city. By exploring this analogy we shall see that the most common ways to get a new tester up to speed fall short. Luckily I’ve also encountered some better ways which I’d like to share with you. And as it turns out, those alternatives have some shared properties that are also relevant to good testing in general.
How to win [Developer] friends and influence [business] people – Jez Nicholson
Developers live in a world of black and white, right and wrong, works or doesn’t work. Deep in their hearts they know that testing is “the right thing to do” and something that “should be done” but they can’t quite tell you why. They’ve got 101 things to do before tomorrow, so to a Developer “should be” is the same as “might be”. It is a second thought, something to do after the real work is done. Insist that it “must be done” and it is like nagging your partner to get the boiler fixed.
Business people have heard that testing is “the right thing to do” and something that “should be done” but they have never actually seen or done any themselves so can’t quite tell you why. They want 101 things done before tomorrow, so…you see where this is going.
Testing, quality assurance, whatever we are going to call it, needs to talk a language that these two different groups understand. We need to get into their psyche and see what presses their buttons. Approached from the right angle, it will be an easy sell. Otherwise testing will always be an also-ran.
Jez Nicholson spent his youth taking things to pieces to see how they worked. By the time he reached working age he was able to put them back together again, and even make new things. He has worked as a developer-manager for over 20 years across a variety of industries, from oil exploration and online game community management to environmental risk assessment. He builds small, highly effective development teams and has to span the gap between senior management and developers.
Context-driven testing in an agile context – A happy marriage? – Huib Schoots
Testing in an agile environment is different, many people say. But what is agile testing? I’d rather say “testing in an agile context” than “agile testing”. Agile is different in every project. Excellent testing in an agile context is done by looking at the details of the specific situation first. Remember the 7th principle of context-driven testing: “Only through judgement and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.”
Agile comes with a bunch of cool new methods like TDD, ATDD and BDD. There is a lot of focus on automating testing. There is nothing wrong with these methods and with automation… if done right. This talk zooms in on the distinction between testing and checking, coined by Michael Bolton. Checking can be automated, and I claim that in an agile context it should be taken care of by the programmers. It is not a good strategy to assign these tasks to the tester.
Creating a huge number of automated checks that are executed every time code is integrated is an appealing and comforting thought. But there is much more to do to cover the whole spectrum of testing. This is where testers can contribute a lot of value. We set out to find new information about the software through exploration and learning. Join me to learn more about context-driven testing in an agile context.
Automation: Time to Change Our Models – Iain McCowatt
Struggling to figure out how to get ROI on your automation? Wondering which of your tests to automate? Or are you trying to determine what level of automation your organization should target? If so, you may already be heading for a costly mistake.
Each of these thought processes is rooted in a commonplace model of how to think about test automation. Each attempts to solve a particular problem, to mitigate a particular risk. But, whilst some models are useful, every model is wrong. How useful are these models? What are their drawbacks, and how might they prevent us from seeing how we might harness the power of tools?
Easing the Pain of Legacy Tests – Chris George
Legacy tests – we’ve all got them. They were written at the dawn of time by people who have long since left. They’re unreliable, they add weeks to release cycles, and the effort to learn and fix them is thought to be immense… yet we put up with them because they are perceived to be too valuable to lose.
This is the story of how we built a case to fix this problem; how we turned the tests around with the combined effort of testers and developers over a two-week sprint; and how we turned them into a suite of fast, reliable, trustworthy tests, reducing our release cycle from weeks to hours. In this session I’ll be sharing the lessons we learned the hard way (so you don’t have to!), as well as the techniques and methodologies we used along the way. By the time we’re finished, you’ll be ready to cauterize, triage and heal your legacy test pains with surgical precision.
Inspiring Testers – Stephen Blower
I’ve inspired testers to question more, to have a thirst to learn, to challenge the status quo and to develop their skills. How do you enable testers to understand that their role is skilful, challenging and rewarding? I’ll show you.
Reframe the traditional idea of a tester’s role. It isn’t one requiring little skill or knowledge, it should be stimulating and fulfilling, creating a powerful platform to build from. Using this platform, I’ll share my own experiences and specific examples that have been successful in inspiring testers, developing their skills and making them into more than just button pushers.
When you believe in a tester’s abilities and encourage them to explore and challenge, that freedom of expression will inspire them even more. And, inspiring just one average tester to become a great tester can have a snowball effect on the rest of the team. This can then create a great team of testers that grow and learn together, striving to improve and never accepting the status quo.
Get Out of The Testing Game – Bill Matthews
Is my testing good enough? How do I improve my testing processes?
Earlier in my career, these were questions that I’d frequently ask of myself and others; the answers often came in the form of deliverables such as standards, plans, test cases and schedules, and in endlessly trying to optimise processes – I was in the Testing Game. It’s a game where we become overly focused on testing as an end in itself rather than a means to an end; we lose sight of the wider context. At some point I realised that projects were not really interested in testing itself, only in what testing gives them, so I knew I had to get out of the Testing Game and into the Information Game.
This shift is sometimes difficult for people to grasp, and a model I’ve found helpful is a variant of the Business Model Canvas; while typically used to describe and communicate a business model, it can also be used to describe how testing fits within a wider context, focusing attention on aspects such as interactions, relationships and the flow of value.
In this talk I’ll present an example of how the model helped a testing department better understand its place in the wider organisation and better align its work with the needs of projects – and the surprising set of process improvements that it led to!
Our Super Dooper Micro-Sponsors