TestBash Australia 2018

Friday, 19th October 2018

TestBash, our software testing conference, is heading down under to Sydney, Australia, in collaboration with Anne-Marie Charrett (Testing Times) and David Greenlees!

We will be heading to Ariel.

This is a one-day event. Friday will be a busy, single-track conference consisting of nine talks and our famously fun 99-second talks.

You can expect to find a wonderful community coming together in a friendly, professional and safe environment. We think you'll feel right at home as soon as you arrive!

Conference day tickets start from $500 (Aussie dollars, of course) inc. GST. Tickets are limited!

Event Sponsors:
Conference
Friday, 19th October 2018

I know something you don’t know. You know something I don’t know. And I don’t know what you know that I would need to know. This is where the individual-contributor approach to software development and testing breaks down. Why aren’t we working together, contributing together and learning together? Why do we, often at best, collaborate on the requirements and understanding of what to build, then step away for implementation, only to come back to test it afterwards?

This talk looks into my experiences in pairing (two people, one computer) and mobbing (more than two people, one computer), and the wonders of being a non-programming tester whose ideas get translated into code as an equal. The journey to get to pairing and mobbing has been a rocky road, and I have loads of practical tips to offer on how to approach it.

In software development, those who learn the fastest do best. Could pairing and mobbing take our teamwork to the next level by enabling learning between everyone? Lessons specific to skill sets rub off in both directions, leaving everyone better off after the experience.

Maaret Pyhäjärvi

Maaret Pyhäjärvi is a software professional with an emphasis on testing. She identifies as an empirical technologist, a tester and a programmer, a catalyst for improvement and a speaker. Her day job is working with a software product development team as a hands-on testing specialist. On the side, she teaches exploratory testing and makes a point of adding new, relevant feedback to test-automation-heavy projects through skilled exploratory testing. In addition to being a tester and a teacher, she is a serial volunteer for different non-profits driving forward the state of software development. She was named the Most Influential Agile Testing Professional Person 2016. She blogs regularly at http://visible-quality.blogspot.fi and is the author of the Mob Programming Guidebook.


In the olden days, computing devices used machine input to guide users. These devices took machine input over user input to open doors, auto-forward calls, control switches, record activities, and learn users’ behaviors automatically over a period of time. Today, if we as users feel out of control while using mobile apps, products slap a new screen on us. If we feel helpless, we are tossed multiple options to choose from. Many organizations think that this “screen-slapping” phenomenon makes users happy. In the name of freedom and control, users are fooled with screens, interfaces, dropdowns, never-ending forms and so forth, on limited-real-estate devices like smartphones. We are enslaved by the current generation of mobile apps, which are neither intelligent nor intuitive, and are rather painful to use. Is there a problem here? Of course there is!

Machine input is any information that digital devices can find on their own, whenever and wherever possible. In this talk, Parimala Hariprasad introduces you to machine input, and how it can be put to smart use to create intuitive mobile apps in the digital world. She emphasizes the need for mobile apps to adapt to users rather than users adapting to them. She also highlights how a generous use of machine input empowers mobile apps to serve users in a real-time setting. Using vivid examples, she takes the audience on an exciting journey where mobile apps sense users’ needs “automagically” and serve them without seeking tedious user input or slapping multiple screens and interfaces on them. The talk centers on applying self-learning, self-healing and self-awareness methods that give users what they need, exactly when they need it, and stay relevant at all times.

Key takeaways include learning about machine input and its different types, such as camera input, contextual input, smart defaults and magnetometers, as well as key principles that govern intuitive mobile apps. The audience will walk away with practical knowledge and experience of how machine input can be applied in a mobile apps context, along with key techniques that empower machines to transform users’ needs into great user experiences.

Let us allow mobile apps to serve users, and not the other way around!
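To give a flavour of the idea, here is a minimal browser sketch (not material from the talk) of using "machine input" the device can gather on its own to pre-fill a hypothetical booking form, falling back to user input only when a sensor is unavailable. All names and the currency heuristic are illustrative assumptions.

```typescript
// Illustrative sketch only: pre-fill form defaults from machine input
// (locale, clock, location sensor) instead of asking the user to type them.

interface BookingDefaults {
  language: string;                       // from the device locale
  timeZone: string;                       // from the platform clock settings
  currency?: string;                      // smart default derived from locale
  origin?: { lat: number; lon: number };  // from the geolocation sensor, if permitted
}

async function gatherMachineInput(): Promise<BookingDefaults> {
  const defaults: BookingDefaults = {
    language: navigator.language,
    timeZone: Intl.DateTimeFormat().resolvedOptions().timeZone,
  };

  // Smart default: a very rough currency guess from the locale region.
  if (defaults.language.endsWith('-AU')) defaults.currency = 'AUD';

  // Contextual input: ask the device, not the user, for a location fix,
  // and degrade gracefully if permission is denied or times out.
  try {
    const pos = await new Promise<GeolocationPosition>((resolve, reject) =>
      navigator.geolocation.getCurrentPosition(resolve, reject, { timeout: 3000 })
    );
    defaults.origin = { lat: pos.coords.latitude, lon: pos.coords.longitude };
  } catch {
    // No location available: only this field needs to be asked of the user.
  }

  return defaults;
}

// Usage: pre-populate the form once, so the user only confirms or corrects.
gatherMachineInput().then((d) => console.log('Pre-filled defaults:', d));
```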

Parimala Hariprasad

Parimala Hariprasad is a Senior UX Architect for Mobile Products at Amadeus Software Labs, Bangalore. She spent her youth studying people and philosophy, which she now applies to create enchanting experiences for users. At Amadeus, she is shaping the future of travel through UX Strategy, Interaction Design, and User Research. Parimala has experienced the transition from web to mobile and emphasizes the need for Design Thinking to create great products. She is an active speaker at international conferences such as CAST 2014, STAREAST 2015, STARWEST 2015, Tech Summit 2015, UX Day 2016, We Test 2016, Oredev 2017 and UX Day 2018. She has delivered keynotes at SAP, Google, We Test, Assuring Consulting NZ, and at many corporations and universities. Drawing on her experiences at Oracle, McAfee, and scores of startups, she crafts interesting stories on Mobile User Experience and Product Management on her blog (https://www.linkedin.com/today/post/author/posts#published?trk=mp-reader-h). You can follow Parimala on Twitter (@PariHariprasad) or LinkedIn (http://linkedin.com/in/parimalahariprasad).


Strap yourselves in and prepare to witness a live, no-holds-barred, ragin' cagin' exploratory testing experience. Watch as Adam explores and investigates a mystery piece of software with only his wits, a charter and some trusty heuristics to guide him. Throughout the live session, a play-by-play commentary will help you get inside the mind of a skilled exploratory tester, pick up some specific and practical tips on how to consciously interrogate a system, and grow your exploratory testing vocabulary so you can better communicate your techniques and your observations.

Takeaways: A live exploratory testing demo will enable the audience to see the following ideas and techniques applied practically and in real time:

  • rapid and visual creation of a testing model
  • conscious and deliberate application of a range of exploratory heuristics
  • credible yet accessible real time reporting on the state of a system
  • effective and lean documentation of an exploratory testing session
  • application of numerous tools and tricks from the utility belt of an explorer

Adam Howard

Adam Howard is the Test Practice Manager at Trade Me, New Zealand's largest marketplace. Passionate about evolving the way testing is perceived and performed, Adam is a regular speaker at conferences and meet-ups both in New Zealand and internationally.

Adam also helps to organise local WeTest events, is the program chair for the 2018 WeTest Conferences, and was a founder and chief designer for Testing Trapeze [testingtrapezemagazine.com], the Australasian testing magazine. He also keeps a fairly irregular blog [solavirtusinvicta.wordpress.com], and occasionally manages to be concise enough to tweet as @adammhoward [twitter.com/adammhoward].


Is your team sleepwalking towards failure? How do you "keep it real" in software engineering?

Using ideas from anthropology and postmodernism, I theorise that some cherished concepts in software are difficult but seductive abstractions - and that failing to manage this causes software teams to drift towards failure.

The talk is an abridged version of my blog article “Some Trust in Models! On Simulations, HyperNormalisation and Distorted Reality in Software Teams (Part 1)” http://testingrants.blogspot.com.au/2017/12/some-trust-in-models-on-simulations.html

In this talk I look at why projects can fail dramatically or release poor products after what appeared to be smooth sailing. I theorise that this is because the project team members have trapped themselves in a “reality” that has no basis in the “true” status of the project, owing to an inability to reconcile contradictions in some of our key concepts in software engineering.

To explore this, I cover the pioneering work of Russian anthropologist Alexei Yurchak, who coined the term "hypernormalisation" to show how self-reinforcing propaganda and a refusal to see the decay around them led Soviet citizens to sleepwalk into the collapse of the Soviet Union. I tie this together with the work of postmodern sociologist Jean Baudrillard, who showed how society interacts with the world through increasingly abstracted simulations of reality - up to the point where they have no relation to the real world.

Key Takeaways from the talk:

  • An example of applying tools from anthropology and postmodern sociology to software engineering processes.
  • A view of the various human biases and contradictions behind the concepts of quality, requirements, metrics and processes.
  • How these biases induce a "mini-hypernormalisation" in troubled software teams, leaving them removed from reality and causing them to sleepwalk into, and even contribute to, the failures of their projects.
  • Ideas about how teams can prevent this.

Paul Maxwell-Walters

I'm a British software tester based in Sydney, Australia, with about 10 years of experience testing in financial services, digital media and energy consultancy. I am a co-chair, social media officer and occasional speaker at the Sydney Testers Meetup Group, and I also speak at other conferences in Australia. I blog on issues in IT and testing at http://testingrants.blogspot.com.au and have contributed an article to Testing Trapeze magazine. I tweet on testing and IT matters at @TestingRants (https://twitter.com/testingrants).


Test management in an Agile organisation with embedded testers can be a challenge that many companies respond to differently. Georgia de Pont shares how the test practice at Tyro Payments is experimenting with a grass-roots management style - in the absence of a test manager or test leads - to spearhead improvement initiatives within the test practice. She outlines how the Test Practice Representative Group experiment started and how the group works as a team. Georgia then discusses some of the initiatives the group has implemented, and issues it has addressed, at Tyro Payments. These include improving the recruitment process for test engineers, defining performance criteria for test engineers, and providing clarity to the rest of Tyro engineering about the role of an embedded tester in the delivery team - including some of the things the Test Practice Representative Group got wrong in the process. Throughout, Georgia discusses the improvements the group has made to work more effectively and the positive impact this experiment has had on the test practice, and provides insights to help testers in other Agile organisations implement a similar representative group.

Takeaways:

  • An alternative to matrix management for testers in agile organisations - one that can provide stretch goals for test engineers in a flat hierarchy.
  • Information about how this could be implemented in similar test practices in your organisation, and the types of initiatives that could be owned by such a group.
  • An approach that has proven effective at creating an engaging community of practice in an engineering organisation organised according to the Spotify model.

Georgia de Pont

I started working as a software tester after I finished my psychology degree at university, firstly in New Zealand, then in Australia, and I've been passionate about software testing ever since. I'm currently a Test Engineer at Tyro Payments, where I work in an awesome delivery team on our deposits product. I'm active in the software testing community: I regularly attend meetups, I've written for Testing Trapeze a couple of times, and I occasionally pop up on Twitter (@georgia_chunn) to chat with other testers.


Testing is changing, and you've changed along with it. You've learned new skills, you've shifted left, you’re keen to collaborate, you’re prepared to pair. But those around you don't seem to have adjusted to the idea of this brave new world. You have things to say but it seems nobody wants to listen. Now what?!

Over the past few years working closely with software testers, I have observed that many of them have great skills and knowledge, yet the rest of their Agile team don't take advantage of this. The outcome is the same old-school pattern where the tester is only expected (allowed?) to perform a limited set of tasks.

In this session I will outline ways a tester can be a change agent for their team - even when they may feel shy, introverted or lacking in the power to do so.

Takeaways: You will leave with strategies you can use to

  • build credibility and trust;
  • identify allies; and
  • share your knowledge
in order to position yourself as an essential "go-to" member of your Agile team.

Michele Playfair

Michele has held a variety of roles across the spectrum of software development and is currently an Agile Team Facilitator at Xero. She plunged into the wonderful world of software testing a few years ago when she was unexpectedly invited to become a test manager. Since then she has learned a lot from the testing community and is now always on the lookout for ways to use this knowledge to help teams become more awesome.


Imagine the impact you could have in improving the attitude and awareness towards quality in your organisation if you had developers actively fighting alongside you! Imagine having a developer speak up in the daily standup to question the extent and variety of testing done on a task. Imagine the impact of spending time each week, training and teaching developers in testing and quality related topics. Imagine all that you could do with developers actively raising the profile of quality in your organisation.

Imagine no more: you can have developers sharing articles with their team on how to do different types of testing, new tools to research, quality considerations to make while coding, and more. You can have developers feeding back the quality concerns and struggles of the product team to the quality team. You can have developers taking regular time out of their week to think of and act on ways to improve quality. You can have developers on whom to focus training efforts and exercises explicitly, boosting the team's skills and understanding of quality. All this will lead to shorter development life cycles, fewer bugs created, and an overall better customer experience through a heightened awareness of quality. This is exactly the purpose of a Quality Champion, and you need them in your organisation.

In this talk I discuss the Quality Champions program I created to bring all these benefits, and more, to life. I explain how I define a Quality Champion, how I select and train them, and how I work with them to improve quality. I provide examples of the positive changes that have been achieved as a direct result of the Quality Champions in my workplace. I demonstrate how similar benefits could be found in your workplace and give you all the resources you need to make your own Quality Champions program successful, including how to get the buy-in from management to even consider starting.

Takeaways:

  • Learn about the Quality Champions program and how it could be beneficial to you.
  • Understand all the steps and resources you need, in order to make the program successful and receive management buy-in.
  • Learn how to create a deeper relationship with developers, enabling a deeper and wider impact of your quality efforts.
  • Learn how to teach developers about quality: how to test their own code and others' code, how to ask the questions that testers ask, and more.

Peter Bartlett

Peter Bartlett is the Quality Practice Lead at Campaign Monitor. He is in the middle of a big restructure of how quality and testing are viewed and implemented within the organisation and is keen to share this journey for others to learn from. He is a global lead of quality across three companies, with responsibility for an international team of testers, and also has extensive experience in the daily testing requirements of a dedicated tester. With over 8 years' experience in testing and strong manual and technical skills, he loves problem solving and finding new and better ways to do things. He is an international speaker, tweets at @pete_bartlett, and blogs at http://thesneakytester.com


We have been lucky enough to work with a not-for-profit organization called EPIC Recruit Assist to offer software testing training to a group of young adults on the autism spectrum, through a new initiative known as the “EPIC TestAbility Academy”.

In this talk, we will describe:

  • What the autism spectrum is and why it aligns with software testing
  • How this relationship came about
  • Why it is important to seek opportunities that drive diversity
  • Key relationships, shared goals and building the vision
  • The importance of communication
  • Building a balanced programme
  • Preparation that was required to get the programme off the ground
  • Lessons learned in finding participants
  • Lessons learned delivering the training
  • The outcomes for the candidates on completion of the programme.

Takeaways:

  • What autism is and how those with autism can be particularly suited to IT work.
  • Why you should think about reaching out and helping to grow the IT community.
  • An insight into how you can go about getting a program up and running and the problems you might encounter.

Paul Seaman

I am currently a Senior Test Engineer with Travelport Locomote, with 18 years of testing experience across a number of domains. I'm a passionate context-driven tester who is actively involved in the testing community. I'm a reviewer for the Women Testers (well, former reviewer now) and Testing Trapeze magazines, and I've also written a number of articles for various testing magazines. I was a founding partner of the Australian Testing Days 2016 conference and Assistant Program Chair for CASTx18 Melbourne. I am a co-founder of the EPIC TestAbility Academy, where I co-facilitate classes that teach people on the autism spectrum to test software as a means of finding employment. On most summer Saturdays you’ll find me out on a cricket field umpiring senior cricket in the Victorian Turf Cricket Association competition. I write the occasional blog at https://beaglesays.blog/ and on Twitter I use the handle @beaglesays.


Lee Hawkins

The Principal Test Architect for Quest Software, based in Melbourne, Australia, Lee Hawkins is responsible for testing direction and strategy across its Information Management business.

Lee has worked in the IT industry since 1996 in both development and testing roles, but his testing career really started in 2007 after attending Rapid Software Testing with Michael Bolton. He was a co-founder of the TEAM meetup group in Melbourne and co-organized the Australian Testing Days 2016 conference. He is also a co-founder of the EPIC TestAbility Academy, a software testing training programme for young adults on the autism spectrum. Lee was the Program Chair for the CASTx18 testing conference in Melbourne.

He is a frequent speaker at international testing conferences and blogs on testing at Rockin’ And Testing All Over The World. When not testing, Lee is an avid follower of the UK rock band, Status Quo; hence his Twitter handle @therockertester.


At WordPress.com we constantly deliver changes to our millions of customers - in the past month alone we released our React web client 563 times, over 18 releases every day. We don’t conduct any manual regression testing, and we employ only 5 software testers in a company of ~680 people with ~230 developers. So, how do we ensure our customers get a consistently great user experience with all this rapid change?

Our automated end-to-end (e2e) tests give us the confidence to release small, frequent changes constantly to all our different applications and platforms, knowing that our customers can continue to use our products to achieve their goals.

Alister will share how these automated tests are written and work, some of the benefits and challenges we’ve seen in implementing these tests, and what we have planned for future iterations.

Takeaways: How to effectively use automated end-to-end testing to ensure a consistent user experience in high-frequency delivery environments.
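As a rough illustration of the kind of test the talk is about, the sketch below shows a browser-level end-to-end check written with Playwright in TypeScript. It is not WordPress.com's actual code or framework; the site, labels and flow are invented for illustration.

```typescript
// Minimal sketch of an automated end-to-end test using Playwright.
// The URL, labels and flow are hypothetical, purely to show the shape
// of a user-journey e2e check that can gate a high-frequency release.
import { test, expect } from '@playwright/test';

test('visitor can sign up and reach their new dashboard', async ({ page }) => {
  // Drive the real UI end to end, the way a customer would.
  await page.goto('https://example.com/start');

  await page.getByLabel('Email address').fill('e2e-user@example.com');
  await page.getByLabel('Choose a username').fill('e2e-user');
  await page.getByRole('button', { name: 'Create your account' }).click();

  // Assert on outcomes the customer cares about, not implementation details.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole('heading', { name: 'Welcome' })).toBeVisible();
});
```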

Alister Scott

Alister is a software quality engineer for WordPress.com at Automattic. He has extensive experience in automated software testing and establishing quality engineering cultures in lean cross-functional software development teams. He writes a software testing blog at watirmelon.blog.


Micro Sponsors: