TestBash San Francisco 2018

November 8th 2018 08:00 - November 9th 2018 18:00

The best in test is coming west! The US edition of our software testing conference TestBash is heading to the home of the Golden Gate Bridge: San Francisco. Our first TestBash in the US was in New York, followed by two epic years in Philadelphia, but it's time for a change.

TestBash San Francisco has been created in partnership with the sensational Ash Coleman and Angie Jones.

It's a two-day event in our single-track format, consisting of 17 talks, a few surprises and, of course, our famously fun 99-second talks. The event is taking place on 8-9 November 2018.

On both days you can expect a wonderful community to come together in a friendly, professional and safe environment. We think you'll feel right at home when you arrive!

Super Early Bird prices are just $899; book using the 'Register Now' button below. We look forward to seeing you in San Francisco at the amazing Cowell Theater.

Conference

Techniques for Generating and Managing Test Data
Omose Ogala

Many times, automated tests fail because of assumptions about the state of the application, making test data management a headache for teams. This is especially problematic when working with applications whose data is dynamic.

In this session, Omose, a recent college grad, will share a fresh perspective on test data management. He will discuss techniques that he uses to generate reliable test data within a fraction of a second, hacks that he places into production code to make the application more testable, and tools available within his company that make testing easier.

These techniques prove particularly useful for testing and automating against applications that are dynamic in nature.
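
The details are Omose's to share on stage, but the shape of the technique is familiar. Here is a minimal, hypothetical sketch of factory-style test data generation (all names are illustrative, not Omose's actual code):

```typescript
// Hypothetical factory-style test data generation (illustrative only, not
// Omose's actual code). Each test gets fresh, valid data in microseconds
// instead of depending on whatever state the application happens to be in.
interface User {
  id: string;
  handle: string;
  verified: boolean;
}

let counter = 0;

// Factory with sensible defaults; a test overrides only what it cares about.
function makeUser(overrides: Partial<User> = {}): User {
  counter += 1;
  return {
    id: `user-${counter}`,
    handle: `test_user_${counter}`,
    verified: false,
    ...overrides,
  };
}

// Usage: the test states only the attribute that matters to it.
const verifiedUser = makeUser({ verified: true });
console.assert(verifiedUser.verified);
console.assert(verifiedUser.id !== makeUser().id); // data is unique per call
```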

Omose Ogala
Omose Ogala is a recent Computer Engineering graduate of the University of Rhode Island. He now works as a Software Engineer in Test at Twitter, where he develops test automation scripts for the company’s iOS and Android apps. In his spare time, Omose creates iOS apps for the company he founded, Ogala Apps, which currently has eight offerings available in the App Store.

AI Means Centralized Testing Is Inevitable
Jason Arbon

The most radical change to software development, thanks to AI, will be the centralization of all testing. Just as the world really only needs one social network, one shopping site, and one search engine, the world will only need one centralized testing service. Today, testing is distributed to small, per-app test teams. These teams use their intuition and experience to define their test cases and strategy, and they develop their test automation independently, from the ground up, for each and every app. That all changes with AI.

Jason shows how AI enables cross-app reusable test cases, test data, test plans and test strategies. He also shows how centralizing these tests, their execution, and the analysis of their results means that centralized services will be far smarter, drastically faster, and much less expensive than any given dedicated team; they will even be able to benchmark the quality of your app against your competitors'. Centralized testing services will capture the collective knowledge of all testers and apply that learning via AI to all other apps in a virtuous cycle.

Sure, there will be room for a little customization, but 90% of testing will be centralized in the coming years. Jason shows you how to get ready for this change and why AI makes it inevitable.

Jason Arbon

Jason Arbon is the CEO of test.ai which is redefining how enterprises develop, test, and ship mobile apps with zero code and zero setup required. He was formerly the director of engineering and product at Applause.com/uTest.com, where he led product strategy to deliver crowdsourced testing via more than 250,000 community members and created the app store data analytics service. Jason previously held engineering leadership roles at Google and Microsoft and co-authored How Google Tests Software and App Quality: Secrets for Agile App Teams.


Extra! Extra! Automation Declared Software!
Paul Grizzaffi

Breaking news! Automation development is software development. Yeah, it’s true. Even if we are using a drag-and-drop or record-and-playback interface to create that automation, somewhere, in the stack, under the hood or behind the curtain, there is code sequenced by our actions. We must start treating our automation initiatives as software development initiatives, lest we end up in a quagmire of unsustainability and early project death.

Automation activities that aren’t treated as software activities run the risk of being underestimated, delivered late, and difficult to maintain; each of these scenarios takes a bite out of our budget. Join us as our speaker explains why automation really is software, and the key points of software development we should keep in mind when creating automation software, such as encapsulation, abstraction, DRY, and YAGNI.
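
As a taste of what "automation is software" means in practice, here is a small, hypothetical sketch of DRY and encapsulation applied to UI automation (the Driver interface is a stand-in for whatever tool your stack actually uses):

```typescript
// Hypothetical sketch of DRY and encapsulation in UI automation. The Driver
// interface stands in for whatever automation tool your stack provides.
interface Driver {
  type(selector: string, text: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// The login steps live in one place instead of being copy-pasted into every
// test script. If the login form changes, only this class changes, not the
// dozens of tests that call it.
class LoginPage {
  constructor(private readonly driver: Driver) {}

  async logIn(username: string, password: string): Promise<void> {
    await this.driver.type("#username", username);
    await this.driver.type("#password", password);
    await this.driver.click("#submit");
  }
}

// A test now reads as intent, not as a pile of repeated selectors:
//   await new LoginPage(driver).logIn("tester", "s3cret");
```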

Takeaways:

  • Treat automation development as software development
  • Where appropriate, follow the lead of software development teams’ practices such as coding standards and code review
  • Code documentation should tell “why”, not “what” or “how”
  • Focus on minimizing maintenance

Paul Grizzaffi

Paul Grizzaffi is a Principal Automation Architect at Magenic. His career has focused on the creation and deployment of automated test strategies, frameworks, tools, and platforms. He holds a Master of Science in Computer Science and is a Certified ScrumMaster from Scrum Alliance. Paul has created automation platforms and tool frameworks based on proprietary, open source and vendor-supplied tool chains in diverse product environments (telecom, stock trading, E-commerce, and healthcare).

He is an accomplished speaker who has presented at both local and national meetings and conferences. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas. Paul looks forward to sharing his experiences and expanding his knowledge of automation and testing in other product environments.


Power of Models
Dan Ashby & Richard Bradshaw

If you’re testing, you’re modelling. Models are everywhere; we use them all the time while testing. The majority of the time you may not be aware you are using a model, but you are, trust us. Your system knowledge, your understanding of your development life cycle, and your visualisation of new feature requests all use models. However, the real power of models comes to fruition when you attempt to share your mental models with others, specifically when you try to codify or visualise them.

Attempting to codify a mental model is a journey of discovery, a cycle of questioning yourself until you think you’ve translated all those neurons and their connections into your chosen medium. It’s usually a very fruitful journey for you and, more importantly, for the people you’re trying to share your ideas with. Depending on the complexity of the topic, it could be a process that lasts anywhere from a few seconds to a few hours, potentially longer, though we try to avoid that (we’ll cover why in the talk). Once done, though, you'll have a powerful tool at your disposal.

This is a journey Richard and Dan have taken on many occasions, producing numerous models that have got two people talking, then a whole team, and even whole communities. We have our preferred mediums: Dan favours pen and paper, or his trusted Windows Surface, whereas Richard is drawn to whiteboards. The medium aside, we want to share with you some of our models, focusing specifically on the journeys and the outcomes.

For each example, we’ll explore why we created it, the creation process, and how we used it, and we’ll invite you to evolve them. All these examples will be woven in between sage advice about modelling. We’ll discuss the benefits, such as getting people on the same page by discussing the same visual model, alongside some drawbacks, such as chasing the perfect model (spoiler alert: no model is perfect!).

Takeaways:

  • How a simple crude model can bring multiple ideas and conversations into a single stream
  • How a simple model can reduce the perceived complexity of a problem and accelerate progress
  • How a simple model can lead to wonderful conversations
  • Real experience reports of using models that you can harness to create your own when back in the office
  • An appreciation of Dan’s drawing skills

Dan Ashby
Dan is a SW Tester and he likes Porridge! (and whisky!)
Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel Whiteboard Testing.

Influence > Authority and Other Principles of Leadership
Elisabeth Hendrickson

You're a leader even if you don't think you are. Everyone is. But not everyone realizes it. Like Dorothy stuck in Oz, you have had your red shoes with you all along. In this talk, we'll look at what leadership is and isn't, why influence is much more important than authority, and how to wield and grow your influence to have a positive impact on your organization. Along the way you'll learn how to leverage a set of underlying core principles such as: Don't Be Nice, Be Kind; Shave the Right Yak; To Fix a Problem, First Make it Visible; Fear is a Lousy Compass; To Increase the Intelligence of an Organization, Increase the Connections Within It; Positive Feedback Is More Powerful Than Criticism; and Shifting a Boundary a Few Inches Can Drastically Change an Outcome. Oh, yes, and of course there will be stories. So. Many. Stories.

Elisabeth Hendrickson
Elisabeth Hendrickson is a Vice President of R&D for Pivotal, focused on data products. Known as @testobsessed to many, she has been kicking around the software industry for nearly three decades. In that time she's held roles as a developer, tester, agile enabler, consultant, technical writer, project manager, and cat herder. She joined Pivotal 5 years ago to work on Cloud Foundry after running her own small consulting practice for over a decade. A recognized leader in agile software development, she received the prestigious Gordon Pask Award from the Agile Alliance in 2010. She is also the author of Explore It! from Pragmatic Bookshelf.

Tester at the Table and the Tester in My Head
Adrian P. Dunston

There’s a little tester in my head. She advises on unit tests, warns against dodgy deploys, or just says ‘I can break that.’ There have also been testers at my table. They have demonstrated diligence, patience, guile, and wisdom. I have my testers at the table to thank for the tester in my head.

Do you want to make an impact? Do you want your work to reach people far and wide? Do you know how to create this far-reaching impact?

Adrian P. Dunston has had opportunities to work with dedicated and talented QA professionals. And now he has a little tester in his head. The tester in his head advises, cajoles, and admonishes. And she makes Adrian a better developer.

As the tester at the table, you have an opportunity to set high standards of quality not only in products and processes but in the developers you work with. And by building quality developers, your impact can reach every project and every other developer they work with thereafter. At the very least, you can make life easier on the next poor sap that gets to QA their code.

How does one go beyond improving products and start improving teams? How do you build up the little testers in their heads? In this talk, you’ll learn:

  • How to mold quality-minded developers
  • How to use repetition and story to get your ideas stuck in their heads
  • How to instill habits they can use and take with them
  • And why exposure to a good QA person can be invaluable to a young developer

Adrian P. Dunston
Adrian P. Dunston has worked in companies of all sizes as a software developer, data architect, scrum master, manager, and briefly founder. His hobbies include cross-referencing several dozen self-help and leadership books, which he quotes endlessly to his amazing wife and fabulous children. Adrian is passionate about neurodiversity, using computers to help people, and storytelling. He also takes extensive notes.

How to Test Serverless Cloud Applications
Glenn Buckholz

Cloud providers are now offering serverless technology, introducing significant changes to how applications are structured and, importantly, tested. The serverless cloud makes certain parts of testing serverless applications opaque. Glenn Buckholz explains the boundaries of each cloud provider’s black-box service to expose what can and cannot be tested ahead of time, what can be evaluated locally, and what requires the cloud provider’s platform. Join Glenn as he focuses on answering key testing questions for serverless cloud applications: How and where do I do unit testing? What security testing can I do? How do I implement automated testing? Where do I find my logs and errors? And can I use CI/CD to speed up my test cycle? Leave with the information you need to create serverless application test plans for AWS Lambda and Azure Functions that allow you to conserve precious, limited testing resources.
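
To illustrate the "what can be evaluated locally" idea, here is a minimal, hypothetical sketch of a Lambda-style handler kept as a plain function so its logic can be unit tested with no cloud deployment (the event shape is simplified; real AWS Lambda events carry many more fields):

```typescript
// Hypothetical sketch: keep the handler a plain function so the business
// logic can be unit tested locally, with no cloud deployment. The event and
// result shapes are simplified; real AWS Lambda events carry more fields.
interface ApiEvent {
  body: string | null;
}
interface ApiResult {
  statusCode: number;
  body: string;
}

export async function handler(event: ApiEvent): Promise<ApiResult> {
  if (!event.body) {
    return { statusCode: 400, body: JSON.stringify({ error: "empty body" }) };
  }
  const { name } = JSON.parse(event.body);
  return { statusCode: 200, body: JSON.stringify({ greeting: `Hello, ${name}` }) };
}

// Local "unit test": invoke the handler with a hand-built event.
handler({ body: JSON.stringify({ name: "TestBash" }) }).then((res) => {
  console.assert(res.statusCode === 200);
  console.assert(JSON.parse(res.body).greeting === "Hello, TestBash");
});
```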

Takeaways:

  • What exactly serverless infrastructure is
  • How it affects testing
  • Whether automated testing can be applied to serverless infrastructure
  • How you, as a tester, can ensure the people architecting serverless take testing into consideration

Glenn Buckholz
As a Technical Manager at Coveros, Glenn is responsible for executing on delivery of value to the company's largest customers. Glenn leads highly technical teams composed of different partners to achieve client goals, and has transformed several large projects from the waterfall development model to DevOps.

Quality Assurance in an A/B-Test Driven Company - Why Companies of Tomorrow Need QA Superstars, and How to Become One
Antonia Landi

What is the future of QA?

Some of the biggest companies today use A/B tests to increase conversion, retention and a myriad of other vital business metrics. A/B tests have already been adopted by countless organisations, and are rapidly becoming the cornerstone of any business-driven venture.

So how and where does a tester fit into that? How can you maintain a high level of quality if there are several versions of your product, all of which interact with one another and change the user experience at crucial points? How should you structure your QA department within an organisation that is focussed on constant delivery and iteration of A/B tests?

Asana Rebel is a health & fitness mobile app that has seen incredible growth since its creation over two years ago, thanks to aggressive A/B testing methods.

Approximately 70% of what we build in any given Sprint will be discarded within 2-4 weeks. The QA department has to be as dynamic as (if not more so than) the rest of the company. So how is this done?

As Asana Rebel’s first QA Manager (and, until recently, sole tester), I will share my experiences, insights, and difficulties in building and adapting a QA department within a company that strives to test every hypothesis. I will talk about how I needed to change my perceptions of what makes a great QA department, and how breaking brand-new ground enables me to take part in innovating not only my role within Asana Rebel, but also the role of QA within any A/B-test driven company.

In this talk you will find out why test plans no longer make sense, why the role of QA needs to grow beyond finding technical faults, and how to stay on top of the chaos.

In this talk attendees will learn:

  • Why a traditional approach to QA won’t be suited to an A/B-test driven company
  • What the advantages and disadvantages are of having to be extremely flexible
  • Why the role of QA must grow beyond that of someone who finds and reports technical faults

Antonia Landi

Having majored in English Literature & Journalism and having lived in four different countries so far, my background is eclectic, to say the least. From starting out as a Games Tester for the mighty Rockstar North to setting up QA processes for small Berlin startups, I’ve learned so many things along the way. Currently, I work at Asana Rebel, a rapidly growing health & fitness startup, where I act as a QA and Project Manager.

In my spare time, I like to spend time with my husband, hang out with my cats and - of course - play games.


Going Undercover in the Mob
Jasmin Smith

The talk will cover my experience participating in mob programming as a tester without a strong technical background. At first, it made me feel empowered, but eventually, imposter syndrome kicked in and I felt like I wasn't contributing enough value to the team. But my developers pulled me back in. Throughout the process, I learned a great deal about programming, the system under test, and about empathy.

Attendees will see that mob programming is an activity that everyone can participate in and add value to, regardless of their background. Not only can you add value, but you also gain a lot through participation. Working with a supportive team can help you overcome obstacles like imposter syndrome.

Jasmin Smith
Jasmin Smith has been working in software testing since 2008 and is currently employed by Cox Automotive as a QA Lead in their Media division in Dallas, TX. She also works as the co-chapter leader and teaches classes for the DFW chapter of Girl Develop It.

Stories from Testing Voice First Devices, Such as Alexa
Kim Knup

“Alexa, how do I test a voice skill?”

“Sorry, I don’t know that one”

If Alexa doesn’t know, how are you supposed to know?!

The Alexa companion app, which is required to set up Amazon's Echo devices, was the top app for Android and iPhone on Christmas Day. This is a strong indicator that these voice-first devices were the top gift in 2017, and sales aren’t slowing down.

Amazon doesn’t disclose official device sales, but said in a press release that “Amazon Devices also had its best holiday yet, with tens of millions of Alexa-enabled devices sold worldwide.”

This means the voice skill customer base is growing, and more and more companies are looking to break into the voice market to engage with their customers.

Having had the opportunity to work at an agency that prides itself on being a pioneer of new tech, I have definitely seen this trend: in early 2018 we were developing several Alexa skills at the same time.

This talk will tell the tale of how a mobile development team, used to working on mobile apps, had to change their design and product thinking, especially when it came to testing the new voice products.

We’re no longer developing throwaway proofs of concept. Customers want working, engaging voice skills with real-life applications for their users.

So how do you go from testing a mobile application to testing a voice skill? It can feel really daunting not to be using a keyboard, but instead to be shouting across a room while testing something.

What are the things to consider? Do you need to be able to do accents? (Note - having a diverse team helps massively already.)

But your mobile testing experience is not lost. Understanding voice design and the platform that will be used (Alexa vs Google Assistant) is the first step.

Testing can then be split into several, hopefully familiar, stages:

  • Test the design, question workflows
  • Manual testing using a variety of tools - voice device simulators
  • Unit testing the code
  • End to end testing using the device - do the user flows work, what happens when they don’t?
  • Continuous testing - mostly using monitoring tools - What are your users actually saying?

This session provides an overview of my experience testing voice apps, specifically focused on Alexa, and of the tools available to you. I’ll cover how testing a voice skill may and may not differ from more conventional testing; knowing how the pieces fit together can really help inform your testing approach. Sometimes all you need to get started is a whiteboard and another person.
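
As a flavour of the unit-testing stage, here is a small, hypothetical sketch of testing an Alexa-style skill handler against a hand-built request (the envelope is heavily trimmed and the intent name is invented; real Alexa request/response JSON carries many more fields):

```typescript
// Hypothetical sketch of unit testing an Alexa-style skill handler. The
// envelope below is heavily trimmed; real Alexa request/response JSON
// carries many more fields, and the intent name here is invented.
interface SkillRequest {
  intentName: string;
}
interface SkillResponse {
  version: string;
  response: { outputSpeech: { type: "PlainText"; text: string } };
}

function handleIntent(req: SkillRequest): SkillResponse {
  const text =
    req.intentName === "GetOpeningHoursIntent"
      ? "We are open from nine to five."
      : "Sorry, I don't know that one.";
  return { version: "1.0", response: { outputSpeech: { type: "PlainText", text } } };
}

// The assertion targets what the user would hear: no device, and no
// shouting across the room required.
const res = handleIntent({ intentName: "GetOpeningHoursIntent" });
console.assert(res.response.outputSpeech.text.includes("nine to five"));
```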

Takeaways:

  • The Alexa platform basics: These are the concepts, definitions, and underlying architecture you need to know before testing Alexa voice apps.
  • Ideas for different test phases for an Alexa skill: Some of the most useful testing we did was before a line of code was written.
  • Overview of test tools we used - from pens and notepads to the Alexa developer portal to simulators.

Kim Knup

Kim is a tester and co-organiser of the Brighton tester meet-up #TestActually and of the Software Testing Clinic. She is passionate about usability and likes to do what the user (apparently) would never do.

Over the years she’s worked in linguistic games testing, with big data archiving and asset management tools, and with ticketing systems. She has also recruited and led a small team of testers. Her interests range from usability testing to accessibility and performance testing, as well as using tools to aid exploratory testing.


Creating a Culture of Quality Assurance
Angela Riggs

Quality Assurance isn’t just a set of tools or processes - it’s a mindset, and a culture that the whole company has to accept and be involved in before quality assurance can be implemented and upheld.

I’ll offer ways that QA Engineers can begin introducing quality assurance to their engineering teams, as well as recommendations for communication and onboarding, and methods for getting engineering buy-in to new tools or processes. These include:

  • Communicating “why” and “how” to your engineering teams before implementing change
  • Lowering the barrier of entry to using new tools, such as setting up Makefiles or bash scripts
  • Pairing with engineers to help them ramp up on new processes

These methods of communication, change management, and mindful iteration will allow you to build trust around the purpose of quality assurance - most importantly that its intent is complementary, not competitive. That trust will help your engineering teams accept and embrace the mindset and culture of quality assurance.

I want the audience to come away from my talk feeling comfortable introducing and iterating on quality assurance with their engineering teams: how to communicate the needs and purpose of QA, how to ramp up engineers on the tools and systems, and how to help create a mindset of quality in their teams.

Angela Riggs

As a QA Engineer, Angela's role is at the intersection of Tester, DevOps, Architect, and Scrum Master. She approaches her work with determination and a positive energy, holding herself and her team to a high standard. Angela has an enthusiasm for learning and advancing, and appreciates that her field calls on her curiosity and attention to detail. She believes in the tenet of people over process, but also enjoys creating useful foundational processes that promote shared understandings around development workflow.

Outside of work, Angela spends her time exploring Powell's or one of Oregon's many beautiful hiking trails. She's also an enthusiastic karaoke singer (Motown Philly and Total Eclipse of the Heart are her standards), and enjoys long debates on what exactly can be defined as a sandwich.


Manual Regression Testing Manifesto
Brendan Connolly

In an agile world where having dedicated testers can be a controversial idea, manual testing can be a tough sell. Not all contexts have their regression testing automated, so what is a manual tester to do when it comes time to release? Your team starts asking what regression testing is required, wants estimates, and expects justification for the time being spent. Intuition isn't the answer, and retesting everything is not an option.

We need a set of core values to serve as a heuristic foundation for understanding and communicating about regression testing. Inspired by the agile manifesto, I'll identify five core values that testers can use to focus their regression testing efforts:

  • Consistency over Correctness
  • Behaviors over Bugs
  • Intent over Implementation
  • Conformity over Complexity
  • Common over Complete

These values will define a clear intent and context for regression tests. This clarity will allow testers to easily identify and express their goals and intentions when performing regression tests, and to highlight how regression testing differs from feature testing.

This talk will provide a lens testers can use to focus their regression testing into efficient and explainable actions and outcomes. Testers will be able to compare and contrast feature and regression testing.

The core values will help easily answer questions like:

  • What tests are you performing?
  • How do you decide? Why?

Managers, developers, and other team members will get insight into the motivations a tester brings to regression testing. They will also gain familiarity with the actions and outcomes they can expect to see from their testers during regression, leaving them better equipped to support their testers' efforts.

The session will begin with a definition of regression.

Next, I will give a brief overview of the agile manifesto and the four values contained within it.

Then I will set a foundation for the need for a manual regression testing manifesto by highlighting:

  • that the least experienced team members are tasked with regression testing
  • the subtleties of testing and its phases are not always intuitively obvious to non-testers
  • testers' intentions and actions need to be transparent to be respected

I will then begin presenting the manual regression core values and for each of the 5 I will:

  • define each term
  • relate the definition to its role / impact on testing
  • provide tangible steps or insights testers can utilize to frame their actions for communicating with their teams.

I'll wrap the session up with a call to action for testers to be more than just a mindset, to be true ambassadors of quality, through communication and skills.

Brendan Connolly
Brendan Connolly is a Software Design Engineer in Test based out of Santa Barbara, California, with over 7 years of testing experience in a variety of different roles. He writes tests at all levels, from unit and integration tests to API and UI tests, and is responsible for creating and executing testing strategies while using his coding powers to develop tooling that makes testers' lives easier. He maintains a testing blog at http://www.brendanconnolly.net/ and has published articles on Testing Circus, TestHuddle, and QA Intelligence Blog. He also tweets testing-related thoughts on Twitter @theBConnolly.

Testing the Front-end, Back-end, and Everything in Between
Bria Grangard

It’s important to remember that there are two parts of an application: the front-end and the back-end. And it’s important to test them BOTH. Testers are always looking to gain confidence in their latest releases while also saving resources. Wondering how this is possible when you need to test every component of the application and all of their interactions? Let’s talk! The front-end and back-end of applications are often discussed independently with regard to their importance; however, to guarantee the success of an application, you must test both. Bria Grangard discusses how to successfully implement a strategy for testing both the UI and API sides of your application. Learn how you can reuse work you’ve already completed, effectively test the critical components of your application, and save time, money, and effort.

Attendees walk away with an end-to-end testing strategy for success and best practices for maximizing test coverage to share with their team.

Bria Grangard
Bria Grangard is a subject matter expert in the software testing field. She manages SmartBear Software’s testing products, which include the award-winning automated testing tools CrossBrowserTesting and TestComplete, and QAComplete test management. A popular speaker on topics such as accelerating your test automation and best practices for managing test cases, Bria loves to speak to testers in all industries and all software development styles, educating them on ways they can speed up their release cycle without compromising the quality of their applications. She has earned three degrees from Dartmouth College, including two bachelor's degrees in engineering and a master's degree in engineering management.

The Joy of Monitoring Or: How I Learned to Stop Worrying and Test in Production
Amber Race

One of the goals of testing is to provide confidence for the release of a feature or product. But do you *really* know if your feature will work well in production? *Really* really? And how do you *know* you know? Too often, we do all our testing in sandboxes and then cross our fingers, hoping everything will work out fine on release. But with the monitoring tools available today, there is a better way!

This talk will draw on my experiences adding tracing to production services; deciding what to track, how to display and analyze data, and using that analysis to do testing in the actual production environment. With the right monitoring in place it is possible to run realistic load tests, find issues that might have been missed in lower environments, and have greater confidence in the impact your change will have on your actual real-world users.
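
As a minimal sketch of the instrumentation idea, here is a hypothetical timing wrapper (emitMetric is a stand-in for whatever your monitoring backend accepts, such as StatsD, CloudWatch, or a log pipeline):

```typescript
// Hypothetical instrumentation sketch. emitMetric is a stand-in for whatever
// your monitoring backend accepts (StatsD, CloudWatch, a log pipeline, etc.).
function emitMetric(name: string, valueMs: number): void {
  console.log(`metric ${name}.duration_ms=${valueMs}`);
}

// Wrap any operation, record how long it took, and emit the measurement.
async function timed<T>(name: string, op: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await op();
  } finally {
    // Record the duration even when the operation throws.
    emitMetric(name, Date.now() - start);
  }
}

// Usage: the wrapper is identical in tests and in production, so sandbox
// load tests and live traffic produce directly comparable numbers.
timed("checkout.submit", async () => ({ ok: true })).then((result) =>
  console.assert(result.ok)
);
```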

Takeaways:

  • What monitoring and logging tools are available
  • How to decide what to track
  • Ways to create meaningful reports with monitoring results
  • How to use production data to create more realistic tests
  • Strategies for using real-time monitoring to run tests in production

Amber Race

Amber Race is a Senior SDET at Big Fish Games.

After majoring in Asian Studies, teaching in Japan, and travelling the world, she stumbled into software testing and has been loving it ever since. She has over 15 years of testing experience at Big Fish and Microsoft, doing everything from manual application testing to tools development to writing automation frameworks for web services.

Amber has worked on a wide variety of products and written automation in C#, C++, Python, and Java.

She currently specialises in test automation and performance testing for high volume back-end services supporting iOS and Android games.


Climbing to the Top of the Mobile Testing Pyramid
Rick Clymer

Planning to test a mobile application can be quite a confusing time. Do I need to buy every device my users are using? Can’t I just shrink my browser down and trust the responsiveness of what’s displayed? Do I really need to worry about how a phone’s hardware can affect the way my app will work? These are all valid questions. Using the mobile testing pyramid to guide our testing efforts allows us to be more efficient about our testing (and maybe development) efforts. In the end, getting a quality product into our customers’ hands should be our top goal, and we can all efficiently achieve it by using the mobile testing pyramid.

Most recently, my team’s effort to produce a hybrid application has allowed us to reimagine our testing process for our mobile solution. Without knowing it at the time, we were following a pattern introduced by Kwo Ding: the Mobile Testing Pyramid. This pattern focuses the heaviest level of testing at the bottom of the pyramid, on browsers and API-level testing. Next, mobile device simulators and emulators get involved to start giving us real-life usage. Finally, real devices are used to get the full, real experience your end user will deal with. We will discuss the advantages and disadvantages of each level, as well as some tips and tricks we have learned across the levels. While the focus of this talk is not on how we automated our solution, we will spend a bit of time at each level discussing how we applied our automated solutions and the tools we used to help.

Takeaways:
My hope is that, with the real-life examples in this talk of how we have used the pyramid, you will walk away with a better idea of how to be more efficient at the different levels of the mobile testing pyramid.

Rick Clymer
Rick is a QA Lead at OnShift. Since falling into this idea of testing over 6 years ago, Rick has gotten to play multiple roles on the product and engineering team: from manual tester to subject matter expert to self-taught automation engineer to getting a Jenkins environment set up to run the automation on. He is so grateful to have fallen into a line of work he was unaware existed before starting in it. While Rick hates snakes, he loves Python. He is also an avid Cleveland sports fan.

How to Defuse a Bomb... Wait, I Mean a Bug
Michele Campbell

Have you ever felt the incredible pressure of trying to get a story tested against a very short deadline? When was the last time your PM said this has to go out tomorrow? How about developers demanding their story gets tested immediately even though there are five others in front of it? The pressure feels like trying to defuse a bomb, especially when you’re balancing the emotions of everyone around you, the almighty customer, and the business values. You know the story needs to go out, but you are stressed about that bug hiding, waiting to explode right when it gets to the end user. The first step is preventing the bomb from going off in the first place; this is where the most testing effort is placed. However, sometimes an issue escapes into production, and you need to know how to defuse it quickly when it does.

Expect to learn some useful processes such as:

  • pair testing
  • session-based testing
  • bug scrubs
  • daily bug triages
  • QA standups & planning
  • cross team testing
and a few other testing tricks to help you test faster, get a better product to your users, and prevent those awful bug bombs from sneaking into production.

Testing is most often seen as the end of a process. However, it’s critical to begin testing from day one of a project. You want to start when UX and PMs are designing and follow through until you know the end user is satisfied. Comparing the scrum team to a bomb squad is how I hope to reinforce that mentality. I will introduce the following methods to help reach the goal of making testing a priority, from start to finish, with the whole team:

  • Pair Testing: Pair testing is when testers take time before a story is finished to sit side by side with developers on their stories early on in the process. Basically as soon as code is working, a tester is sitting next to a developer going through the story. The big bonus of pair testing is that it gets the developer to start thinking like a tester. Other bonuses include less context switching for developers, finding bugs sooner, and creating meaningful relationships with other people on your team.
  • Session Based Testing (SBT): To improve knowledge in specific areas of a very large product, we use SBTs to track our tests and go over them with fellow testers. SBTs have improved the onboarding experience of our newest hires and the capabilities of more senior staffers.
  • Bug Scrubs: Bug scrubs are when we go over different aspects of the product as a team with a fine-tooth comb. The key point is to always include members from various parts of the organization. A tester will typically lead a test session about a feature and allow the different parties to “bang” on it. By including testers who have not really seen the feature before, UX, and even customer support, you catch issues much faster because of their unique perspectives. A UX team member will quickly catch a color that is a few shades off, and a PM will be able to rethink the flow of the product to see if there is a way a feature can be improved.
  • Daily Bug Triage: Every day, a QA member, a scrum master, and a product owner sort through the bugs reported to our backlog that day. Prioritizing and assigning each issue prevents the backlog from becoming an overwhelming, unknown entity. Instead, we know the issues, and each scrum team can work in more manageable chunks.
  • Cross-team testing: We affectionately call this process “test all the things”. It is a meeting once per sprint for QA team members to go over the newest aspects of the product. When the QA organization gets large and spread out across scrum teams, it can be daunting to stay on top of all the new things being released. As a result, it is important to meet and actually walk through the big stories with the entire QA team.
  • QA stand-ups & planning: Our QA team is spread out across twelve scrum teams. It would be impossible to stay on top of what is going on if we didn’t have our daily standup. It is a 10-15 minute meeting where we discuss what we are working on that day and how it might impact other scrum teams. It also allows us to coordinate when someone needs additional QA help on a larger story. Our planning session is a more formalized meeting, occurring once a sprint, to organize our regression test plan, learn new testing processes, and discuss new features.

Working with each member of your team, including other testers, developers, product managers, design, and customer support/success is the key to preventing issues and working through the ones that inevitably make it to production. Testers can feel trapped and think their only sphere of influence is with other testers and maybe a few developers willing to listen. However, implementing testing ideas across the company benefits not only the overall quality of the product, but the speed with which you release new features and bug fixes. Plus, you get the added bonus of killing the “QA as a bottleneck” mentality.

Michele Campbell
Michele is a QA Manager and Release Coordinator at Lucid Software in Utah, where she has been working on improving the testing process for four years. She speaks Japanese and loves visiting the country for vacations. In her free time, she enjoys playing with her pet guinea pig, baking just about anything, and going to live theater.

Getting under the Skin of a React Application - An Intro to Subcutaneous Testing
Melissa Eaden & Avalon McRae

While UI tests are often the standard approach for functional testing, they have weaknesses in speed and in the reliability of their results. Subcutaneous testing is an automated testing technique that allows testers and developers to collaborate on automated workflows as a testable unit, testing behavior and functionality in isolation from compatibility issues. We’ll highlight the benefits and risks, and demo an example of how this can work with a React/Redux front end.
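
To make the idea concrete before the demo, here is a small, hypothetical sketch of a subcutaneous test against a Redux-style store: drive the workflow by dispatching the same actions the UI would, then assert on the resulting state (the reducer here is invented; a real suite would import the application's own):

```typescript
// Hypothetical sketch of a subcutaneous test against a Redux-style front
// end: dispatch the same actions the components would, then assert on the
// resulting state. No DOM, no rendering, no browser driver. The reducer is
// invented; a real suite would import the application's own reducer.
interface AuthState {
  loggedIn: boolean;
  error: string | null;
}

type AuthAction =
  | { type: "LOGIN_SUCCEEDED" }
  | { type: "LOGIN_FAILED"; error: string };

function authReducer(state: AuthState, action: AuthAction): AuthState {
  switch (action.type) {
    case "LOGIN_SUCCEEDED":
      return { loggedIn: true, error: null };
    case "LOGIN_FAILED":
      return { loggedIn: false, error: action.error };
    default:
      return state;
  }
}

// The workflow is exercised as a testable unit, just beneath the UI layer.
let state: AuthState = { loggedIn: false, error: null };
state = authReducer(state, { type: "LOGIN_FAILED", error: "bad password" });
console.assert(state.error === "bad password");
state = authReducer(state, { type: "LOGIN_SUCCEEDED" });
console.assert(state.loggedIn);
```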

Takeaways:

  • Definition of subcutaneous testing and what it aims to accomplish.
  • A technical overview of how to do SubQ testing in a React application.
  • An explanation of the gotchas and limitations of this kind of testing.
  • How SubQ testing can change your UI Testing focus.

Melissa Eaden
Melissa Eaden has worked for more than a decade with tech companies and currently enjoys working for ThoughtWorks, in Dallas, Texas. Melissa’s previous career in mass media continues to lend itself to her current career endeavors. She enjoys being EditorBoss for Ministry of Testing, supporting their community mission for software testers globally. She can be found on Twitter and Slack @melthetester.
Avalon McRae
Avalon McRae is a full-stack software developer currently working at ThoughtWorks, based out of New York, NY. She has worked in a variety of languages including JavaScript, Groovy, Java, and Kotlin, and is currently doing React/Redux and Kotlin. She is passionate about automated testing (particularly TDD) and about supporting and mentoring women in technology. Before ThoughtWorks, Avalon studied computer science and economics at Dartmouth College.