TestBash San Francisco 2019
November 6th 2019 08:00 - November 7th 2019 18:00
Risk or Fear: What Drives Your Testing?
- Discernment: Test decisions, what is your real motivator?
- Embracing the concept, “What is good enough quality?”
- Reassessing risk by integrating new data
- How to overcome bias created by fear and previous failures
Jenna Charlton
Jenna is a senior tester with 8 years of experience. When she's not testing, she's going to pro wrestling shows and concerts with her husband Bob, serving as a deacon in her church, and cuddling the 3 feline overlords that share her home.
- Problems with miscommunication? Three Amigos Collaboration
- Problems with poor planning? Example Mapping
- Problems with missed deadlines? Snowball Test Automation
- To understand the purpose and goals of BDD
- To learn three helpful practices that can help any team:
- Three Amigos collaboration to solve communication problems
- Example Mapping to solve poor planning problems
- Snowball Test Automation to solve missed deadline problems
- To adopt a pragmatic view of software development and testing processes
Andrew Knight
Andy Knight is the “Automation Panda” - an engineer, consultant, and international speaker who loves all things software. He specializes in building robust test automation solutions from the ground up. He currently works at PrecisionLender in Cary, NC. Read his tech blog at AutomationPanda.com, and follow him on Twitter at @AutomationPanda.
The Poorly Titled TestOps Talk
Let's explore this. As a person with "TestOps" in my job title, I'd like to take the audience on a journey through the history of "TestOps" as a useful term to help describe the close working relationship between infrastructure teams and testers. We'll also look at how test environments themselves can be used as part of a modern tester's toolkit, and how TestOps practices can help prevent issues from arising in your production infrastructure. Throughout, I think we can examine the usage of the term "TestOps" itself - and see if it's a useful abstraction for you and your test organization.
- Is "TestOps" just a cool name, or is there something useful in defining a close alignment between Testers and Ops?
- What are the benefits of test teams taking ownership of their own environments?
- What does the day-to-day work of someone with "TestOps" in their job title look like?
Alexander Langshall
Alex Langshall is a TestOps Engineer and Release Manager for Lucid Software. His day-to-day work involves testing architecture-heavy features and minimizing the risk of regular weekly deploys. Alex works remotely from the Portland, Oregon area where he lives with his spouse, kiddo, and cat.
Your Brain on Usability: UX for QAs
"The user" comes up frequently in testing -- understanding your users and their workflows, and ensuring users have a positive experience with your product, is a critical aspect of testing. We ensure our products are easy to use and can handle invalid user inputs; however, many testers don't understand the most important aspect of the user -- their brain and how it works.
My talk introduces testers to cognitive psychology, and establishes how gaining a better understanding of how users retain information, complete tasks, and process visual input can improve their testing.
- Gain a basic understanding of cognitive psychology, and why understanding cognitive psychology is critical to thinking like a user.
- Gain an understanding of key cognitive psychology principles that impact product usability and how cognitive psychology and usability research has been conducted.
- Learn methods of testing that focus on product usability and accessibility, how to spot common usability concerns, and understand why those usability issues are largely universal across user personas.
- Leave with resources to further dive into usability concepts.
Jessica Versaw
Jessica's career has always centered around users -- she's worked in customer support, sales, and as a Quality Engineer. Currently, she's a Product Designer at Quantum Workplace and is working on her Master's in Human-Computer Interaction at Iowa State. She has a strong interest in cognitive psychology and how it can be used to create intuitive design, and research makes her downright giddy. When she's not designing, she enjoys being outdoors, hunting down the perfect mid-century antique (ask her about Broyhill Brasilia; she'll definitely talk about it), spinning, listening to practically any true crime podcast, and taking way too many photos of her baby and pets. She also enjoys volunteering with organizations that support women in technology, and co-founded a Girls Who Code chapter.
Cultivating Your Tester Ecosystem: Growing a QA Department from the Ground Up
- Strategies to enact the change you want to see in your company
- A push to start critically thinking about testing tasks/methodologies in place at your current company to ensure they are the best solutions for your team
- Change won't always work perfectly right away! If at first you don't succeed, don't dwell on your mistakes but learn and grow from them!
- Tools to recognize when something isn't working and knowing how to move forward productively.
- Even starting with something small, you can start on the path to a better culture of quality in your company.
Matthew Record
Matthew Record is a Software Test Engineer at Johnson Health Tech, a worldwide leader in fitness solutions. Record was the first software tester hired at Johnson and has had the unique experience of being a pivotal piece in the formation and growth of the testing group in his company today. With over five years of experience advocating for testers and implementing change, he is excited for the opportunity to share the lessons he has learned throughout his testing career with the TestBash community.
Automation Yoga: Stretching ROI with a Testing SDK
- Common pitfalls in automation frameworks
- Automation is more than writing tests
- Enable and Instrument Exploratory Testing
- SDKs provide more options for contribution
Brendan Connolly
Brendan Connolly is an experienced Software Tester, Developer, and blogger. Currently he is a Senior Quality Engineer at Procore Technologies in Santa Barbara, California. He's written tests at all levels, from unit and integration tests to API and UI tests, and is responsible for creating and executing testing strategies while using his coding powers to develop tooling that helps make testers' lives easier.
A Beginner’s Guide to Test Automation
- What’s important to consider - and what isn’t - when you’re choosing a tool or framework
- How to decide which tests to automate, and why
- Best practices for actually writing the tests, like separation of concerns and useful failure messages
- Understanding that the goal isn’t “automate everything”, but rather to automate the repetitive checks so you can work on testing higher-risk items
- Learning how to create a good test structure, such as granularity, independent tests, useful failure messages
- How to collaborate with people on your team to get their support for the time and effort of implementing automated testing
Angela Riggs
As a QA engineer, Angela’s work has ranged from feature testing to leading department-wide process changes. She believes that empathy and curiosity are driving forces of quality, and uses both to advocate for users and engineering teams. Outside of work, she enjoys exploring the aisles of Powell’s and the forests of the PNW. She has an enthusiasm for karaoke, and serious debates about what can truly be categorized as a sandwich.
Stacking The Automation Deck
- Layered architectures give us options on how to use them
- They can be used in different frameworks
- They can be used for non-traditional automation
- Appropriate stewardship is required
- Appropriate logging and error messages are critical
Paul Grizzaffi
As a Principal Automation Architect at Magenic, Paul Grizzaffi is following his passion of providing technology solutions to testing and QA organizations, including automation assessments and implementations, as well as activities benefiting the broader testing community. An accomplished keynote speaker and writer, Paul has spoken at both local and national conferences and meetings. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. Paul enjoys sharing his experiences and learning from other testing professionals; his mostly cogent thoughts can be read on his blog at https://responsibleautomation.wordpress.com/.
The "Do Nots" of Testing
- I'll present the top five “do nots” that testers have introduced into the industry.
- We will discuss these items in detail:
- why they were introduced
- some of the misperceptions they have propagated.
- We will then discuss what to replace those “do nots” with and how those suggestions allow for a more innovative approach to the industry.
Melissa Tondi
Melissa Tondi has spent most of her career working within software testing teams. She is the founder of Denver Mobile and Quality (DMAQ), past president and board member of Software Quality Association of Denver (SQuAD), and Senior QA Strategist at Rainforest QA, where she assists companies to continuously improve the pursuit of quality software, from design to delivery and everything in between. In her software test and quality engineering careers, Melissa has focused on building and organizing teams around three major tenets: efficiency, innovation, and culture. She uses the Greatest Common Denominator (GCD) approach for determining ways in which team members can assess, implement, and report on day-to-day activities so the gap between need and value is as small as possible.
Growing Test Managers
Eric Proegler
Eric Proegler has worked in testing for 20 years. He is a Director of Test Engineering for Medidata Solutions in San Francisco, California. Eric is the President of the Association for Software Testing. He is also the lead organizer for WOPR, the Workshop on Performance and Reliability. He’s presented and facilitated at CAST, Agile2015, Jenkins World, STARWEST, Oredev, STPCon, PNSQC, WOPR, and STiFS. In his free time, Eric spends time with family, runs a science fiction book club, and sees a lot of live literary events, music, and stand-up comedy. He also seeks out street food from all over, plays video games, and follows professional basketball.
This is What a Tester Looks Like
Charlene Granadosin & Charlotte Bersamin
- How to overcome Bias in Testing
- How to convince people to think like a tester
- Help others realize that we are an integral part of the development process
- Acknowledge and be aware of the Halo Effect
- Psychology in Testing is a real thing
Charlene Granadosin
With around 8 years of QA experience, Charlene used to be obsessed with having the highest bug count in the company until she realized that quality, and the product, should not be defined by the number of bugs testers catch. Since then, she's been working to bridge the information gap in terms of quality by fostering a collaborative environment between testers and developers. She believes that testers are more than bug catchers and encourages her team to explore new technologies and solutions in automation and continuous integration.
Charlotte Bersamin
Charlotte is a passionate automation engineer, focused on mobile automation. She is also an avid reader, certified chef, and Polynesian dancer! With over five years of software quality experience, she has developed her talents through insights gained across the industries she's worked in, from health and banking to Sports Media at Bleacher Report. She shares her love for software testing through mentorships and hopes to inspire others to think like a tester and question everything.
Integrating AI Into Your Tests
In this session, Jennifer Bonine will explore new shifts in testing paradigms, demonstrate an AI-first testing method that integrates with your current manual and automation testing, and explain how AI can aid your app teams. Re-think where you want to spend time and money in your testing team, tackling a challenge that plagues most companies: too much to test and too little time. This will re-position testing and quality organizations from being the last part of what happens to providing valuable insights and actionable data for your C-Suite to drive business decisions.
- Ideas to reshape your test strategies
- An understanding of AI solutioning and where to begin implementing
- Analysis of available tooling options in the AI space. (Vendor agnostic)
Jennifer Bonine
Jennifer is an experienced speaker at both international and US engagements. She has keynoted Testing and Agile Development conferences, and you can see her at Google, Agile, and Testing conferences. Jennifer is the CEO of PinkLion AI, a breakout AI company that brings AI to the world's app teams and delivers AI integration with a human engagement model while educating teams on solving challenges with an AI-first approach. Jennifer began her career in consulting, implementing large ERP solutions. She brings with her the unique industry perspective of having been on the inside of many of the brand-name companies all of us interact with in the entertainment, media, and retail industries, among others. Jennifer believes strongly in doing what you are passionate about and living your passion. She has held executive-level positions leading development and quality engineering teams for Fortune 100 companies in several industries. In a recent engagement, Jennifer served as a strategy executive and in corporate marketing for the C-Suite. She enjoys the challenges of always having new problems to solve and collaborating with new clients worldwide.
Test Machina: Demystifying AI-Driven Test Automation
Software vendors and practitioners are using artificial intelligence (AI) and machine learning (ML) to create a new wave of test automation tools. Such tools leverage autonomous and intelligent agents to explore, model, reason and learn about a software product. But how do these testing robots really work? Is this technology any good? And can we really trust it to validate software? Tariq King will introduce you to the world of AI-driven test automation and discuss its benefits, challenges and other limitations. Learn how test bots use AI/ML technologies to mimic human testing activities such as discovering the application, generating test inputs, and verifying expectations. Come and experience the test bots in action through a demonstration of open-source AI-driven test automation prototypes.
Tariq King
Tariq King is the Head of Quality at Ultimate Software. With over fifteen years' experience in software testing research and practice, Tariq leads a team of directors, architects, and engineers responsible for guidance, strategy, innovation and outreach in software quality and performance engineering. His areas of research interest include software testing, artificial intelligence, autonomic and cloud computing, model-driven engineering, and computer science education. Tariq has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has been a keynote and invited speaker at international software conferences in industry and academia. He is the co-founder of the Artificial Intelligence for Software Testing Association. Contact Tariq via LinkedIn or Twitter.
Observability: Unlearn Guessing, Reduce Stressing and Learn to Embrace Reality
For the last 5 years, I’ve been lucky enough to work on bleeding-edge software initiatives using things like microservices, containerisation, cloud-based platforms and CI/CD to deliver more value more quickly to our customers.
So, how can testers keep pace in these incredibly fast-paced environments while trying to test these highly volatile, complex distributed systems?
This scary new world presents an entirely new set of challenges and risks for teams and consequently requires a whole team approach to testing and development.
We as testers need to unlearn our old ideas about testing and learn to accept that we can’t predict system behaviour and that failure is inevitable. The truth is, our job is no longer finished when we deploy to production; it’s just beginning.
In the face of this reality, we need to focus not just on prevention but also on detection and recovery so that our teams can move quickly and safely with justifiable confidence.
Teams need to build observability into their systems from the start so that they can quickly detect important problems, isolate the cause, and remediate the issue with minimal impact to the customer.
In this session I’ll talk about my observability journey and the lessons we’ve learned. I’ll discuss how I’ve used mapping exercises, models and workshops to help development teams embrace reality and build observability into both their software systems and the way they work.
Rob Meaney is a tester who loves tough testing and software delivery problems. He works with teams to help create products that customers love and know they can rely upon. Although he enjoys learning about software delivery in general, he’s particularly interested in Quality Engineering, Test Coaching, Testability, and Testing in Production.
Currently, he’s working as Head of Testing & Test Coach for Poppulo in Cork, Ireland. He’s a regular conference speaker, an active member of the online testing community and co-founder of Ministry of Test Cork.
Previously he has held positions as Test Manager, Automation Architect and Test Engineer with companies of varying sizes, from large multinationals like Intel, Ericsson & EMC to early-stage startups like Trustev. He has worked in diverse areas from highly regulated industries like safety automation & fraud detection to dynamic, exciting industries like gaming.
Devs Should Love Your Tests
My journey at Jane started when they brought me in as their first Automation Engineer, tasked with writing all of their automated tests. However, after my evaluation of the organization's maturity level, current status, and actual expectations, I quickly learned that writing test automation wasn't exactly what they needed at that time.
Their processes and culture were in flux with many devs and product owners wanting to keep things with a "startup" feel that really meant "let me do what I want". There were no QA standards or practices, testability was a foreign word, and it seemed like no one understood how to write automated tests. However, everyone seemed to agree that they _wanted_ automated tests and had started and failed multiple times.
This story probably sounds familiar to many of you because it's a common pattern I see in many companies. Every time I'm brought in to do some training, consulting, or presenting, the same questions and comments come up:
- The QA team at the company has no idea how to implement the things they want.
- Devs own the automated testing and QA owns the UI tests, but p0 bugs get out anyway.
- We want to own test automation but don't know how to or where to start.
In the end, there is always an "Us versus Them" mentality. Sometimes it's even QA Engineers versus Automation Engineers! Regardless, this is one of the biggest reasons why companies fail at these Agile/DevOps transformations.
I want to show what I did for my current company. I want to walk through my strategy, our structure, how we integrate QA successfully in an Agile environment, Test Infrastructure and Automation, and how we are at the point where testing is top-of-mind for every member of the team. I want to talk about our transformation where leadership is excited and invested in QA and why our devs love our automated tests and are 100% bought in to testing.
Carlos Kidman is the QA Manager at Jane.com, but who would have thought that Magic the Gathering would introduce him to QA in the first place? Now it’s an integral part of his life. He started in QA Engineering, but quickly moved into Test Automation and grew to appreciate each player and role in the development game. He wants to share the love and joy he’s found with QA and believes that a rising tide raises all ships. Carlos specializes in creating Test Automation Frameworks for UI, Integration, and Service tests, scaling tests and empowering developers and testers with CI/CD tools like Jenkins, Docker, and Kubernetes, and works closely with Infrastructure and DevOps organizations.
Although he is currently a QA Manager at Jane.com, he is also very active in the community. He is the founder of QA at the Point and QA Utah and is also a board member of DevOpsDays.
From Rags to Riches: Turning Your Test Automation Into a Cinderella Story
In the era of DevOps and continuous deployment, more and more organisations are demanding a move from lengthy release cycles to shorter deployments - occurring weekly and sometimes even daily. To accomplish this, test automation is not only required, but is now an integral piece of the continuous integration pipeline. This is a stark contrast to how test automation was viewed not very long ago. In the past, teams treated test automation as a side project - a stepchild like Cinderella. But with its newly discovered importance, test automation is now the “belle of the ball”.
So, how does this change how we develop test automation? In this talk, Niranjani will share her experiences driving test automation from rags to riches at companies such as Lyft and Pinterest. She’ll discuss the practices of building a team and culture to support test automation, as well as failures and mishaps that they endured. She’ll also share the lessons learned of how to prepare tests and infrastructure for this new and richer lifestyle of being a part of CI/CD.
Join Niranjani on this magical journey to transform your test automation from rags to riches. You’ll learn how to dress up your test automation with design patterns that improve CI efficiency, and embark on a whimsical coach ride by wrapping your tests in containers to simplify your build process.
Niranjani is an enthusiastic engineer passionate about writing code to break applications! She has worked at both startups and well-established companies like eBay, Twitter, and Pinterest, balancing the challenges in both environments.
She strives to strike a balance between being a workaholic and a wanna-be-traveler, who is openminded but still likes to believe unicorns are real!
Accessibility Testing 101
As more companies come to understand the importance and necessity of building accessibility into their applications, it’s critical for testers to implement testing that truly captures issues that can inhibit access and usability for users with disabilities and those without. This session is for those new to accessibility testing and will cover basic testing techniques that can be applied to most applications. It will also help attendees to understand the basics of web/mobile accessibility and discover tools and features to help them in their testing.
- Learn the basics of digital accessibility
- Explore techniques and best practices for accessibility testing
- Overview of manual and automated accessibility testing tools
Crystal Preston-Watson is a QA Engineer at Spruce Labs in Denver, Colorado. With a decade working with tech companies in the U.S. and Canada, she has developed a passion for building quality, inclusive products. Over the years Crystal has worked as an accessibility engineer supporting and advising engineering teams, a front-end developer in fintech, and an interactive producer at a daily newspaper. Outside of work, you can find her playing backseat detective while watching old episodes of Forensic Files, photographing her two cats, Shadowmere and Ms. Etta James, and getting really weird with life.
Food Truck Party [Wednesday]
Angie Jones & Ash Coleman
After day 1 of TestBash San Francisco, conference ticket holders will be treated to a Food Truck Party right outside the conference venue! We'll have 4 food trucks from all around the Bay Area serving up some wonderful food and beverages. This is included in your ticket; it won't cost you extra.
Sit back with other attendees and speakers. Soak up the views and enjoy some great food while we reflect on a fantastic day and anticipate what the next day of talks will bring. Bring warm clothing as this is right by the water.
Simply respond "Yes" to the Food Truck Party question once you have completed your ticket purchase.
We'll cater to dietary requirements too!
Please note that this is only available to people who have purchased a ticket to the conference.
Angie Jones
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, as well as writing tutorials and blogs on angiejones.tech. As a Master Inventor, Angie is known for her innovative and out-of-the-box thinking style which has resulted in more than 25 patented inventions in the US and China. In her spare time, Angie volunteers with Black Girls Code to teach coding workshops to young girls in an effort to attract more women and minorities to tech.
Ash Coleman
Ash, a former chef, put recipes aside when she began her career in software development, falling back on the engineering skills she acquired as a kid building computers with her brother. A progressive type, Ash has focused her efforts within technology on bringing awareness to the inclusion of women and people of colour, especially in the Context-Driven Testing and Agile communities. An avid fan of matching business needs with technological solutions, you can find her doing her best work on whichever coast is the sunniest. Having helped teams build out testing practices, formulate Agile processes and redefine culture, she now works as an Engineering Manager in Quality for Credit Karma and continues consulting based out of San Francisco.
When a Hot-fix Isn't a Hot-fix
The term hot-fix can sometimes be used incorrectly by teams that want to release a feature or small fix outside the regular release cycle. When out-of-cycle releases happen, they risk introducing issues because changes are not properly tested or because they impact areas that were not anticipated, which is why any out-of-cycle release should be done with extreme caution.
With every project team having its own feature priorities and deadlines, the challenge is making an unbiased decision about whether an out-of-cycle release should happen based on data, not goal completion, with little to no user impact.
In the talk I’ll discuss how teams can take back the term “hot-fix” by weighing quantitative versus qualitative data and moving away from release cadences based on feature readiness to set release cycles. This discussion will include how to navigate teams trying to release their features to meet their own goals and deadlines, the process of finding a solution to consistent out-of-cycle releases, and the process of creating a scalable, unbiased solution.