TestBash Detroit 2020
Ministry of Testing has partnered with the wonderful Hilary Weaver-Robb to bring you a lineup that is sure to bring learning opportunities for people at all levels in their careers.
We’re opening the week with the 3-day Automation in Testing course with Richard Bradshaw and Mark Winteringham. We follow that with 3 full-day workshops and conclude the week with our beloved single track conference day, TestBash, where we’ll have nine thought-provoking talks and our Community Space for added learning opportunities in the breaks. Workshops will take place in the Marriott. TestBash will be in St Andrews Hall.
Pro Ministry of Testing members get an additional $75 off the workshops and conference day for TestBash Detroit! Not Pro? Sign up today to save on your TestBash tickets and gain access to every past TestBash talk, online courses and a whole host more.
Monday, 20th April 2020
Automation in Testing - 3 Day Course - 20th - 22nd April 2020
Mark Winteringham & Richard Bradshaw
What Do We Mean By ‘Automation in Testing’?
Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view on how you can and should be utilising automation in your testing.
Why You Should Take This Course
Automation is everywhere; its popularity and uptake have rocketed in recent years, and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.
Automation doesn’t tell you:
- what tests you should create
- what data your tests require
- what layer in your application you should write them at
- what language or framework to use
- if your testability is good enough
- if it’s helping you solve your testing problems
It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.
This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore those tests: why they exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.
What You Will Learn On This Course
To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.
After completing the online courses attendees will be able to:
- Describe and explain some key concepts/terminology associated with programming
- Interpret and explain real code examples
- Design pseudocode for a potential automated test
- Develop a basic understanding of programming languages relevant to the AiT course
- Explain the basic functionality of a test framework
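One of those pre-course outcomes, designing pseudocode for a potential automated test, might produce something like the sketch below (the login scenario is invented purely for illustration; it is not an example from the course materials):

```
Check: a registered user can log in
  GIVEN a user account exists with known credentials
  WHEN the login form is submitted with those credentials
  THEN the user's dashboard is displayed
  AND a session token has been created
```

The point of the exercise is deciding what to check and why, before any implementation language enters the picture.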
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.
The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reverse-engineering the decisions we’ve made to understand why we implemented those tests and built those tools.
By the end of day one, attendees will be able to:
- Survey and dissect the current state of automation usage in the industry
- Compare their company's usage of automation with that of other attendees
- Describe the principles of Automation in Testing
- Describe the difference between checking and testing
- Recognize and elaborate on all the skills required to succeed with automation
- Model the ideal automation specialist
- Dissect existing automated checks to determine their purpose and intentions
- Show the value of automated checking
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.
The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.
By the end of day two, attendees will be able to:
- Differentiate between human testing and an automated check, and teach it to others
- Describe the anatomy of an automated check
- Model an application to determine the best interface at which to create an automated check
- Discover new libraries and frameworks to assist with automated checking
- Discover opportunities to design automation to assist testing
- Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
- Propose potential tools for their current testing contexts
We’ll start day three by concluding our exploration of toolsmithing, creating some new tools for the test app and discussing the potential for tools in the attendees' companies. The middle part of day three will be spent talking about how to talk about automation.
It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.
By the end of day three, attendees will be able to:
- Justify the need for tooling beyond automated checks, and convince others
- Design and implement some custom tools
- Debate the use of automation in modern testing
- Devise and coherently explain an AIT strategy
What You Will Need To Bring
Please bring a laptop (OS X, Linux or Windows) with all the prerequisites installed; these will be sent to you before the course.
Is This Course For You?
Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.
I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.
I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based, however, AiT is a mindset, so we believe you will benefit from attending the class and learning a theory to apply to any product/language.
I’m a manager who is interested in strategy but not programming, should I attend?
Yes. One of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.
What languages and tools will we be using?
The current setup uses Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.
Mark Winteringham
I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and public sector, working with various web, mobile and desktop technologies.
I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester
Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.
Thursday, 23rd April 2020
All Day Sessions | 9:00am - 5:30pm
“How should we test this?” is one of the toughest mysteries a practising tester can encounter. To answer this question we need to consider our context and devise a workable strategy. This is a skill that is seldom taught, and much of the related literature is weak and uninspiring. To develop this important skill, we devised this workshop, in which we will sharpen your strategic thinking skills to enable you to create a baseline test strategy for a product in the time it takes to drink a cup of tea.
During this interactive workshop we will work in groups to create heuristics that will work as a mind palace: resources that enable us to remember and think fast. We will also work on fast context analysis and modelling your context. Equipped with these tools, the groups will create test strategies for a series of project contexts. We will present and debrief our strategies, think critically about the presented strategies and collectively improve our test strategy skills. This is going to be fast-paced and a lot of fun.
Don’t worry if you think you’ve never created a test strategy before or that you can’t create a solid test strategy in the time it takes to drink a cup of tea. It’s not magic, you know. We’ll start nice and easy and allow you more time to work on strategies but as the workshop progresses and your skills develop, we’ll increase the complexity of the context and shorten the timescales. By the end of the session you’ll feel like Sherlock Holmes, ready to tackle any project context in the blink of an eye!
- Learn about heuristics to use to create test strategies
- Learn how to create a test strategy fast
- Improve your thinking skills
Huib Schoots
Huib Schoots is a coach, consultant, tester and people lover. He shares his passion for software development and testing through coaching, training, and giving presentations on a variety of agile and test subjects. Huib believes that working together in the workplace ultimately makes the difference in software development. He therefore helps people in teams to do what they are good at: by empowering them and helping them continuously improve teamwork. Curious and passionate, he is an agile coach and an exploratory and context-driven tester who attempts to read everything ever published on software development, testing and agile. Huib maintains a blog on magnifiant.com and tweets as @huibschoots. He works for de Agile Testers: an awesome place where passionate and truly agile people try to make the world a better place. He has a huge passion for music and plays trombone in a brass band.
The Whole Team Approach to Continuous Delivery
Lisa Crispin & Ashley Hunsberger
In this hands-on workshop, participants will have a chance to practice techniques that can help teams feel confident releasing more frequently. You’ll practice using frameworks and conversation starters together with your team to discuss what questions each step in your delivery pipeline needs to answer, and to understand the value each step provides. All materials used are freely available online so participants can try them with their own teams.
You’ll learn the language of DevOps so you can collaborate with all delivery team members to grow your DevOps culture and infrastructure. You'll work in small groups to come up with new experiments to overcome problems like how to complete manual testing activities and still do CD, how to shorten feedback cycles, how to make sure all essential types of testing are done continually, and how to fit testing into the continuous world by engaging the whole team. You’ll learn that there IS a “test” in “DevOps”.
Whether your tests take minutes or days, and whether your deploys happen hourly or quarterly, you’ll discover benefits. The tutorial will include an overview of techniques for "testing in production". You’ll participate in a simulation to visualize your team’s current path to production and uncover risks to both your product and your deployment process. No laptops required, just bring your curiosity.
- Continuous delivery concepts at a high level, and the differences between continuous integration and continuous delivery
- Common terminology and a generic question list to engage with pipelines as a practice within your team
- How to use the Test Suite Canvas to design a pipeline that gives your team confidence to release frequently
- Experience in analyzing pipelines from different perspectives to create a layered diagram of feedback loops, risks mitigated, and questions answered
- Ways your team can design experiments to address the many challenges of testing in a continuous world
Lisa Crispin is the co-author, with Janet Gregory, of Agile Testing Condensed, More Agile Testing: Learning Journeys for the Whole Team (2014), Agile Testing: A Practical Guide for Testers and Agile Teams (2009), the LiveLessons Agile Testing Essentials video course, and the “The Whole Team Approach to Agile Testing” 3-day training course. She co-authored Testing Extreme Programming (2002) with Tip House. She is a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011), Beautiful Testing (O’Reilly, 2009) and other books. Lisa was voted by her peers as the Most Influential Agile Testing Professional Person at Agile Testing Days in 2012. She enjoys helping people find ways to build more quality into their software products, as well as hands-on testing. Please visit www.lisacrispin.com and www.agiletester.ca for more.
Ashley is the Director of Release Engineering at Blackboard, Inc., a leading provider of educational technology, where she leads efforts to enable teams throughout the organisation to get to production as fast as possible, with as high a quality as possible. She shares her experiences in testing and engineering productivity through writing and speaking around the world.
A proponent of open source, Ashley believes in giving back to the software community and serves as a member of the Selenium Project Steering Committee and co-chair of the Selenium Conference.
Moving from SQA to SQE: Introduction to API Testing
Hemory Phifer & Lee Caldwell
Many of today’s applications are shifting towards using APIs to help facilitate communication of data. Whether they are used to communicate with microservices internally or to integrate with external clients, leveraging APIs is becoming increasingly prevalent in today’s tech-forward companies. As quality champions, it is important that we have a baseline knowledge of what APIs are, how they work, and how to test them.
Moving from Software Quality Analyst (SQA) to Software Quality Engineer (SQE): Introduction to API Testing is a hands-on workshop focused on testing APIs using Postman and C#.
Basic knowledge of C#, testing, and APIs is encouraged but not necessary. This workshop is designed to be foundational.
- Explain what an API is, and identify the four main HTTP request methods: GET, POST, PUT, DELETE
- Explain the benefits of API testing and best practices
- Demonstrate the features of Postman as a tool (collections, environments, tests, debugging)
- Utilize Postman’s scripting feature to automate testing and workflow
- Build tests to address response code, functionality, performance, and security
- Compose some basic API tests in C#
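Postman's scripting feature, mentioned above, uses JavaScript. As a flavour of the kind of response checks the workshop covers, here is a minimal sketch written as a plain, self-contained function so it runs outside Postman; the field names, thresholds and mocked responses are invented for illustration, not taken from the workshop materials:

```javascript
// Check an API response for status code, performance and basic functionality.
// Returns a list of failure messages; an empty list means all checks passed.
function checkResponse(response) {
  const failures = [];
  if (response.status !== 200) {
    failures.push(`expected status 200 but got ${response.status}`);
  }
  if (response.elapsedMs > 500) {
    failures.push(`response took ${response.elapsedMs}ms, budget is 500ms`);
  }
  if (!response.body || typeof response.body.id !== "number") {
    failures.push("response body is missing a numeric 'id' field");
  }
  return failures;
}

// Mocked response objects standing in for real HTTP calls.
const ok = { status: 200, elapsedMs: 120, body: { id: 42 } };
const slow = { status: 200, elapsedMs: 900, body: { id: 42 } };
console.log(checkResponse(ok).length);   // 0 failures
console.log(checkResponse(slow).length); // 1 failure
```

In Postman itself the same ideas appear as test scripts against the `pm` API; the structure of the checks carries over directly to the C# portion of the workshop.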
Hemory Phifer
Hemory Phifer is a Senior IT Trainer at United Shore, where he is afforded the opportunity to train the next generation of developers. Himself a Software Developer turned People Developer, he transitioned from developing software at Quicken Loans to spearheading an in-house training initiative that trained team members to become software developers. Passionate about growing in technology and growing people, Hemory has focused his talents on bringing technology training to the community through initiatives like ExperienceIT, a free software development boot camp in partnership with several technology companies in downtown Detroit. Hemory also co-founded DevYou with Leonidas Caldwell, a start-up dedicated to bringing foundational programming knowledge through Java and C#. As one who deems himself “in permanent beta”, Hemory is dedicated to personal growth and cultivating others.
Lee Caldwell
Lee Caldwell is a Software Development team leader and scrum master at Amrock. He previously worked as a Software Quality Engineer during the company’s focused efforts to transition from supporting Software Quality Analysts to supporting Software Quality Engineers, and embraced and helped drive the SQE culture during that transition. Recently Lee combined his passion for teaching with his passion for software development to co-found DevYou, a start-up that focuses on teaching introductory software development skills. With a background in education, Lee brings an engaging twist and unique style to teaching software development concepts.
Friday, 24th April 2020
The Case of the Tenacious Tester: How Using Your Words Improves Your Work
- Using a narrative style in note-taking makes it easier to produce test artifacts and reproduce bugs
- In a world of Slack and email, thoughtful writing and storytelling improve team dynamics and communication about bugs and defects
- Writing effectively and elegantly allows you to communicate directly with clients and product owners in a way that enhances the status of QA on a project
Katrina Ohlemacher
Katrina Ohlemacher is a traveler, a writer, a procrastinator, and any number of other nouns. Hers is a tale as old as time: After working at newspapers and nuclear power plants, she made the leap into Quality Assurance by quitting her job and attending a boot camp with no back-up plan. But as Hunter S. Thompson says, God watches out for fools and sportswriters, and she eventually landed at Detroit Labs. She would like to note that the Oxford comma has been added to this bio over her vehement protests.
The Only Good Quality Metric is Morale
- Pitfalls of commonly used metrics: number of bugs found, production defects, time to resolution...
- Morale as a meaningful metric: studies have shown high-performing teams are teams with high morale and psychological safety
- Measuring morale in significant ways: surveys, team discussions, retros
- Increasing morale to increase quality
Jenny came up through support and DevOps, cutting her teeth on that interesting role that acts as the 'translator' between customer requests from support and the development team. Her love of support and the human side of problems lets her find a sweet spot between empathy for the user and empathy for her team.
She's done testing, support, or human interfacing for most of her career. She finds herself happiest when she's making an impact on other people--whether it's helping find issues in applications, leading scrum, speaking at events, or just grabbing a coffee and chatting.
How I Became a Bonafide Full-time Quality Champion and How You Can Become One as Well
Come hear my story about how my journey from tester to influencer made me an effective quality champion. I will present the methods I've used to market and implement quality: influencing friendly and hostile people alike, nudging them into the quality mindset, and presenting ideas that sparked the growth of individual and team-wide initiatives. If you've ever wondered what the journey from tester to quality champion looks like, or whether that snazzy title really means anything, come check out this talk.
- A series of examples of how empowering other team members through workshops and appropriate planning allowed me to focus on coaching and driving strategic initiatives.
- A Template for an effective test strategy, to be executed by developers, business analysts or team leaders.
Maciek Konkolowicz
Maciek has been a quality champion his entire professional life. For many years, he’s been focusing on learning, implementing, showing and spreading the idea of quality championship to whoever he can corner, be it Dev, QA, BA, or even Project Managers. He’s a passionate technologist who loves to externalize his thoughts to gain the perspectives of others. He has spoken at local meetups and conferences and loves to share his passion for the quality crusade.
How to Lie with Test Automation Metrics
I'm a senior software engineer for a large publications crosswords and games group. In this diabolical anti-talk, I'll reveal the time-worn nasty secrets handed down through generations of test and QA professionals. Automation engineers love measuring things like code coverage, browser coverage, build time, and pass/fail rates for one very simple reason.
These stats can be cheated.
Ever the bad influence, I'll lead a seminar in how to use these metrics to drive a practice that is solely for the purpose of beefing up those metrics. Real software testing, they'll reason, is not about limiting rework or ensuring confidence in a team's software delivery process. It's about those numbers. It's about validating the existence of the practice itself. More than anything else, it's about looking out for number one.
Then, at the end, I'll talk about a couple of metrics that are actually fine and how we can measure those.
- How to maximize test coverage without doing a lot of actual testing
- How to ignore which platforms your users are on and test every possible browser/OS/device config available, damn the costs.
- How to keep a lot of quick, stable tests in your test suite by ignoring the fact that they don't provide any confidence in your product releases.
- The Secret of the Eternal Janitor: If there are always flaky tests to fix, they'll always need someone on the team who can fix flaky tests.
- But seriously: What are some actual non-satirical good metrics to use and how can we track them?
Phil Wells
Phil Wells has been a software quality practitioner for over a decade. Now, Phil is a senior software engineer with the New York Times crosswords and games team. This team maintains the most popular crossword product in the world. Phil works to ensure that this team builds quality into every new feature and game they deliver. Phil likes to go beyond writing tests and building infrastructure for delivery. He also acts as a coach for his peers in web development, teaching and advocating for modern test practices and technologies. People have all sorts of funny ideas about what Phil does every day. Phil does not construct the puzzle content for the crosswords. Phil does not program an AI to solve crosswords, although that would be awesome. Phil does not know Will Shortz. If you see Phil walking around the conference, feel free to say, "Hi, Phil!"
Accepting Compliments and Other Acts of Bravery
- You can learn to be as kind to yourself as you are to others.
- Empathy is part of leadership.
- Lifting others up lifts you up, too.
Reverse-engineer Your Way to Adopting a Risk-based Testing Approach
Nishi Grover Garg
Before I could think about adopting a risk-based testing approach into our test planning, I had a challenge at hand: convincing my team. I would like to share how I convinced them by using their own case study: our previous sprint’s data, defect counts based on user stories, and calculated risk priority numbers. You too can reverse-engineer your way to adopting a simple, no-frills risk-based testing approach!
- Analyzing a team’s sprint history in terms of risk
- Calculating the Risk Priority Number (RPN) and defining the extent of testing
- Finding risk areas and re-focusing testing effort on high-risk areas
- A simple, no-frills approach to risk-based testing
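In classic FMEA-style risk analysis, the Risk Priority Number is the product of severity, occurrence and detection ratings. The talk's exact scales and data aren't given, so the 1-10 ratings and story names below are assumptions purely for illustration:

```javascript
// Risk Priority Number: severity x occurrence x detection, each rated 1-10.
// A higher RPN means higher risk, so that story warrants deeper testing.
function riskPriorityNumber(severity, occurrence, detection) {
  return severity * occurrence * detection;
}

// Hypothetical sprint history: defect counts per user story inform the ratings.
const stories = [
  { name: "checkout", severity: 9, occurrence: 7, detection: 4 },
  { name: "profile page", severity: 3, occurrence: 2, detection: 2 },
];

// Rank stories by RPN to re-focus testing effort on high-risk areas.
const ranked = stories
  .map(s => ({ ...s, rpn: riskPriorityNumber(s.severity, s.occurrence, s.detection) }))
  .sort((a, b) => b.rpn - a.rpn);

console.log(ranked[0].name, ranked[0].rpn); // checkout 252
```

The ranking, not the absolute numbers, is what drives the extent of testing per story.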
Nishi Grover Garg
Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Sahi Pro software as an Evangelist and Trainings Head. She is passionate about training and organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Nishi also writes on technical topics of interest to the industry, with articles published at popular forums and on her own blog https://testwithnishi.com/, where she writes about the latest topics in the Agile and Testing domains. Please connect with her on Twitter (testwithnishi) and LinkedIn - https://www.linkedin.com/in/nishi-g-02127aa/
Evil User Stories - Improve Your Application Security
The idea of evil user stories is simple: First, identify important data and assets in the application you are protecting. Then, identify threat scenarios by completing the sentence "An attacker should not be able to...".
You can use evil user stories in development by putting them in the backlog and adding mitigations as acceptance criteria. This helps in implementing security together with functionality. In addition, they are a good starting point for test planning and getting testers involved in design.
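As a sketch of what such a backlog item might look like (the scenario and acceptance criteria below are invented for illustration, not taken from the talk):

```
Evil user story:
  An attacker should not be able to read another customer's order history.

Mitigations as acceptance criteria:
  - Order API requests are rejected unless the session belongs to the order's owner
  - Order identifiers are not sequential or guessable
  - Failed authorisation attempts are logged and alerted on
```

Each mitigation then becomes both a development task and a prompt for test planning.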
You will learn to create evil user stories from different attacker perspectives and will be able to make security efforts visible in the backlog which is a step closer to building security in.
- How to create evil user stories to find potential threats on the system you are protecting
- Evil user stories make security work visible on the backlog and security features get implemented alongside functionality
- Evil user stories can be used as test planning aid
- Different methods of finding attacker perspectives
Anne Oikarinen
Anne Oikarinen is a Senior Security Consultant who works with security and software development teams to help them design and develop secure software. Anne believes that cyber security is an essential part of software quality. After working several years in a security software development team in various duties such as testing, test management, training, network design and product owner tasks, Anne focused her career fully on cyber security. In her current job at Nixu Corporation, Anne divides her time between hacking and threat analysis, although as a network geek, she will also ensure that your network architecture is secure. Anne also has experience in incident response and security awareness after working in the National Cyber Security Centre of Finland. Anne holds a Master of Science (Technology) degree in Communication Networks and Protocols from Tampere University of Technology, Finland.
A Quick Start Guide to Test Automation
Development teams nowadays are trying to do more and more with less. Many people are wearing multiple hats and have to get projects delivered quickly. Test automation is one of the development areas impacted by this. That's no surprise, as the trajectory for test automation doesn't appear to be slowing down; every team is looking to add some test automation into their software development approach.
The questions they find themselves asking are not unique; however, with the myriad of advice, guides, tools and approaches out there, it can become overwhelming. Add in the fact that test automation can become a rabbit hole really fast if not executed correctly, and it's imperative to take a structured approach that takes your context into consideration.
In this talk, I’m going to share my experiences and approaches to help you create a test automation strategy that prioritises for maximum value and allows for a quick return on investment (ROI). I won’t be pushing any particular tools or patterns, instead, I’ll be offering guidelines for quickly ramping up automation efforts, especially with limited resources.
The target audience for this talk is technical members of the development team, such as Test Engineers, QA Engineers, Developers and Managers, specifically those on teams with limited resources. That said, the principles in this talk can be applied by anyone.
- Attendees will learn how to evaluate and create a test automation strategy with quick ROI.
- Attendees will learn how to prioritize test cases to automate for maximum value.
- Attendees will understand some common test automation pitfalls to avoid.
AI-Driven Testing In Production: Towards a Future of Self-Testing Systems
- Different approaches to testing in production safely and securely.
- How AI can leverage information from testing in production to simulate real-world testing scenarios.
- Benefits and challenges of testing in production.
- Why the systems of the future will need to test themselves in production.