TestBash Detroit 2020

Ministry of Testing has partnered with the wonderful Hilary Weaver-Robb to bring you a lineup that is sure to offer learning opportunities for people at all levels of their careers.

We’re opening the week with the 3-day Automation in Testing course with Richard Bradshaw and Mark Winteringham. We follow that with 3 full-day workshops and conclude the week with our beloved single track conference day, TestBash, where we’ll have nine thought-provoking talks and our Community Space for added learning opportunities in the breaks. Workshops will take place in the Marriott. TestBash will be in St Andrews Hall.

Pro Ministry of Testing members get an additional $75 off the workshops and conference day for TestBash Detroit! Not Pro? Sign up today and you'll not only save on your TestBash tickets but also get access to every past TestBash talk, online courses and a whole host more.

Speakers

Huib Schoots
Tester, Coach, Consultant and Trainer
Lisa Crispin
Testing Advocate
Ashley Hunsberger
Director of Release Engineering
Hemory Phifer
Senior IT Trainer
Lee Caldwell
Software Development team leader and scrum master
Mark Winteringham
DojoBoss
Richard Bradshaw
BossBoss
Katrina Ohlemacher
Senior Quality Engineer
Jenny Bramble
Software Test Engineer
Maciek Konkolowicz
Software Quality Architect
Phil Wells
Senior Software Engineer
Katy Farmer
Developer Advocate
Nishi Grover Garg
Evangelist & Head of Trainings, Sahi Pro
Anne Oikarinen
Senior Security Consultant
John Dorlus
Sr. Developer in Test
Tariq King
Founder and CEO

Schedule

Monday, 20th April 2020

Training

What Do We Mean By ‘Automation in Testing’?

Automation in Testing is a new namespace designed by Richard Bradshaw and Mark Winteringham. The use of automation within testing is changing, and in our opinion, existing terminology such as Test Automation is tarnished and no longer fit for purpose. So instead of having lengthy discussions about what Test Automation is, we’ve created our own namespace which provides a holistic, experience-based view of how you can and should be utilising automation in your testing.

Why You Should Take This Course

Automation is everywhere; its popularity and uptake have rocketed in recent years and it’s showing little sign of slowing down. So in order to remain relevant, you need to know how to code, right? No. While knowing how to code is a great tool in your toolbelt, there is far more to automation than writing code.

Automation doesn’t tell you:

  • what tests you should create
  • what data your tests require
  • what layer in your application you should write them at
  • what language or framework to use
  • if your testability is good enough
  • if it’s helping you solve your testing problems

It’s down to you to answer those questions and make those decisions. Answering those questions is significantly harder than writing the code. Yet our industry is pushing people straight into code and bypassing the theory. We hope to address that with this course by focusing on the theory that will give you a foundation of knowledge to master automation.

This is an intensive three-day course where we are going to use our sample product and go on an automation journey. This product already has some automated tests, and it already has some tools designed to help test it. Throughout the three days we are going to explore the tests: why those tests exist, the decisions behind the tools we chose to implement them in, why that design and why those assertions. Then there are tools: we'll show you how to expand your thinking and strategy beyond automated tests to identify tools that can support other testing activities. As a group, we will then add more automation to the project, exploring the why, where, when, who, what and how of each piece we add.

What You Will Learn On This Course

Online
To maximise our face to face time, we’ve created some online content to set the foundation for the class, allowing us to hit the ground running with some example scenarios.

After completing the online courses attendees will be able to:

  • Describe and explain some key concepts/terminology associated with programming
  • Interpret and explain real code examples
  • Design pseudocode for a potential automated test
  • Develop a basic understanding of programming languages relevant to the AiT course
  • Explain the basic functionality of a test framework

Day One
The first half of day one is all about the current state of automation, why AiT is important and discussing all the skills required to succeed with automation in the context of testing.

The second half of the day will be spent exploring our test product along with all its automation and openly discussing our choices, reversing the decisions we’ve made to understand why we implemented those tests and built those tools.

By the end of day one, attendees will be able to:

  • Survey and dissect the current state of automation usage in the industry
  • Compare their company's usage of automation to that of other attendees
  • Describe the principles of Automation in Testing
  • Describe the difference between checking and testing
  • Recognize and elaborate on all the skills required to succeed with automation
  • Model the ideal automation specialist
  • Dissect existing automated checks to determine their purpose and intentions
  • Show the value of automated checking

Day Two
The first half of day two will continue with our focus on automated checking. We are going to explore what it takes to design and implement reliable, focused automated checks. We’ll do this at many interfaces of the application.

The second half of the day focuses on the techniques and skills a toolsmith employs. Building tools to support all types of testing is at the heart of AiT. We’re going to explore how to spot opportunities for tools, and how the skills required to build tools are nearly identical to building automated checks.

By the end of day two, attendees will be able to:

  • Differentiate between human testing and an automated check, and teach the difference to others
  • Describe the anatomy of an automated check
  • Model an application to determine the best interface at which to create an automated check
  • Discover new libraries and frameworks to assist with automated checking
  • Implement automated checks at the API, JavaScript, UI and visual interfaces
  • Discover opportunities to design automation to assist testing
  • Appreciate that techniques and tools like CI, virtualisation, stubbing, data management, state management, bash scripts and more are within reach of all testers
  • Propose potential tools for their current testing contexts

Day Three
We’ll start day three by concluding our exploration of toolsmithing: creating some new tools for the test app and discussing the potential for tools in the attendees’ companies. The middle part of day three will be spent talking about how to talk about automation.

It’s commonly said that testers aren’t very good at talking about testing; the same is true of automation. We need to change this.

By the end of day three, attendees will be able to:

  • Justify the need for tooling beyond automated checks, and convince others
  • Design and implement some custom tools
  • Debate the use of automation in modern testing
  • Devise and coherently explain an AIT strategy

What You Will Need To Bring

Please bring a laptop running OS X, Linux or Windows, with all the prerequisites (which will be sent to you) installed.

Is This Course For You?

Are you currently working in automation?
If yes, we believe this course will provide you with numerous new ways to think and talk about automation, allowing you to maximise your skills in the workplace.
If no, this course will show you that the majority of skill in automation is about risk identification, strategy and test design, and you can add a lot of value to automation efforts within testing.

I don’t have any programming skills, should I attend?
Yes. The online courses will be made available several months before the class, allowing you to establish a foundation ready for the face to face class. Then full support will be available from us and other attendees during the class.

I don’t work in the web space, should I attend?
The majority of the tooling we will use and demo is web-based; however, AiT is a mindset, so we believe you will benefit from attending the class and learning theory you can apply to any product or language.

I’m a manager who is interested in strategy but not programming, should I attend?
Yes, one of our core drivers is to educate others in identifying and strategising around problems before automating them. We will offer techniques and teach you skills to become better at analysing your context and using that information to build a plan towards successful automation.

What languages and tools will we be using?
The current setup is using Java and JS. Importantly though, we focus more on the thinking than the implementation, so while we’ll be reading and writing code, the languages are just a vehicle for the context of the class.

Mark Winteringham

I am a tester, coach, mentor, teacher and international speaker, presenting workshops and talks on technical testing techniques. I’ve worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and the public sector, working with various web, mobile and desktop technologies.

I’m an expert in technical testing and test automation and a passionate advocate of risk-based automation and automation in testing practices, which I regularly blog about at mwtestconsultancy.co.uk. I’m also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing. I have a keen interest in various technologies too, regularly developing new apps and Internet of Things devices. You can get in touch with me on Twitter: @2bittester


Richard Bradshaw
Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years of testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, and is currently the FriendlyBoss at The Ministry of Testing. Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He is also the creator of the YouTube channel, Whiteboard Testing.

Thursday, 23rd April 2020

Workshops

All Day Sessions | 9:00am - 5:30pm
The quick wit and sharp observational skills Sherlock Holmes used to analyze and solve the greatest mysteries are legendary. And even though Holmes often expressed a need for sleuthing to stick to the facts, his actions would often demonstrate that he was very reliant on his intuition as well, and that he clearly saw both logic and intuition as equal partners in solving the mysteries before him.

“How should we test this?” is one of the toughest mysteries a practicing tester can encounter. To answer this question we need to consider our context and devise a workable strategy. This is a skill that is seldom taught, and much of the related literature is weak and uninspiring. To develop this important skill, we devised this workshop to sharpen your strategic thinking, enabling you to create a baseline test strategy for a product in the time it takes to drink a cup of tea.

During this interactive workshop we will work in groups to create heuristics that will work as a mind palace: resources that enable us to remember and think fast. We will also work on fast context analysis and modelling your context. Equipped with these tools, the groups will create test strategies for a series of project contexts. We will present and debrief our strategies, think critically about the presented strategies and collectively improve our test strategy skills. This is going to be fast-paced and a lot of fun.

Don’t worry if you think you’ve never created a test strategy before or that you can’t create a solid test strategy in the time it takes to drink a cup of tea. It’s not magic, you know. We’ll start nice and easy and allow you more time to work on strategies but as the workshop progresses and your skills develop, we’ll increase the complexity of the context and shorten the timescales. By the end of the session you’ll feel like Sherlock Holmes, ready to tackle any project context in the blink of an eye!

Takeaways

  • Learn about heuristics to use to create test strategies
  • Learn how to create a test strategy fast
  • Improve your thinking skills
Huib Schoots
Huib Schoots is a coach, consultant, tester and people lover. He shares his passion for software development and testing through coaching, training, and giving presentations on a variety of agile and test subjects. Huib believes that working together in the workplace ultimately makes the difference in software development. He, therefore, helps people in teams to do what they are good at: by empowering them and help them continuously improve teamwork. Curious and passionate, he is an agile coach and an exploratory and context-driven tester who attempts to read everything ever published on software development, testing and agile. Huib maintains a blog on magnifiant.com and tweets as @huibschoots. He works for de Agile Testers: an awesome place where passionate and truly agile people try to make the world a better place. He has a huge passion for music and plays trombone in a brass band.
Is your team puzzling over how to feel confident releasing to production frequently with continuous delivery (CD)? Delivering reliable and valuable software frequently, at a sustainable pace (to paraphrase Elisabeth Hendrickson), is a worthy goal. DevOps is a hot buzzword, but many teams struggle with how to fit testing in. Everyone talks about building a quality culture,  but how does that work?

In this hands-on workshop, participants will have a chance to practice techniques that can help teams feel confident releasing more frequently. You’ll practice using frameworks and conversation starters together with your team to discuss what questions each step in your delivery pipeline needs to answer, and to understand the value each step provides. All materials used are freely available online so participants can try them with their own teams.

You’ll learn the language of DevOps so you can collaborate with all delivery team members to grow your DevOps culture and infrastructure. You'll work in small groups to come up with new experiments to overcome problems like how to complete manual testing activities and still do CD, how to shorten feedback cycles, how to make sure all essential types of testing are done continually, and how to fit testing into the continuous world by engaging the whole team. You’ll learn that there IS a “test” in “DevOps”.

Whether your tests take minutes or days, and whether your deploys happen hourly or quarterly, you’ll discover benefits. The tutorial will include an overview of techniques for "testing in production". You’ll participate in a simulation to visualize your team’s current path to production and uncover risks to both your product and your deployment process. No laptops required, just bring your curiosity.

Takeaways

  • Continuous delivery concepts at a high level, and the differences between continuous integration and continuous delivery
  • Common terminology and a generic question list to engage with pipelines as a practice within your team
  • How to use the Test Suite Canvas to design a pipeline that gives your team confidence to release frequently
  • Experience in analyzing pipelines from different perspectives to create a layered diagram of feedback loops, risks mitigated, and questions answered
  • Ways your team can design experiments to address the many challenges of testing in a continuous world
 
Lisa Crispin

Lisa Crispin is the co-author, with Janet Gregory, of Agile Testing Condensed, More Agile Testing: Learning Journeys for the Whole Team (2014), Agile Testing: A Practical Guide for Testers and Agile Teams (2009), the LiveLessons Agile Testing Essentials video course, and “The Whole Team Approach to Agile Testing” 3-day training course. She co-authored Extreme Testing (2002) with Tip House. She is a contributor to Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011), Beautiful Testing (O’Reilly, 2009) and other books. Lisa was voted by her peers as the Most Influential Agile Testing Professional Person at Agile Testing Days in 2012. She enjoys helping people find ways to build more quality into their software products, as well as hands-on testing. Please visit www.lisacrispin.com and www.agiletester.ca for more.


Ashley Hunsberger

Ashley is the Director of Release Engineering at Blackboard, Inc., a leading provider of educational technology, where she leads efforts to enable teams throughout the organisation to get to production as fast as possible, with as high a quality as possible. She shares her experiences in testing and engineering productivity through writing and speaking around the world.

A proponent of open source, Ashley believes in giving back to the software community and serves as a member of the Selenium Project Steering Committee and co-chair of the Selenium Conference.


Many of today’s applications are shifting towards using APIs to facilitate the communication of data. Whether they are used to communicate with microservices internally or to integrate with external clients, leveraging APIs is becoming increasingly prevalent in today’s tech-forward companies. As quality champions, it is important that we have a baseline knowledge of what APIs are, how they work, and how to test them.

Moving from Software Quality Analyst (SQA) to Software Quality Engineer (SQE): Introduction to API Testing is a hands-on workshop focused on testing APIs using Postman and C#.

Prerequisites:
Basic knowledge of C#, testing, and APIs is encouraged but not necessary. This workshop is designed to be foundational.

Takeaways

  1. Explain what an API is, and identify the four main HTTP request methods: GET, POST, PUT, DELETE
  2. Explain the benefits of API testing and best practices
  3. Demonstrate the features of Postman as a tool (collections, environments, test, debugging)
  4. Utilize Postman’s scripting feature to automate testing and workflow
  5. Build tests to address response code, functionality, performance, and security
  6. Compose some basic API tests in C#
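The workshop itself uses Postman and C#; as a language-neutral illustration of the kinds of checks an API test makes (status code, content type, response fields, rough timing), here is a hedged sketch in Python. The `/users/1` endpoint is a hypothetical stub spun up locally so the example is self-contained.

```python
# A minimal sketch of an API test, assuming a hypothetical /users/1
# endpoint. The stub server stands in for a real service so the
# example runs anywhere; the workshop uses Postman and C# instead.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApi(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/users/1":
            body = json.dumps({"id": 1, "name": "Ada"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example's output quiet
        pass

def run_checks(base_url):
    """Return check results, mirroring what a Postman test script asserts."""
    start = time.time()
    with urllib.request.urlopen(base_url + "/users/1") as resp:
        elapsed = time.time() - start
        payload = json.load(resp)
        return {
            "status_ok": resp.status == 200,
            "is_json": resp.headers["Content-Type"] == "application/json",
            "has_expected_fields": {"id", "name"} <= payload.keys(),
            "fast_enough": elapsed < 2.0,  # crude performance check
        }

# Start the stub on an ephemeral port, run the checks, then shut down.
server = HTTPServer(("127.0.0.1", 0), StubApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
results = run_checks(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print(results)
```

The same four assertions map directly onto takeaways 3 to 5 above: response code, functionality, and a first rough performance signal.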
Hemory Phifer
Hemory Phifer is a Senior IT Trainer at United Shore, where he is afforded the opportunity to train the next generation of developers. Himself a Software Developer turned People Developer, he transitioned from developing software at Quicken Loans to spearheading an in-house training initiative that trained team members to become software developers. Passionate about growing in technology and growing people, Hemory has focused his talents on bringing technology training to the community through initiatives like ExperienceIT, a free software development boot camp run in partnership with several technology companies in downtown Detroit. Hemory also co-founded DevYou with Leonidas Caldwell, a start-up dedicated to bringing foundational programming knowledge through Java and C#. As one who deems himself “in permanent beta”, Hemory is dedicated to personal growth and cultivating others.
Lee Caldwell
Lee Caldwell is a Software Development team leader and scrum master at Amrock. He previously worked as a Software Quality Engineer during the company’s focused efforts to transition from supporting Software Quality Analysts to supporting Software Quality Engineers, and he embraced and helped drive the SQE culture during that transition. Recently Lee combined his passion for teaching with his passion for software development to co-found DevYou, a start-up that focuses on teaching introductory software development skills. With a background in education, Lee brings an engaging twist and unique style to teaching software development concepts.

Friday, 24th April 2020

Conference

Never trust to general impressions, my boy, but concentrate yourself upon details. So says Sherlock Holmes in “A Case of Identity.” The Great Detective has a lot to say that applies to testing: pay attention to detail, never make assumptions, use your imagination. We wouldn’t have Sherlock Holmes without good writing and great storytelling, and this talk has a lot to say about how using your own writing and storytelling skills makes for better reporting, better reproducibility, better team dynamics and better overall testing. I will cover note-taking, bug reports, client communication and more, all with help from Baker Street’s most famous resident. 

Takeaways

  • Using a narrative style in note-taking makes it easier to produce test artifacts and reproduce bugs
  • In a world of Slack and email, thoughtful writing and storytelling improve team dynamics and communication about bugs and defects
  • Writing effectively and elegantly allows you to communicate directly with clients and product owners in a way that enhances the status of QA on a project
Katrina Ohlemacher
Katrina Ohlemacher is a traveler, a writer, a procrastinator, and any number of other nouns. Hers is a tale as old as time: After working at newspapers and nuclear power plants, she made the leap into Quality Assurance by quitting her job and attending a boot camp with no back-up plan. But as Hunter S. Thompson says, God watches out for fools and sportswriters, and she eventually landed at Detroit Labs. She would like to note that the Oxford comma has been added to this bio over her vehement protests.
Every metric for a QA team has pitfalls.
 
Some are combative and drive a wedge between departments. Some are useless or easily gamed. Some don't make any sense at all. So what's an agile team to do if they want to measure the quality of the software they are working so hard to produce? 
 
The speaker suggests looking inward, past all the obvious metrics and to the heart of the team. The team's morale can accurately predict the quality of the software they produce. In this talk, she will discuss some common metrics and their pitfalls before making the case for morale as the top QA metric. She'll show how to measure changes in morale over time and what you can do to help increase the morale--and thus quality!--of your team.

Takeaways

  •  Pitfalls of commonly used metrics: number of bugs found, production defects, time to resolution...
  •  Morale as a meaningful metric: studies have shown high-performing teams are teams with high morale and psychological safety
  •  Measuring morale in significant ways: surveys, team discussions, retros
  •  Increasing morale to increase quality
 
Jenny Bramble

Jenny came up through support and DevOps, cutting her teeth on that interesting role that acts as the 'translator' between customer requests from support and the development team. Her love of support and the human side of problems lets her find a sweet spot between empathy for the user and empathy for her team.

She's done testing, support, or human interfacing for most of her career. She finds herself happiest when she's making an impact on other people--whether it's helping find issues in applications, leading scrum, speaking at events, or just grabbing a coffee and chatting.


You are born, you grow up (a bit) and then you become a quality champion...right? Wrong. The career path for most of us is dark and full of tech terrors. As quality advocates, it is our responsibility to push further along the road to quality nirvana each day by making ourselves better, faster and technically stronger.

Come hear the story of how my journey from tester to influencer made me an effective quality champion. I will present methods to market and implement quality that I've used to influence friendly and hostile people alike, push them into the quality mindset, and present ideas that sparked the growth of individual and team-wide initiatives. If you've ever wondered what the journey from tester to quality champion looks like, or whether that snazzy title really means anything, come check out this talk.

Takeaways

A strategy for removing yourself from day to day testing activities, and morphing into the role of a quality coach. Specifically: 
  1. A series of examples of how empowering other team members through workshops and appropriate planning allowed me to focus on coaching and driving strategic initiatives.
  2. A template for an effective test strategy, to be executed by developers, business analysts or team leaders.
Maciek Konkolowicz
Maciek has been a quality champion his entire professional life. For many years, he’s been focusing on learning, implementing, showing and spreading the idea of quality championship to whoever he can corner, be it Dev, QA, BA, or even Project Managers. He’s a passionate technologist who loves to externalize his thoughts to gain perspectives of others. He has spoken at local meetups and conferences and loves to share his passion for the quality crusade.

I'm a senior software engineer for a large publication's crosswords and games group. In this diabolical anti-talk, I'll reveal the time-worn nasty secrets handed down through generations of test and QA professionals. Automation engineers love measuring things like code coverage, browser coverage, build time, and pass/fail rates for one very simple reason.

These stats can be cheated.

Ever the bad influence, I'll lead a seminar in how to use these metrics to drive a practice that is solely for the purpose of beefing up those metrics. Real software testing, they'll reason, is not about limiting rework or ensuring confidence in a team's software delivery process. It's about those numbers. It's about validating the existence of the practice itself. More than anything else, it's about looking out for number one.

Then, at the end, I'll talk about a couple of metrics that are actually fine and how we can measure those.
 

Takeaways

  • How to maximize test coverage without doing a lot of actual testing
  • How to ignore which platforms your users are actually on and test every possible browser/OS/device config available, damn the costs.
  • How to keep a lot of quick, stable tests in your test suite by ignoring the fact that they don't provide any confidence in your product releases.
  • The Secret of the Eternal Janitor: if there are always flaky tests to fix, the team will always need someone who can fix flaky tests.
  • But seriously: What are some actual non-satirical good metrics to use and how can we track them?
 
Phil Wells
Phil Wells has been a software quality practitioner for over a decade. Now, Phil is a senior software engineer with the New York Times crosswords and games team. This team maintains the most popular crossword product in the world. Phil works to ensure that this team builds quality into every new feature and game they deliver. Phil likes to go beyond writing tests and building infrastructure for delivery. He also acts as a coach for his peers in web development, teaching and advocating for modern test practices and technologies. People have all sorts of funny ideas about what Phil does every day. Phil does not construct the puzzle content for the crosswords. Phil does not program an AI to solve crosswords, although that would be awesome. Phil does not know Will Shortz. If you see Phil walking around the conference, feel free to say, "Hi, Phil!"
The fastest way to get me to change the topic in a conversation is to compliment me. I will twist, turn, and segue into weather, sports I don’t understand, or Marvel Comics conspiracy theories before I will acknowledge a compliment. I will suddenly become a fount of wildlife trivia, take up jogging, or pretend I’m getting a phone call. I’ve always been like this, but when I transitioned into engineering, I started to notice the people around me doing it, too--especially people who fell into underrepresented groups--and I started to get mad. Why is it so hard for us to value our own work like we value the work of others?
 
In this talk, we’ll explore what it means to value our personal and professional achievements, and how higher self-worth makes us better teammates and empathetic leaders. We’ll talk about the many ways in which I have failed and the lessons I’ve learned as I began to care about and for myself. This talk will feature jokes (humor is my preferred coping mechanism) and hand-drawn slides (to keep anyone from looking too closely at me). More importantly, the audience will learn how valuing ourselves leads us to value each other.
 
This talk doesn’t end with an epiphany. It doesn’t end with revelatory tech or a link to a download. It ends with two simple words that are a seed for something bigger: be brave.
 

Takeaways

  1. You can learn to be as kind to yourself as you are to others.
  2. Empathy is part of leadership.
  3. Lifting others up lifts you up, too.
Katy Farmer
Katy lives in Oakland, CA with her sweetheart and two dogs (at least one of whom talks to her about fun, technical things). She loves to break stuff, try to fix it, and then break it again. Ask her about: Ruby, Javascript, Russian Literature, Star Wars, Dragon Age, and all things sugar.
When I first heard about risk-based testing, I interpreted it as an approach that could help devise a targeted test strategy. Learning about risk-based testing can give us a new approach to our testing challenges. Although risk-based testing may ideally be a bigger undertaking, we can begin simply: analyze the product, and each sprint, for the impending risk areas, then follow them through test design and development, execution and reporting. That alone helps in time crunches.
But before I could think about adopting this approach into our test planning, I had a challenge at hand--to convince my team. I would like to share how I convinced them using their own case study: our previous sprint's data, defect counts based on user stories, and calculated risk priority numbers. You too can reverse engineer your way to adopting a simple, no-frills risk-based testing approach!
 

Takeaways

  • Analyzing a team’s sprint history in terms of risk
  • Calculating a Risk Priority Number (RPN) and defining the extent of testing
  • Finding risk areas and refocusing testing effort on high-risk areas
  • A simple, no-frills approach to risk-based testing
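The talk does not prescribe a formula, but a common way to compute a Risk Priority Number (borrowed from FMEA practice) is to score each area on likelihood, impact and detectability and multiply. A hedged sketch, with entirely hypothetical stories and 1-5 scales:

```python
# Hedged sketch of RPN-based prioritisation. The stories, the 1-5
# scales and the scores below are hypothetical; likelihood might be
# derived from past sprint defect counts, as the talk suggests.
def rpn(likelihood, impact, detectability):
    """Risk Priority Number: higher means riskier, so test it harder."""
    return likelihood * impact * detectability

stories = {
    "checkout flow": rpn(4, 5, 3),  # many past defects, high impact
    "profile page":  rpn(2, 2, 2),
    "audit export":  rpn(3, 4, 4),  # failures are hard to spot
}

# Rank stories so high-risk areas get the deepest testing first.
ranked = sorted(stories, key=stories.get, reverse=True)
print(ranked)  # checkout flow (60), audit export (48), profile page (8)
```

The ranking, not the absolute numbers, is what drives the "extent of testing" decision for each area.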
 
Nishi Grover Garg
Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works at Sahi Pro as an Evangelist and Head of Trainings. She is passionate about training and organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Nishi also writes on technical topics of interest to the industry, with articles published in popular forums and on her own blog, https://testwithnishi.com/, where she writes about the latest topics in the Agile and Testing domains. Please connect with her on Twitter (testwithnishi) and LinkedIn - https://www.linkedin.com/in/nishi-g-02127aa/
Are you tired of fixing security bugs afterwards in a hurry? Have you gone through depressing penetration testing reports too many times? Evil user stories are a way of addressing security threats in the planning and implementation phases.

The idea of evil user stories is simple: First, identify important data and assets in the application you are protecting. Then, identify threat scenarios by completing the sentence "An attacker should not be able to...". 

You can use evil user stories in development by putting them in the backlog and adding mitigations as acceptance criteria. This helps in implementing security together with functionality. In addition, they are a good starting point for test planning and getting testers involved in design. 

You will learn to create evil user stories from different attacker perspectives and will be able to make security efforts visible in the backlog which is a step closer to building security in. 

Takeaways

  • How to create evil user stories to find potential threats on the system you are protecting
  • How evil user stories make security work visible on the backlog, so security features get implemented alongside functionality
  • How evil user stories can be used as a test planning aid
  • Different methods of finding attacker perspectives 
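As a rough illustration of the pattern described above, an evil user story can be captured as a backlog item whose mitigations double as acceptance criteria. The class, fields and example story below are hypothetical, not from the talk.

```python
# Hypothetical sketch: an evil user story as a backlog item whose
# mitigations become verifiable acceptance criteria for testers.
from dataclasses import dataclass, field

@dataclass
class EvilUserStory:
    asset: str                     # the data or asset being protected
    threat: str                    # "An attacker should not be able to ..."
    mitigations: list = field(default_factory=list)

    def acceptance_criteria(self) -> list:
        # Each mitigation is rephrased as a criterion a tester can check.
        return [f"Verify: {m}" for m in self.mitigations]

story = EvilUserStory(
    asset="user session tokens",
    threat="An attacker should not be able to hijack another user's session.",
    mitigations=[
        "session tokens are regenerated on login",
        "tokens are sent only over HTTPS with the Secure flag",
    ],
)
for criterion in story.acceptance_criteria():
    print(criterion)
```

Putting items like this on the backlog keeps the security work visible and gives testers concrete checks to plan against.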
Anne Oikarinen
Anne Oikarinen is a Senior Security Consultant who works with security and software development teams to help them design and develop secure software. Anne believes that cyber security is an essential part of software quality. After working several years in a security software development team in various duties such as testing, test management, training, network design and product owner tasks, Anne focused her career fully on cyber security. In her current job at Nixu Corporation, Anne divides her time between hacking and threat analysis, although, as a network geek, she will also ensure that your network architecture is secure. Anne also has experience in incident response and security awareness from working in the National Cyber Security Centre of Finland. Anne holds a Master of Science (Technology) degree in Communication Networks and Protocols from Tampere University of Technology, Finland.

Development teams nowadays are trying to do more and more with less. Many people are wearing multiple hats and have to get projects delivered quickly. Test automation is one of the development areas impacted by this. That is no surprise, as the trajectory of test automation doesn't appear to be slowing down; every team is looking to add some test automation to its software development approach.

The questions they find themselves asking are not unique; however, with the myriad of advice, guides, tools and approaches out there, it can become overwhelming. Add in the fact that test automation can quickly become a rabbit hole if not executed well, and it becomes imperative to take a structured approach that takes your context into consideration.

In this talk, I’m going to share my experiences and approaches to help you create a test automation strategy that prioritises for maximum value and allows for a quick return on investment (ROI). I won’t be pushing any particular tools or patterns, instead, I’ll be offering guidelines for quickly ramping up automation efforts, especially with limited resources.

Target Audience

The target audience for this talk is technical members of the development team, such as Test Engineers, QA Engineers, Developers and Managers, specifically those on teams with limited resources. That said, the principles in this talk can be applied by anyone.

Takeaways

  • Attendees will learn how to evaluate and create a test automation strategy with quick ROI.
  • Attendees will learn how to prioritize test cases to automate for maximum value.
  • Attendees will understand some common test automation pitfalls to avoid.
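One simple way to prioritize automation candidates for quick ROI is to weigh expected value against automation cost. The scoring factors, weights and test names below are illustrative assumptions; the talk does not prescribe this particular formula.

```python
# Hypothetical value-vs-cost scoring for test automation candidates.
# All factors are rated 1 (low) to 5 (high); factors and formula are
# illustrative, not from the talk.

def automation_priority(risk: int, run_frequency: int, effort: int) -> float:
    """Higher risk and run frequency raise priority; higher effort lowers it."""
    return (risk * run_frequency) / effort

candidates = {
    "login smoke test": automation_priority(risk=5, run_frequency=5, effort=1),
    "checkout regression": automation_priority(risk=5, run_frequency=4, effort=3),
    "report PDF pixel-diff": automation_priority(risk=2, run_frequency=1, effort=5),
}

# Automate the highest-scoring candidates first for the quickest ROI.
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Even a crude score like this forces the conversation about which tests actually pay back their automation cost, which is the heart of a value-driven strategy.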
John Dorlus
John is a dev gone QA who has worked on testing products used by millions worldwide, such as Elasticsearch, GitHub and Firefox. John is currently at Elastic, helping to develop test automation solutions in JavaScript. He loves tea, K-pop and anything having to do with cufflinks. Find him on LinkedIn.
DevOps as a culture, movement and philosophy is leading to an increase in the practice of shifting testing to the right, towards production. Many organizations now use continuous integration and delivery pipelines to make decisions about production readiness and, once the software is released, leverage real-time monitoring to detect and debug issues.

Testing in Production (TiP) has historically been the subject of great scrutiny due to its frequent association with insufficient pre-production testing. However, when applied appropriately, TiP can be a highly effective means of validation and verification. Unlike testing in a lab or staging environment, TiP provides feedback on system behavior using real user scenarios, data and configurations.

Tariq King believes that the next generation of test automation involves combining TiP with AI-driven testing techniques. In other words, the machines of the future will learn how to test by training on information gathered from real users acting in production environments. Similarly, test scenarios will be executed in production environments to provide best-effort simulations. Join Tariq as he explains different approaches to TiP with AI, its key benefits and challenges, and how he envisions these technologies moving us towards a future where systems and services test themselves.

Takeaways

  • Different approaches to testing in production safely and securely.
  • How AI can leverage information from testing in production to simulate real-world testing scenarios.
  • Benefits and challenges of testing in production.
  • Why the systems of the future will need to test themselves in production.
Tariq King
Tariq King is the founder and CEO of Selftest IO, a company on a mission to develop the next generation of systems and services with intrinsic self-testing properties. Tariq has over fifteen years' experience in software testing research and practice, and has formerly held positions as a test architect, engineering manager, director, and head of quality. Tariq holds Ph.D. and M.S. degrees in Computer Science from Florida International University (FIU). His areas of research are software testing, artificial intelligence, autonomic and cloud computing, model-driven engineering, and computer science education. He has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has been an international keynote speaker at leading software conferences in industry and academia. He is the co-founder, with Jason Arbon, of the Artificial Intelligence for Software Testing Association.