MindMap: Manual Testing in an Agile Environment

This week we have a MindMap courtesy of Matt Archer, who is also running a Manual Testing in an Agile Environment course with us in London later this year. How convenient :)

This MindMap is a bit of a gem, or monster as we refer to it.  It’s big and full of useful advice for any tester working in an agile, manual testing environment.

The MindMap image is below, followed by a checklist and a handy downloadable ZIP file containing various formats (PDF, Text, Word, PNG, jpg).

Manual Testing in an Agile Environment

Click for full size

 

Download (ZIP File 8.7 MB)

 

1. Common challenges experienced by manual testers in agile teams

1.1 The sprint comes to an end, but testing is not yet finished

1.1.1 Common Causes

1.1.1.1 Test execution started too late in the sprint
  • 1.1.1.1.1 Watch out for the “mini-waterfall” sprint
  • 1.1.1.1.2 Stories are too large
  • 1.1.1.1.3 People incorrectly assume testers only want to test a new feature once it’s “finished”
1.1.1.2 Test execution takes too long

1.1.1.2.1 Poor application testability

  • 1.1.1.2.1.1 Difficult to setup system state / pre-conditions
  • 1.1.1.2.1.2 Lack of reporting / logging for diagnostic purposes

1.1.1.2.2 Lack of tools / utilities for manual testing

  • 1.1.1.2.2.1 Log parsers
  • 1.1.1.2.2.2 Data generators
  • 1.1.1.2.2.3 Semi-automated oracles
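One of the quickest wins on this list is a small data generator. As a minimal sketch of the idea (the field names, value pools and function name below are illustrative assumptions, not from the mind map), a few lines of Python can produce fresh, reproducible test records on demand instead of hand-typing them each sprint:

```python
import random
import string

def generate_person(seed=None):
    """Return one plausible personnel record for manual test setup.

    Field names and value pools are illustrative assumptions only.
    """
    rng = random.Random(seed)
    return {
        "first_name": rng.choice(["Alice", "Bob", "Carol", "Dan", "Eve"]),
        "last_name": rng.choice(["Smith", "Jones", "Patel", "Garcia", "Chen"]),
        "staff_id": "".join(rng.choice(string.digits) for _ in range(6)),
    }

# A fixed seed reproduces the same record -- useful when re-testing a bug.
print(generate_person(seed=42))
```

Seeding the generator is the design point worth copying: the same seed recreates the same data, so a record that exposed a bug can be regenerated when the fix arrives.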

1.1.1.2.3 Slow / unreliable process for creating test builds

  • 1.1.1.2.3.1 Testers told to get on with it themselves
  • 1.1.1.2.3.2 Minimal tool / automation support
  • 1.1.1.2.3.3 Builds are unusable when they arrive
  • 1.1.1.2.3.3.1 Few / no unit tests

1.1.1.2.4 Bug count allowed to escalate

  • 1.1.1.2.4.1 Unfixed bugs camouflage other bugs
  • 1.1.1.2.4.2 Unfixed bugs lead to duplicate effort
  • 1.1.1.2.4.3 Unfixed bugs distract the entire team
1.1.1.3 Too much time spent on test preparation
  • 1.1.1.3.1 Too documentation heavy
  • 1.1.1.3.2 Analysis paralysis
  • 1.1.1.3.2.1 Yes, it happens to testers too!
  • 1.1.1.3.3 Trying to prepare too far ahead
1.1.1.4 Team velocity based on coding only
  • 1.1.1.4.1 Testing is ignored / forgotten
1.1.1.5 Testers given poor advice
  • 1.1.1.5.1 “Just test the same way you’ve always tested”
  • 1.1.1.5.1.1 Some traditional testing practices are compatible with agile software development
  • 1.1.1.5.1.2 Others less so!

1.2 Testing is finished during the sprint, but confidence is low

1.2.1 Common Causes

1.2.1.1 Testing is quick, but ad-hoc

  • 1.2.1.1.1 Too little planning
  • 1.2.1.1.1.1 Proper planning
  • 1.2.1.1.1.1.1 Not just creating test plans from templates!
  • 1.2.1.1.2 Little thought given to how testing thoroughness and coverage will be measured
  • 1.2.1.1.2.1 Testing finishes when the sprint ends
  • 1.2.1.1.3 Testers given poor advice
  • 1.2.1.1.3.1 Just “sniff” around the areas not covered by the automated tests

1.2.1.2 Scope of the release is unclear

  • 1.2.1.2.1 Change is unstructured and uncontrolled
  • 1.2.1.2.2 Niche / subtle features added without the tester’s knowledge

1.2.1.3 Testing performed by people with little testing experience

  • 1.2.1.3.1 Only the obvious bugs discovered
  • 1.2.1.3.2 Testing seen as a background task
  • 1.2.1.3.2.1 Performed on a best endeavours basis

2 Use models to aid rapid test design and keep a record of your tests

2.1 Models exist as part of test design techniques

2.1.1 Examples

  • 2.1.1.1 Boundary Value Analysis
  • 2.1.1.2 State Transition Testing
  • 2.1.1.3 Classification Trees
  • 2.1.2 Created by
  • 2.1.2.1 Testers
  • 2.1.3 Make the model (diagram) your test preparation
  • 2.1.3.1 DON’T explicitly create any tests based on the model
  • 2.1.3.2 DO define the tests you want to run as a coverage target over the model
  • 2.1.3.2.1 “Test that a booking can be moved from every state to every other (valid) state”
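A coverage target like the booking example above can be made concrete by walking the model rather than writing tests one by one. Here is a minimal sketch, assuming a toy booking state model (the states and function name are illustrative, not from the mind map), that turns "every state to every other valid state" into an enumerated list of manual tests:

```python
# A toy state model for a booking: each state maps to its valid next states.
# The states here are assumptions for illustration, not from the mind map.
BOOKING_STATES = {
    "draft":     {"confirmed", "cancelled"},
    "confirmed": {"completed", "cancelled"},
    "cancelled": set(),
    "completed": set(),
}

def transition_coverage(model):
    """List every valid (from_state, to_state) pair -- one manual test each."""
    return [(src, dst)
            for src in sorted(model)
            for dst in sorted(model[src])]

for src, dst in transition_coverage(BOOKING_STATES):
    print(f"Test: move booking from {src!r} to {dst!r}")
```

The model stays the single source of truth: when a new state is added, the coverage target regenerates itself, which is exactly the maintenance saving the mind map is pointing at.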

2.2 Models exist as part of development and requirement techniques

2.2.1 Examples

  • 2.2.1.1 Activity diagrams
  • 2.2.1.2 Entity relationship diagrams
  • 2.2.1.3 Security matrices

2.2.2 Created by

  • 2.2.2.1 Other members of the team
  • 2.2.3 Make adding a coverage target to someone else’s model your test preparation
  • 2.2.3.1 An extremely fast way to define tests for a sprint
  • 2.2.3.1.1 And get feedback from others
  • 2.2.3.2 “Test all of the security permissions described in the security matrix for all public-facing roles (both positive and negative cases)”
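The security-matrix target above works the same way as the state-model one: the tester adds the coverage rule, someone else owns the model. As a hedged sketch (the roles, permissions and function name below are hypothetical, not from the mind map), a matrix expands into one positive test per granted permission and one negative test per denied permission:

```python
# A hypothetical security matrix: role -> permissions the role SHOULD hold.
SECURITY_MATRIX = {
    "visitor": {"view_public"},
    "member":  {"view_public", "post_comment"},
}
ALL_PERMISSIONS = {"view_public", "post_comment", "edit_any_post"}

def permission_tests(matrix, all_permissions):
    """One positive test per granted permission, one negative per denied."""
    return [(role, perm, "allowed" if perm in granted else "denied")
            for role, granted in sorted(matrix.items())
            for perm in sorted(all_permissions)]

for role, perm, expected in permission_tests(SECURITY_MATRIX, ALL_PERMISSIONS):
    print(f"Test: {role} -> {perm}, expect {expected}")
```

Note that the negative cases fall out for free: any permission not in a role's row becomes a "should be denied" test, which is the half that ad-hoc testing most often skips.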

2.3 Models exist in our minds

2.3.1 Our meta-models of the world around us

  • 2.3.1.1 And the systems we test

2.3.2 Use them to challenge / explore the systems being tested and the physical artefacts used to describe them

2.3.3 Stress test your own mental models and the mental models of others using NLP

  • 2.3.3.1 Good source of test ideas for exploratory testing
  • 2.3.3.2 See “NLP for Testers” (Alan Richardson)

3 Consider converting test scripts to checklists

3.1 Test scripts focus on how to interact with the software to test it

  • 3.1.1 Often lengthy to write
  • 3.1.2 Often difficult to maintain

3.2 Checklists focus on what to test about the software and why it’s important

3.2.1 Quick to write

  • 3.2.1.1 As short as 1 line / sentence per test

3.2.2 Quick to maintain

3.3 Different types of checklist

3.3.1 The target of a checklist can vary

3.3.1.1 A feature

  • 3.3.1.1.1 Example
  • 3.3.1.1.1.1 “Account Management”
  • 3.3.1.1.2 Can be used to support the testing of a specific story, feature or function

3.3.1.2 A characteristic / category of features

  • 3.3.1.2.1 Example
  • 3.3.1.2.1.1 “All User Interfaces”
  • 3.3.1.2.2 Can be reused across the entire system
3.3.2 The focus of a checklist can vary
  • 3.3.2.1 Think “types of testing”
  • 3.3.2.2 Examples
  • 3.3.2.2.1 Positive
  • 3.3.2.2.2 Negative
  • 3.3.2.2.3 Functional
  • 3.3.2.2.4 Performance
  • 3.3.2.3 The list is unlimited
3.3.3 Checklist data can vary

3.3.3.1 Implicit

  • 3.3.3.1.1 Person performing the test provides the data in real-time
  • 3.3.3.1.1.1 Slower to execute
  • 3.3.3.1.1.2 But more variety over time

3.3.3.2 Explicit

  • 3.3.3.2.1 Suggestions for data values included in the checklist
  • 3.3.3.2.1.1 Quicker to execute
  • 3.3.3.2.1.2 But beware the repetition
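The implicit/explicit distinction is easy to carry in the checklist itself. As a minimal sketch (the checks and data values below are made-up examples, not from the mind map), an entry is one line of intent plus an optional list of suggested values:

```python
# Each entry is a single line of test intent; "data" is optional.
# data = None  -> implicit: the tester invents values in real time.
# data = [...] -> explicit: suggested values ride along with the check.
CHECKLIST = [
    {"check": "Account creation rejects a duplicate email address",
     "data": None},
    {"check": "Name fields accept accented characters",
     "data": ["Åsa", "José", "Nguyễn"]},
]

for item in CHECKLIST:
    hint = "" if item["data"] is None else " (try: " + ", ".join(item["data"]) + ")"
    print("- " + item["check"] + hint)
```

Keeping both styles in one list lets a team mix them per check: explicit data where speed matters, implicit where variety over time matters.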

4 Adopting new agile testing practices

4.1 Don’t just chase the buzz-words

  • 4.1.1 You know what they are!

4.2 For every practice you introduce or change, ask yourself…

4.2.1 “Will this help me provide meaningful, quality related feedback, faster… through predominantly manual, human-driven, activities?”

  • 4.2.2 “Am I reducing the wasteful aspect of my testing?”
  • 4.2.2.1 “Or adding more waste!?”

 

4.3 Avoid vanity metrics

  • 4.3.1 Number of teams using practice X
  • 4.3.2 Number of people who have attended training course Y
  • 4.3.3 Number of team members who hold certification Z
  • 4.3.4 Hours spent between team members and the agile coach

5 When you document, do so succinctly and with pace

5.1 Don’t repeat yourself (DRY)

  • 5.1.1 Do large pieces of one test look like large pieces of another?

5.2 Do you really need it? (DYRNI)

5.2.1 Do you have too much detail in your tests?

5.2.2 Do you really need it?

5.2.3 Who is it for?

  • 5.2.3.1 You?
  • 5.2.3.2 Another team member?
  • 5.2.3.3 Just in case!?
  • 5.2.3.3.1 Beware (!)

5.3 Don’t get blocked (DGB)

5.3.1 Agile “requirements” are rarely intended to be analysed alone

  • 5.3.1.1 Remember the 3 Cs
  • 5.3.1.1.1 Card
  • 5.3.1.1.2 Conversation (!)
  • 5.3.1.1.3 Confirmation

5.3.2 Don’t allow yourself to get blocked or wonder “what if” for too long

  • 5.3.2.1 Find the correct person to have a conversation with
  • 5.3.2.1.1 Fill the knowledge gap

5.3.3 Create a “need-more-info” annotation that you can use when nobody is around

  • 5.3.3.1 “As a user I would like to be able to import personnel records from other systems”
  • 5.3.3.1.1 “What other system?”
  • 5.3.3.1.2 “I’ll find out later”
  • 5.3.3.2 “Test that the data integrity of a personnel record is maintained when it is imported from (need-more-info)”
  • 5.3.3.2.1 “I’ll replace the text between my ‘need-more-info’ annotations”

5.4 Optimise by relying on common information in other locations

  • 5.4.1 Wikis / whiteboards / tangible locations
  • 5.4.1.1 Test environment settings
  • 5.4.1.2 Test tool guides
  • 5.4.1.3 Training material
  • 5.4.1.4 Username / password vaults
  • 5.4.2 Your mind / experience!

6 Why manual testing? Don’t agile projects automate?

6.1 Automated tests can’t cover everything

6.1.1 Usability
6.1.2 Cross browser

  • 6.1.2.1 UI glitches
  • 6.1.2.2 Rendering problems
  • 6.1.2.3 Browsers / operating systems unsupported by automation tool

6.1.3 Style / branding

6.1.4 Mobile technologies

  • 6.1.4.1 Unsupported devices
  • 6.1.4.2 Unsupported interactions

6.2 Great test ideas come from manual interaction, exploration and investigation (i.e. manual testing!)

  • 6.2.1 Maybe these are ultimately automated
  • 6.2.2 Maybe not

6.3 Sometimes no automated tests exist

  • 6.3.1 Lack of desire
  • 6.3.2 Lack of skill
  • 6.3.3 “Lack of time”

6.4 Sometimes test automation coverage is low

  • 6.4.1 Moving from waterfall to agile
  • 6.4.2 The result of one or two hectic sprints
  • 6.4.2.1 Team has been focusing on “the essentials” (!?)

6.5 Sometimes automated tests fail / are unavailable

6.5.1 Too costly to update

6.5.2 Too unreliable to trust

6.5.3 Dependent on an ex-employee

6.5.4 But we still have to go ahead with a…

  • 6.5.4.1 demo
  • 6.5.4.2 release
  • 6.5.4.3 patch / upgrade

7 Things other team members can do to help

7.1 Being an Agile Manual Tester is difficult to do in isolation

  • 7.1.1 Many aspects of agile software development are geared towards collaboration
  • 7.1.2 Without collaboration, every team member feels the pain

7.2 Share everything

7.2.1 Examples
  • 7.2.1.1 Tangible
  • 7.2.1.1.1 Documents
  • 7.2.1.1.2 Repositories
  • 7.2.1.1.3 Notes from meetings

7.2.1.2 Intangible

  • 7.2.1.2.1 Time with the customer
  • 7.2.1.2.2 Solutions to common problems
  • 7.2.1.2.2.1 Setting up a local environment
  • 7.2.1.2.2.2 Educating new team members

7.2.1.2.3 Knowledge, experience and pain-points

7.2.2 An agile acid test

7.2.2.1 “I see we’re working on the same story, can I combine my info with yours?”

  • 7.2.2.1.1 Applies to every role combination
  • 7.2.2.1.1.1 Tester to tester
  • 7.2.2.1.1.2 Tester to developer
  • 7.2.2.1.1.3 Developer to tester
  • 7.2.2.1.1.4 (etc, etc)

7.2.2.2 “No, put it in your own document / repository”

  • 7.2.2.2.1 “Bad” agile

7.2.2.3 “Yes, feel free, would you mind reviewing my info whilst you’re adding yours?”

  • 7.2.2.3.1 “Good” agile

7.3 Improve manual testability

7.3.1 Log / report information that is useful for testers

7.3.2 Hidden “setup” screens

  • 7.3.2.1 For testing purposes only
  • 7.3.2.2 Enter / manipulate system data and state
  • 7.3.2.3 Simulate edge / error cases

7.4 Challenge manual testers

7.4.1 Provide a test build that’s already of high quality

7.4.1.1 Take unit testing seriously

  • 7.4.1.1.1 Make them part of “done”
  • 7.4.1.1.2 Run them regularly
  • 7.4.1.1.2.1 On every check-in

7.4.1.2 Fix bugs as soon as anyone finds them

  • 7.4.1.2.1 See The Testing Planet article, “10 reasons why you fix bugs as soon as you find them” (Matt Archer / Andy Glover)

7.5 Be able to deploy a test build on demand

7.5.1 “I’m ready to test the story you finished this morning that I see you’ve moved to ‘ready-for-testing’”

  • 7.5.1.1 “I’ll get it on the test environment for you by tomorrow”
  • 7.5.1.1.1 “Bad” agile

7.5.1.2 “OK, it’s already checked-in, run the automated build process and you’ll have it on the test environment in 15 minutes”

  • 7.5.1.2.1 “Good” agile

You might be interested in... 1 Day Python for Testers Course

