Using an Agile definition of done to promote a quality culture

Discover how an Agile Definition of Done can promote shared ownership and incremental quality at every stage of the SDLC.


In her previous article, “When the tester's away… the team can test anyway!”, Emily shared her thoughts on avoiding handover documents in favour of helping the team think like testers, documenting testing activities, building release confidence by encouraging the team to consider acceptable risks for deployment, and getting UAT (User Acceptance Testing) involvement from internal stakeholders.

In this article, the journey continues, focusing on adding quality and promoting team ownership through structured conversations around the team's Definition of Done. The article uses a variety of Agile terminology, which is referenced at the bottom of the page.

Emily is a Principal Test Engineer and consultant, working across a variety of industries and teams. This influences her writing style: she uses the word engineer as a collective term for anybody contributing production code (software engineer, DevOps engineer, data engineer, etc.) and the word tester for a Swiss-Army-knife team member who can contribute automated tests and risk analysis, and facilitate mob testing (teaming) sessions.

How I incorporated quality milestones into my team’s definition of done

Introduction: What is the Agile definition of done, and why does it matter?

The "definition of done" usually refers to a document representing a shared understanding of all the criteria that must be met for a product increment, like a bug fix, user story or a feature, to be considered complete and ready for release. The Scrum guide adds that the definition of done is a formal description of the state of the increment when it meets the quality measures required for the product.

Use these steps to create a definition of done that ensures a shared understanding of the quality bar applicable to your projects.

Getting started: Understanding the team's use of an Agile task board

Often, teams orient themselves around Scrum, Kanban or Agile task boards, which provide a good starting point for discussions about testing at different stages of the software development life cycle (SDLC). This includes introducing shift-left testing (testing earlier, performing requirement analysis) and shift-right testing (testing later, with real users) into discussions about what the development team perceives as happening in each column of the board.

Approaching conversations with curiosity, I often use the board to understand the team's existing ways of working, pain points or where testing might be rushed or overlooked. Open conversations often reveal opportunities for testers to contribute through requirement analysis, regression testing, automation or investigating live issues through root cause analysis. Building these into the definition of done means the quality measures required are not the sole responsibility of testing professionals.

Testing the waters: What makes the feature “ready to test”?

If it’s difficult to talk about testing activities from that first step, it might feel as though features get thrown ‘over the wall’ to testers! One way to initiate conversations around the expectations the team holds of testing within the SDLC is to ask about the features nearly ready to be thrown over that imaginary wall.

If a feature has moved from "In Progress" to "Code review", this implies that some other criteria must be met for it to be considered code complete and ready for testing. Ask open questions about the process taking place before a feature is deployed into a test environment to surface implicit expectations. Ask about the testing that can’t be done locally, or where your work could overlap with what’s already been done. That approach will position you as helpful and trying to avoid duplication of effort, while quietly building trust and relationships with engineers.

Engineers might not know how to contribute to ‘quality’, or might not realise they are already contributing through code reviews, unit tests or snapshot tests. If engineers aren’t able to articulate requirements or identify activities that advance testing, pairing is also a great way to find out more. Pairing lets you observe how the team's engineers identify places for test coverage in inputs, data permutations or conditionals.

The first step: Identify a single improvement area

When I recently joined a different team, newly onboarded engineers were assigned to resolve bugs during their first few weeks. This approach is intended to familiarise them with the codebase, help them understand the system’s expected behaviours and enable introductions and conversations with the team and various project stakeholders.

I noticed that picking up these bug cards was a frustrating experience. When the bugs were reported by product stakeholders or other engineers, the cards assumed project context and lacked sufficient detail, such as where and how to reproduce the issue. This inspired an initiative to improve bug reporting so that cards included steps to reproduce (starting with the issue location), clearly defined expected behaviours and testable acceptance criteria. The whole team reviewed poorly written cards, performed root cause analysis and followed the "5 whys" process. This gave the team a greater awareness of which features were testable and the acceptance criteria required to know when a card would be done.
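As an illustration only (the details here are hypothetical, borrowing the hotel booking domain used in the worked example below), a bug card written against that bar might look like this:

  • Title: Search returns “no rooms available” when the check-in date is today
  • Location: Homepage simple search, test and production environments
  • Steps to reproduce: From the homepage, enter today’s date as the check-in date, 1 night and 2 guests, then select “Search”
  • Expected behaviour: Available rooms for tonight are listed with a price per night
  • Actual behaviour: “No rooms available” is shown, even though rooms exist for that date
  • Acceptance criteria: Searching with a check-in date of today returns the available rooms, and the same-day search case is covered by an automated test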

Applying the definition of done: a worked example

Following the success of creating better-quality bug cards, I was able to gradually start showcasing testing activities and the required quality of work throughout the SDLC. In this worked example, the columns (from left to right) are:

  1. New 
  2. Committed 
  3. In progress 
  4. Code review 
  5. Code complete 
  6. In testing 
  7. UAT ready 
  8. Done. 

This example goes through each column in turn, explaining how a feature card gets to each state and the quality that is built up incrementally.

New cards column

Often, new cards are created by just about everybody except you, the tester! This means starting to add quality from the left of the board, or even in the backlog, can be a real challenge.

Experience teaches me that it’s easier to rally support for unambiguous cards when others also feel the frustration of missing details, edge cases or conditions that aren’t fully understood. Using a shared challenge to push for a higher quality bar encourages the whole team to improve the process.

The “new” quality bar

When new feature or bug cards are created, they must answer a simple question:

“If the author wasn’t here (to explain what they’ve written), would the team still share a clear understanding of the expected result achieved by its users?”

If not, the team isn’t going to be able to write good acceptance criteria. To ensure a shared understanding, new cards should include:

  • A useful title and description
  • Details of the system behaviour from its primary user’s perspective
  • Any required inputs or outputs
  • A clear explanation of why the feature is needed

For example: 

Title: “As a guest user, I can view the hotel's availability and prices before making a reservation”. 

Description: From the homepage, include a new simple search to direct more users to the main booking user journey.

  • Inputs: check-in date, duration of stay (in nights) and number of guests
  • Outputs: number of rooms available per room type and price per night. 

The output allows potential customers to make an informed decision about their stay, but the real business value lies in the implicit quality characteristics of the simple form: it should be easy to understand, secure and fast.
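To make that shared understanding testable, hypothetical acceptance criteria for this card, written in a Given/When/Then style, might read:

  • Given a guest on the homepage, when they enter a check-in date, a duration in nights and a number of guests, then the number of available rooms per room type and the price per night are displayed.
  • Given a search that returns no availability, when the results load, then a clear message is shown rather than an empty list.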

Clear, detailed cards help the team to achieve a higher quality bar by reducing confusion and highlighting requirements that follow the INVEST mnemonic: independent, negotiable, valuable, estimable, small and testable.

Committed cards column

Working at a consultancy brings a requirement that clients commit to work items (features and bug fixes), as this represents how their budget is being spent. With that in mind, the team adopts a column on the board titled "committed", reflecting the client's sign-off on a card's acceptance criteria, scope and budgeted effort.

Everybody in the project team should read the committed cards to understand them, including technical dependencies and the types of testing required. A range of perspectives helps to prevent misunderstandings and improve the likelihood that risks are identified, from the creation of test data to the usability of a feature.

Committing to testing

A Maersk Line case study demonstrates how an organisation can successfully “build the right thing and build the thing right” (read more via the reference links). To do this, engineering teams need to share a vision of success with the organisation. Clients enable this by sharing project aims so teams can deliver real value. In my current project, the key aims are to build trust and create a seamless user experience, because users are currently frustrated by logging into multiple external systems with inconsistent designs. This vision acts as a north star, helping to make sure the right problems are being solved.

Building the thing right also means having a culture where testers aren’t asked, out of a lack of trust in the team, how something has been tested or why a bug was missed. This starts in the committed phase, where engineers flag risks, document clarifications and identify test cases or scripts. These are all added to the card, which acts as a single source of truth (with its own pros and cons!).

Typically, we use a card field called “questions and clarifications” to document risks in short bullet points. For example:

  • Returning the price per night doesn’t include local taxes - should we add “excludes taxes and charges”?
  • The API call for availability and price per night can’t be cached
  • Adding more filters to the first search will impact future performance

This adds to overall team quality in lots of subtle ways, as everybody chips in to build the best possible products. It also helps to avoid conflicts as engineers aren’t hit with extra requirements, especially relating to quality characteristics (non-functional requirements) or edge cases, which might not be explicit. This might sound documentation-heavy, but in a consultancy context, it’s important to show the client testing outputs and ensure the whole team can pick up tasks if somebody is unavailable, meaning quality isn’t a single responsibility.

Cards in the ‘in progress’ to ‘code complete’ columns

Cards in progress are actively being worked on by engineers, moving through code review and into code complete once feedback has been actioned. Throughout, testers should observe, question, and highlight things that contribute to quality while the feature is still taking shape. Ultimately, the engineers implementing features need to lead quality in this phase. But that doesn’t mean testers can’t influence it.

I believe this part of the SDLC is where influencing quality and building relationships across the team can have the biggest impact, especially as testers can’t be across everything. This can be done through several ways of working:

Engineers marking their own homework

I don’t agree with the phrase “developers shouldn’t mark their own homework” when it comes to validating the known expected behaviours of features. Instead, letting engineers do exactly that can help build a shared language around quality. Testers might notice:

  • Feature cards updated to indicate where acceptance criteria have been actioned
  • Test cases updated or added by engineers as features evolve
  • Test cases supported by evidence such as images, GIFs or screen recordings during engineer-led testing

This helps testers know where to focus their efforts, or when to allow engineers full ownership of low-risk features.

Highlighting code complexity

When engineers implement complex features, they gain a lot of context that could aid different types of testing. For example, hotel rooms are now displayed by (data-driven) popularity and then by price (lowest to highest). The engineers implementing the ordering logic (twin, double, family, suite) should ensure this is documented for others, especially if the feature is public-facing and might need to be part of a user guide. 

A quality byproduct of this habit is that engineers might talk a tester through their code, highlighting conditionals, loops or error scenarios, which helps to identify gaps and test cases.
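As a minimal sketch of what such a walkthrough might cover (the types and field names here are assumptions for illustration, not the team’s actual code), the ordering logic could look something like this in TypeScript:

  // Hypothetical room model: popularity is a data-driven score, higher = more popular.
  interface Room {
    type: 'twin' | 'double' | 'family' | 'suite';
    popularity: number;
    pricePerNight: number;
  }

  // Order rooms by popularity (highest first), then by price per night (lowest first).
  function orderRooms(rooms: Room[]): Room[] {
    return [...rooms].sort(
      (a, b) => b.popularity - a.popularity || a.pricePerNight - b.pricePerNight
    );
  }

Even a small function like this prompts useful test design questions, such as how ties in popularity should be broken.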

Code analysis

At code review, checks such as static analysis, linting, dependency scanning and unit test thresholds stop bad code at the gate. Internal quality ensures software is easier to modify and maintain, and it also means features reaching the test environment aren’t about to be refactored immediately, which avoids retesting.

If you’ve ever used SonarQube, you’ll know it can highlight code smells and code complexity, prompting conversations between engineers about the business logic being expressed in code and the edge cases and error paths that might otherwise go unnoticed.
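As one hedged illustration of such a gate, assuming a TypeScript project using Jest (the tool choice and thresholds below are assumptions, not necessarily what the team runs), a unit test coverage threshold can fail the build automatically rather than rely on reviewer discipline:

  // jest.config.ts: fail the build if coverage drops below the agreed bar.
  // The 80% figures are illustrative only.
  import type { Config } from 'jest';

  const config: Config = {
    collectCoverage: true,
    coverageThreshold: {
      global: {
        branches: 80,
        functions: 80,
        lines: 80,
        statements: 80,
      },
    },
  };

  export default config;

Similar gates can be configured in whichever linting or static analysis tools the team already runs at code review.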

In the testing column

It might sound strange to write about how testing adds quality to the SDLC, but bear with me! If you’ve ever worked in a team where it feels like features are thrown over that imaginary wall, there are things testers can do to influence their teams and shift that dynamic.

Conversation starters

Instead of giving standup updates like “I’m writing automated tests” or “I’m testing the booking form”, share what problems and risks you’re exploring, the tools you’re using to do that and how it’s going. This can start more detailed conversations about focus areas, potential bugs and what you’ve learnt about the system, and it begins to demystify the art of testing while giving engineers an appreciation for the role.

Testimony time

Document the types of testing being done, even if this falls under broad headings such as API tests, automation, device, cross-browser, accessibility, performance, load or security testing. This showcases that testing is not just validation. Provide evidence of the testing done so that others gain information and understanding.

Another way to document testing is to link test cases to cards. Then if (when?) tests fail, the team can trace the failure back to the business logic that should be implemented, highlighting the difference between required updates and regression bugs.
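As a sketch of what that traceability can look like, assuming Playwright and a hypothetical card ID, base URL and page structure:

  import { test, expect } from '@playwright/test';

  // The card ID in the title lets a failing test be traced back to the
  // business logic described on the card (BOOK-123 is a made-up reference).
  test('BOOK-123: guest sees availability and price per night after searching', async ({ page }) => {
    await page.goto('/'); // assumes a baseURL is set in playwright.config
    await page.getByLabel('Check-in date').fill('2025-10-01');
    await page.getByLabel('Nights').fill('2');
    await page.getByLabel('Guests').fill('2');
    await page.getByRole('button', { name: 'Search' }).click();
    await expect(page.getByTestId('room-card').first()).toContainText('per night');
  });

If this test fails after the search behaviour is deliberately changed, the linked card makes it clear whether the test needs updating or a regression has been introduced.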

Show and tell (show and test?): How to share your testing insights with the team

Promote and showcase your work before pairing with engineers, to encourage and enable others to do exploratory testing, write automated tests and debug any regression issues.

For example, our bug pattern analysis showed that many issues were tied to users seeing rooms not applicable to their search after applying pet-friendly accommodation filters. By highlighting the cause of bugs, engineers were able to see where additional attention was needed and help build up end-to-end test coverage.
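A sketch of the kind of end-to-end check that could grow out of that analysis (the selectors, URL parameters and card ID here are hypothetical):

  import { test, expect } from '@playwright/test';

  // Regression check for the bug pattern above: every room shown after
  // filtering should actually be pet friendly.
  test('BOOK-207: pet-friendly filter only returns pet-friendly rooms', async ({ page }) => {
    await page.goto('/search?checkIn=2025-10-01&nights=2&guests=2'); // relative to the assumed baseURL
    await page.getByLabel('Pet friendly').check(); // assumes a labelled filter checkbox

    const rooms = page.getByTestId('room-card');
    await expect(rooms.first()).toBeVisible();

    // Assumes each room card displays a "Pet friendly" badge when applicable.
    for (const room of await rooms.all()) {
      await expect(room.getByText('Pet friendly')).toBeVisible();
    }
  });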

Similarly, testers can demonstrate their value by adding to overall quality when automated tests prove their worth and give engineers confidence that testing finds issues before they impact customers. This is helped by avoiding long pipeline run times, flaky tests and ensuring tests run on an appropriate trigger.

User Acceptance Testing ready column 

UAT is designed to ensure a product meets real needs before being released to all users. It’s often approached differently by organisations. Cards marked as UAT ready must be user-facing and available in an appropriate environment. Some teams use beta tests or feature flags, whereas others get UAT sign-off from a group of project stakeholders. In a past project, UAT was considered passed when a feature had been live to a third of users for a month without any critical bugs found.

Teaching engineers or business stakeholders how to perform UAT on features they didn’t build helps to increase test coverage, spreads system awareness and helps to spot assumptions, thanks to the additional context they bring.

Testing like a tourist: Exploratory testing tips 

Test like you’re doing it for the (Insta)gram. Seek out all the beauty spots: focus on the shiny new features and their front-end implementation, their aesthetics, consistency and usability. Follow the trends along well-defined user journeys on your own devices and OS version.

There’s always one person who sticks to the guidebook, and user acceptance testers might do the same. Pull up the release notes or the user guide and follow each step. Doing this might highlight an edge case between the previous and new functionality.

Testing your feedback loops

UAT can help teams analyse the observability put in place, for example through logs, metrics, graphs and alerts. Having this as part of the definition of done helps teams refine quality before features reach a wider audience, and before issues are missed in the noise of production logs.

To summarise the definition of done

A completed item tells stakeholders that the quality measures required for the product have been met.

Below are some examples of what a definition of done can include, drawing on our own DoD alongside some generic entries.

  • All acceptance criteria have been tested and evidenced.
  • Front-end features match the style guide.
  • Code meets the project’s required quality characteristics, e.g. a page load time of less than one second.
  • All cards have been exploratory tested.
  • Tests have been executed on appropriate devices and/or browsers.
  • New business logic is tested through unit tests and API, integration or end-to-end automated tests.
  • The code has been checked for security vulnerabilities. 
  • New automated tests are tagged to run on an appropriate schedule, they pass in the pipeline before being merged, and pipeline run time is evaluated at code review.
  • The feature has been used (in UAT) by a variety of users without training and by users following the release notes or user guide.
  • The card does not have any critical bugs against it.
  • Any refactoring or code clean-up has been completed. 
  • The card is in a deployable state. 

References

 

Principal Test Engineer
I have a sixth sense for bugs, probably due to my experience as a dev (introducing them)! Principal Test Engineer and Playwright fan-girl.