
Delivering quality and deploying on time when your team is small

Transform your team’s approach to quality from a late-stage gatekeeper process into a proactive, shared responsibility that identifies risks early.


When you don’t have time for “proper testing,” what can you actually do?

A few years ago, I was part of a small product team preparing to release a permissions-related change. On the surface, it looked straightforward. In reality, it touched critical areas like user access, data visibility, and trust.

We didn’t have a dedicated tester at the time. Everyone’s capacity was already stretched to the limit. The usual reaction would have been to slow down or attempt to test everything.

We did neither. Instead, we made a deliberate decision to test less but with much more conscious intent.

Happily, that release went out without any major issues. More importantly, it changed how we approached testing in every time-strapped team I’ve worked with since.

This article shares those experiences: what we tried, what failed, and what actually helped us deploy responsibly when time, people, and energy were limited.

The problem isn’t a lack of care. It’s a lack of focus on what’s important.

In small teams, testing rarely gets short shrift because people don’t care about quality. It takes a back seat because everyone is juggling multiple responsibilities and delivery pressure never lets up.

The questions that kept coming up for us were familiar ones:

  • How much testing is enough?
  • What should we focus on when we can’t test everything?
  • How do we reduce bugs without delaying releases?

Most advice I found assumed ideal conditions: dedicated testers, plenty of time, and clear role boundaries. That was not our reality.

What we needed was a way to make product-level testing decisions under real constraints without pretending we had more capacity than we did.

How we learned to focus on user impact, not test coverage

Three questions that made all the difference

Back to that permissions release.

Our first instinct was to list every possible test case. Roles, edge cases, configurations, states. The list grew quickly, and so did our anxiety.

We didn’t have the time to execute all of those test cases properly. So we paused and reframed the conversation.

Instead of asking “What should we test?”, we asked:

  • What would break user trust immediately if this failed?
  • Which failures would be visible and irreversible?
  • What could realistically be fixed safely after release?

In the case of this release, asking those questions led us to focus on two areas:

  1. Making sure users could not see data they weren’t meant to see. For example, preventing users from viewing other teams’ records or admin-only fields because of role or configuration edge cases
  2. Making sure users didn’t lose access they already had

We gave lower priority to cosmetic issues, lower-risk combinations of roles and feature states, and scenarios whose supporting code could be rolled back or patched quickly.

We also included checks that went beyond mere functionality:

  • Accessibility: verifying that permission-related messaging was readable by screen readers
  • Reliability: checking behaviour under partial failures
  • Performance: ensuring permission checks didn’t noticeably slow down core flows
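As a rough illustration, the two priority areas above could be expressed as lightweight automated checks rather than a long test-case document. The data model and helper functions here are hypothetical stand-ins invented for this sketch, not our actual system:

```python
# Hypothetical in-memory model standing in for a real permissions API.
RECORDS = {
    "team-a-record": {"owner_team": "a", "admin_only": {"salary": 90000}},
}

def visible_fields(record, user):
    """Return only the fields this user is allowed to see."""
    fields = {"owner_team": record["owner_team"]}
    if user["role"] == "admin":
        fields.update(record["admin_only"])
    return fields

def check_no_unexpected_visibility():
    """Priority 1: a non-admin must never see admin-only fields."""
    member = {"role": "member", "team": "a"}
    assert "salary" not in visible_fields(RECORDS["team-a-record"], member)

def check_no_lost_access(before, after):
    """Priority 2: no user loses access they already had."""
    lost = before - after
    assert not lost, f"lost access after release: {lost}"

check_no_unexpected_visibility()
check_no_lost_access({"read:own-team"}, {"read:own-team", "read:shared"})
```

The point is not the code itself but its size: two short checks tied directly to the risks that mattered, instead of dozens of role-by-configuration cases.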

The result wasn’t perfect, but it was safe, intentional, and aligned with the level of impact on end users if something went wrong.

Core user workflows behaved as expected. Users with valid access could complete their primary tasks without interruption. We did uncover rough edges in secondary flows, such as confusing error messages when permissions were partially applied. However, because these issues were visible but reversible, we accepted them and fixed them safely after release.

Replacing excessive documentation with product-focused checks

In another team, we tried to compensate for limited testing time by documenting more. Test cases multiplied. Confidence didn’t.

The problem wasn’t effort. It was how far testing had drifted from examining real user behaviour and risk.

Large test suites created the illusion of safety, but:

  • No one had time to read them properly
  • Developers didn’t understand why certain cases mattered, because the reasoning was buried in a multi-page test suite covering every role and configuration
  • Testers spent more time maintaining documents than thinking critically

So we changed our approach.

We replaced exhaustive test cases with short, product-focused checks tied to user intent. Instead of listing dozens of permission combinations, we asked a few questions about the fundamentals of the change, questions that were faster to validate and easier for the whole team to reason about, such as:

  • Can a new user successfully complete a permissions-dependent action (for example, accessing the data or feature they were granted) without guidance?
  • What happens if the action fails halfway through?
  • Is this behaviour consistent with what we have promised users?

These checks lived close to the work: in tickets, pull requests, and conversations.
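To make the shape of such checks concrete, here is a minimal sketch of how those three questions might be captured in code. The export workflow, function names, and error messages are hypothetical, invented purely for illustration:

```python
# Hypothetical permissions-dependent action: exporting granted data.

def grant(user, permission):
    user["permissions"].add(permission)

def perform_export(user, fail_midway=False):
    """Attempt the export; never leave work half-applied on failure."""
    if "export" not in user["permissions"]:
        return {"ok": False, "error": "You don't have export access yet."}
    if fail_midway:
        # Partial failure must be clearly reported and fully rolled back.
        return {"ok": False, "error": "Export failed. No data was changed."}
    return {"ok": True}

user = {"permissions": set()}

# Q1: can a new user complete the action once granted, without guidance?
grant(user, "export")
assert perform_export(user)["ok"]

# Q2: what happens if the action fails halfway through?
result = perform_export(user, fail_midway=True)
assert not result["ok"] and "No data was changed" in result["error"]
```

A check this small can sit in a ticket or pull request and be read in seconds, which is exactly why it worked better for us than a multi-page suite.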

That release made one thing clear to us: testing and documenting everything we could think of is not the same as protecting end users.

Fewer tests. Better decisions. Fewer surprises.

Sharing testing responsibility is a practice, not just a slogan

Like many teams, we tried saying “everyone owns quality.” It didn’t work. Testing still happened late, just with more guilt attached.

What actually helped wasn’t asking people to test more. It was making testing activities part of everyone’s normal work, instead of something owned by one role at the end.

  • Developers ran quick functional and reliability checks as they built changes, rather than waiting for formal handoff.
  • Testers joined early story discussions to surface risky assumptions and unclear user behaviour.
  • Product reviewed acceptance criteria with explicit attention to failure states, accessibility, and performance expectations.

Quality improved not because people became better testers overnight, but because risk was discussed earlier and shared across the team.

Speaking the same language about risk team-wide

What helped testers the most wasn’t a new process. It was a shared way to talk about risk.

We regularly asked:

  • What is the user impact if this goes wrong?
  • What is the cost of failure: trust, support load, revenue?
  • How easy is this to fix after release?
  • How much delivery pressure are we under right now?

This gave testers a product-backed way to say “this matters” or “this can wait” without sounding defensive or arbitrary. It turned testing into a decision-making role, not a gatekeeping one.

To wrap up: what I wish I had learned earlier

If you are testing in a small team, your job isn’t to eliminate risk. It’s to make risk intentional, visible, and aligned with users.

Good testing in small teams looks like:

  • Fewer tests, chosen deliberately
  • More conversations, fewer handoffs
  • Broader thinking about quality beyond bugs

From a product perspective, testing isn’t about perfection. It’s about protecting trust while the business keeps moving. And that’s something even the smallest teams can do well.

What do YOU think?

Got comments or thoughts? Share them in the comments box below. If you like, use the ideas below as starting points for reflection and discussion.

Questions to discuss

  • When you have worked under tight time constraints, how did you decide what not to test?
  • What signals do you use to decide whether a risk is acceptable or needs attention now?
  • Where has test coverage given you a false sense of confidence?

Actions to take

  • Pick one recent release and review it through a user-impact lens. What failures would have broken trust?
  • In your next planning or review, replace one “test everything” discussion with a conversation about risk, reversibility, and user impact.
  • Share this article with someone on your team and ask: what would we do differently if time were even tighter?

For more information

Ishalli Garg
Product Lead

I enjoy exploring how people solve problems, how quality shows up in everyday habits, and how teams build better experiences for users.
