Community Stories: To Shift Left, Start Right

By Sandy Kwong

Like many others in the software testing profession, I was introduced to the idea of shift left testing/quality assistance only in recent years. And by “introduced” I mean our upper management handed us these terms along with links to a few relevant articles, and told the QA team that developers would start owning quality. Our job, then, was to “make it happen.”

In all my years in tech, I had never been given a more ambiguous mission. I desperately searched for more resources online, and I literally came across only the same four articles our management had already provided. Everything I read about this movement, though, sounded fine and dandy. I completely understood and embraced the concept. I mean, would I appreciate being looped in early, say from product kickoff? Of course. It sure beats being told at 9 am “we want to release this big hairy feature this afternoon” and attempting to whip out a non-existent test plan on a Google Doc that really should have been put together the week before.

Without any resources available on shift left, our entire team was left with the question of “okay, but...how?”

Fast forward to November 2019. I had the pleasure of attending my first ever TestBash conference in San Francisco. I spoke with a lot of QA professionals over the span of two days. We laughed, we shared war stories, we sympathized with each other. It was comforting to know that other professionals struggle with the same things I struggle with.

It was through a few of these conversations that I discovered that, oh wow, my team had actually made quite a bit of progress on shifting left. So, here’s one very practical piece of advice:

Request that developers close their own tickets, and comment on how they verified their changes.

BOOM! You’re welcome. Thanks for reading.

Okay, but seriously: by adding this one step located all the way on the right side of the development cycle, quality will start shifting left. Let me explain.

Scare Tactic

As seasoned testers, we take pride in sniffing out all the nooks and crannies around any given ticket. To an untrained mind, one acceptance criterion on a ticket means more or less exactly one test case. But we testers can name four or five use cases off the top of our heads within sixty seconds. Give us ten minutes and we’ll have a laundry list of considerations attached to the use cases.

And you know what? Developers know this about us! They’ve known and relied on our testing instincts, and that’s why they’re so comfortable with letting us discover issues for them to fix later. Because someone will catch their flaws. We act as a safety net.

Take away that safety net, and they will live in fear.

Now they’re faced with a choice: they can either stay in fear, or conquer that fear.

As developers also happen to be highly intelligent people, more likely than not they will choose to conquer that fear (as opposed to losing their jobs, I guess). They will start forming a logical path to achieve this.

Over time, here’s the logical path our developers came to form (note that the path moves from the right of a development cycle to the left):

Devs closing out their own tickets

This simple request of having developers close out their own tickets triggered a domino effect. In traditional waterfall development, testers are responsible for verifying that the dev’s work is done to spec, and are ultimately responsible for closing out tickets as “verified.” Testers are the ones giving their stamp of approval. Whether you realize it or not, this is actually a huge responsibility, because if anything goes wrong after a deploy, the first question asked is always “how did we not catch this during testing?” And all eyes are on QA.

If my memory serves me right, the first two months or so of devs doing this went by just fine on the surface. The directions given to me at the time were to largely leave them to struggle on their own. I made sure to tell them that I was always available for assistance, an offer that no one took seriously. So I let them be, and I observed.

One of the most notable things I observed was that developers would just close their tickets with no notes on how their changes were verified. This became a problem shortly after, when bugs started materializing and we would go back to the original story ticket to see who was responsible for verification. When asked how they went about verifying, the general response from the devs was some fifteen seconds of deep thought followed by “I can’t remember.”

By this point it became obvious that developers needed not only to close out their tickets, but also to provide details on how they verified them. And so it became a requirement that developers provide any available screenshots and logs as proof that tickets were, in fact, verified.
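What counts as acceptable proof will vary by team, and the article doesn’t share its exact format, so here is a purely hypothetical sketch of what a ticket-closing comment under this rule might look like:

```
Closing: verified on staging.
- Steps: created a new account, triggered the welcome email, confirmed delivery
- Evidence: screenshot of the rendered email attached; relevant server log excerpt attached
- Not covered: localized email templates (tracked in a follow-up ticket)
```

The point is less the template itself than that “verified” leaves a trail someone can follow months later.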

It was around this time that I came up with the initiative to set up regular in-sprint syncs with each individual developer to discuss testing needs for the tickets the team was working on. The purpose of these syncs is threefold:

  1. By talking through their work, the devs themselves started seeing the use cases to consider. I could also surface any edge cases and weird dependencies associated with their work (e.g. how does this feature work with this other feature?).
  2. Talking through their work also highlighted the fact that the tickets were maybe not as well-groomed as we’d like for them to be, as our discussions would surface unclear expectations and unanswered questions.
  3. Since I was supposed to be hands-off on testing, these syncs still allowed me to have a clear understanding of various features being built.

Over time, I could see the effect of these syncs bleeding into our sprint grooming and sprint planning sessions. Questions about how to test each ticket, requests for more detailed acceptance criteria, and factoring in time for integration testing with external teams (which is always a huge time suck) no longer came from just me. Anticipating that I’d be asking for such information during our syncs has “conditioned” the devs to just ask for it in advance. Music to my ears.

Other Considerations

There were two additional changes to our team’s workflow which also enhanced our shift left initiatives: a requirement for all P1 test cases to be automated, and an improved Jira workflow.

Automating all P1 test cases was a no-brainer now that devs owned testing, because no one likes manual regression testing. Yes, I said it. Manual regression testing is the worst.
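The article doesn’t show its test suite, so here is a minimal, hypothetical sketch of what “every P1 test case is automated” might look like in practice; the feature under test (`apply_discount`) and both cases are invented for illustration:

```python
# Hypothetical P1 (highest-priority) regression checks, written so they
# can run on every build instead of being re-executed by hand.

def apply_discount(total, percent):
    """Stand-in for a P1-critical pricing rule in the product."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_p1_discount_applied():
    # P1: a 10% discount on a $50.00 cart must charge exactly $45.00.
    assert apply_discount(50.00, 10) == 45.00

def test_p1_invalid_discount_rejected():
    # P1: out-of-range discounts must fail loudly, never silently.
    try:
        apply_discount(50.00, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for 150% discount")

if __name__ == "__main__":
    test_p1_discount_applied()
    test_p1_invalid_discount_rejected()
    print("all P1 checks passed")
```

Once checks like these live in the repo, “did regression pass?” becomes a question the build server answers, not a person.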

I also remember that, as a team, we complained a lot about our old Jira workflow. Silly questions like “if I find a ticket in the Done column, does that mean the ticket has been verified on staging, or that it’s been deployed to production?” Or “how can I tell if changes are deployed to staging and ready to be tested?” Then on one not-so-special day, our team finally sat down in a conference room and came up with an ideal workflow. Since implementing this new workflow, we haven’t had to deal with questions about the state of anyone’s tickets, and it is glorious.
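The article doesn’t spell out the columns the team landed on, but a workflow that answers both of those questions unambiguously might look something like this (hypothetical states, not the team’s actual board):

```
To Do → In Development → In Code Review → Deployed to Staging
      → Verified on Staging → Deployed to Production → Done
```

The design point is that “deployed” and “verified” are distinct states, so nobody has to guess which one a ticket in Done has actually reached.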

Final Thought

These new processes went over so well with our team that other teams on our floor caught wind of them and started implementing some of them as well. The entire shift took place over about a year, and it was definitely worth it. I have kissed manual regression testing goodbye and haven’t looked back since. My working relationship with my dev team is also pretty great; in fact, we occasionally grab lunch together like we’re friends or something.

Now that we’ve more or less conquered our shift left mission, perhaps it’s time for another challenge. Like a staring contest.

Author Bio

Sandy Kwong has been with her current company for the past decade, having spent the last four years in QA. Her past experience in customer service with the company allowed her to gain insight into general customer pain points, and has given her an edge in approaching use cases during test planning (e.g. “What will customers most likely be complaining about on social media?” is always a solid starting point). In addition to advocating for quality, she’s also a big advocate for having fun at work: she is part of a “Ministry of Fun” group in charge of organizing social events and team outings, and sings in an a cappella group with her overwhelmingly mediocre soprano voice.
