
One Week to Develop, Test and Release a New Website

What could be at risk when you only have a week to get a website tested?

Building a website with static content should be a trivial task; there are many tutorials suggesting you can do it in less than an hour. However, within a company that has a number of interested parties, one of them being the CEO, it's not as simple as it sounds. Throw in scope creep, responsive design and a requirement for 24/7 support, and the not-so-simple task becomes harder still. By sharing my experience of working in a fast-paced, high-pressure development environment, I aim to highlight the heuristics you can use when dealing with tight deadlines, and how quality is perceived by business owners.

When you are delivering a website for a company, there are ordinarily some implicit requirements that will need to be built in. To list a couple: 

  • Support processes to limit downtime are a must, with alerts and observability built in - step up the QA Lead, who connects the alerts to their phone.
  • Rendering performance needs to be respectable - poor performance would hurt your website's search engine ranking (SEO).

Spoiler alert: we managed to deliver the website on Friday at 5:30pm, but it came with quite a few sacrifices from a quality perspective, and a very anxious weekend waiting for alerts to ping on my phone.

 

Day 1 - It’s Only Static Content, They Said

Here’s how it all started: I got a call on Monday afternoon from the Product Manager. They explained that they had put together a team to deliver a website - a simple site used for investor demos, helping to generate funding for the startup. They then casually mentioned that the deadline was the end of the week. Safe to say, my reaction involved a few swear words. They aimed to ease my anxiety by stating, and I quote, “It's only static content, so it doesn't require much testing”… So what, exactly, is static content?

“Static content is any content that can be delivered to an end user without having to be generated, modified, or processed. The server delivers the same file to each user, making static content one of the simplest and most efficient content types to transmit over the Internet.” — StackPath

My own definition of static content extends beyond simply serving the same content to every user: I would also count static images and even dynamic elements (usually driven by JavaScript) as part of a “static” site. Nor is the content truly the same for each user; many variables, including OS, browser and device, affect the end user experience.
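To make the “same file to each user” idea concrete, here is a minimal sketch, assuming a modern Node.js runtime with ES modules, of a server doing exactly that: every visitor receives identical bytes, and any variation in experience happens on the client.

```typescript
// Minimal static server sketch: the same index.html is sent to every
// visitor, with no per-user generation or processing on the server.
import { createServer } from 'node:http';
import { readFile } from 'node:fs/promises';

const server = createServer(async (req, res) => {
  // Identical bytes for everyone: this is what makes the content "static".
  const html = await readFile('index.html');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
});

// Any differences in what users see come from their OS, browser and
// device interpreting those same bytes, not from the server.
server.listen(8080);
```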

They continued by saying someone from the team would be in contact on Wednesday (day 3 out of 5) to discuss testing requirements. Immediately after the call, I started writing some questions down to proactively reach out to the team lead. They looked something like this:

  1. Why?
  2. Why now?
  3. Why me?

I didn’t really write those down; they were just in my head. I drafted some real questions after a little time to digest the information:

  1. Where will the site be hosted?
    a. What does the business actually care about?
    b. Where can I set up alerts, to catch the bugs I will miss?
  2. Where are the designs and content?
  3. Does the site need to be responsive?
  4. When will I get a test environment?

All of the above would help me focus my testing where it’s needed most. 

Safe to say I didn’t wait until day 3 to raise my questions.

 

Day 2 - Creating a Test Plan Without Designs

By day 2, the questions above had been answered, but I was still very anxious about all the possible unknowns. One of these was the designs for the site. I hadn’t seen any designs at all, so I had no visual idea of what I was working with. The idea was to be "agile" about it: design the homepage, develop the homepage, test the homepage. This sounded great in theory, and I was excited to work in such a fast-moving, collaborative development process. Then I received the designs of the homepage (designs - yes, you read that correctly, meaning more than one): they were responsive, with different layouts for desktop and mobile users. Relating back to the definition of static content: does responsive design change how the content is delivered to the user? Is it now modified depending on whether they arrive on a mobile or a desktop? The designs also included images, videos and sliders (or carousels, whatever you want to call them). The complexity of testing jumped from what I had imagined - images and text with no user interactions - to far more, which immediately pushed the scope well beyond the pre-allocated two days of testing (not including bug fixes).
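To give a flavour of what “responsive” added to the scope, here is a minimal sketch, assuming Playwright and a placeholder URL, that renders the same page at a desktop and a mobile viewport size and checks that the hero section is visible in both:

```typescript
// Hypothetical responsive smoke check: the same "static" page must
// render correctly at both desktop and mobile viewport sizes.
import { test, expect } from '@playwright/test';

const viewports = [
  { name: 'desktop', width: 1440, height: 900 },
  { name: 'mobile', width: 390, height: 844 },
];

for (const vp of viewports) {
  test(`hero section is visible on ${vp.name}`, async ({ page }) => {
    await page.setViewportSize({ width: vp.width, height: vp.height });
    await page.goto('https://example.com'); // placeholder URL
    // 'header' is a stand-in selector for the hero/banner section.
    await expect(page.locator('header')).toBeVisible();
  });
}
```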


Before this, I had been thinking about not spending time writing a test plan and just creating it retrospectively. At this point, however, I knew I wanted something that documented what was in scope and out of scope, so it would be clear that I never intended to dedicate time to testing dynamic elements, along with the risk(s) associated with that. I needed the test plan to be concise and easy to digest, which reminded me of The One Page Test Plan I had seen on the Ministry of Testing Club.

Putting the test plan on a wiki page means it can be versioned, its changes tracked, and it can be easily shared - all of which mattered given the constantly changing and moving requirements I was working with.

 

Day 3 - Waiting Anxiously for a Test Environment

I spent the day eagerly waiting for a test environment. Nothing happened from my perspective; it was just frantic planning and arranging the troops for the testing ahead.

 

Day 4 - Starting Testing: 1 Day Before Release!

A single tester covering the website in two working days was simply not enough time. Still, I created some sort of a plan:

  • Core user journeys mapped out by speaking to product owners, the recruitment team and marketing
  • Specific areas to focus on, based on discussions with business owners around the risk and impact on the user

Fortunately, I have years of experience with heuristics to fall back on, which again helped me focus on the riskiest areas and target the more complex features.

To help with testing, I had primed other members of the team and prepared a form of User Acceptance Testing. However, another curveball: once the environment was made available and the homepage appeared on my screen, I realised the designs within Figma, which contained comments stating the specific requirements, had not reflected the complexity of the website for testing. For example:

  • The header banner was a video background, not just a static image, which had to autoplay and be full width on all devices
  • The awards section scrolled automatically and should scroll infinitely
  • The “about us” image should zoom in on hover.

None of these elements is majorly complex, but remember the assumption I made about the original requirements: “static” images and “static” elements. The reason I raised concerns about the complexity of these elements is that it’s fine to test them on my machine, but the scope of testing them across browsers and devices is nigh on impossible within two days, even with a team of people to help. I therefore updated the test plan to mark these elements as high risk and requested that, if any bugs were raised around them, they would be descoped and replaced with static elements. Imagine how that was received by business owners; seeing the sincere stress and anxiety I was in, they agreed to accept the risk.
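For illustration, a check like the sketch below (assuming Playwright and hypothetical selectors) was perfectly feasible on my machine; the problem was repeating it across every browser and device combination within two days.

```typescript
// Hypothetical check for the riskiest element: the header banner video
// must autoplay (which browsers generally only allow when muted) and
// span the full viewport width.
import { test, expect } from '@playwright/test';

test('header video autoplays at full width', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const video = page.locator('header video');

  // Muted is a precondition for autoplay; paused=false means it is playing.
  await expect(video).toHaveJSProperty('muted', true);
  await expect(video).toHaveJSProperty('paused', false);

  // Full width: the rendered video box should match the viewport width.
  const box = await video.boundingBox();
  expect(box?.width).toBeCloseTo(page.viewportSize()!.width, 0);
});
```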

 

Day 5 - Involving the Whole Team in Testing

A few people got involved in testing whom I ordinarily wouldn’t expect to be involved in a typical delivery of a static website. Cross-browser testing was performed by the product owner and the DevOps engineer, who had a MacBook and an iPhone. Windows desktop testing was performed by the Head of Product, who also checked the copy and that the right images were in the right places. I also reached out to the rest of the company to perform a form of alpha or crowd testing and execute ad-hoc tests. In retrospect, this can create more work, given the number of minor bugs that were raised, as well as usability improvements which would definitely not be fixed now and will probably never be fixed. Ideally, you would have time to create structured instructions on how to test, where to test and how to raise bugs.

The website wasn’t ready - there were many bugs and inconsistencies with the designs - but it was “signed off” by stakeholders. Not that “sign off” is a tester's responsibility, but my reputation and pride were at stake, so it wasn’t a decision I was happy about, though I was willing to accept it, knowing I had raised my concerns loudly and highlighted all the risks of this 5-day delivery approach.

Sacrificing Quality Is Hard to Accept

As mentioned, I had to sacrifice quality. I focused my testing on the parts of the system that would be shared and visible at investor demos, concentrating on above-the-fold sections to make first impressions as positive as possible. This meant that many areas of website testing I would usually cover were sacrificed.

While exploring the system, taking journeys through it as the CEO and as an investor, I had to set aside performance and usability issues. However, a quick scan using Google Lighthouse gave a brief snapshot of the current state, and I took notes on usability without raising them as bugs for now. I shared the notes openly on Slack and raised Jira tickets to make sure I could come back to them post-release. The designer agreed about the usability issues, and the tech lead agreed the performance was poor. These were the kinds of quality issues I was willing to sacrifice, with the caveat that we would come back and address them once the immediate deadline had passed.
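For anyone wanting to reproduce that quick scan, here is a minimal sketch, assuming the lighthouse and chrome-launcher npm packages in an ES module context, that runs a performance-only audit programmatically:

```typescript
// Minimal sketch: run a performance-only Lighthouse audit and log the
// category score (0 to 1). Assumes top-level await (ES modules).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port, // audit through the Chrome instance we just launched
  output: 'json',
  onlyCategories: ['performance'],
});

if (result) {
  console.log('Performance score:', result.lhr.categories.performance.score);
}
await chrome.kill();
```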

However, because I wanted to retain professionalism within the healthcare sector and be able to react quickly to live issues, there were two quality aspects I wasn't willing to sacrifice:

  1. Accessibility (automated checks only - see the sketch below)
  2. Monitoring errors and logs with Azure
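Those automated accessibility checks were along the lines of the sketch below, assuming Playwright with the @axe-core/playwright package: fail the run if axe-core reports any serious or critical violations.

```typescript
// Hypothetical accessibility smoke check using axe-core via Playwright.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage has no serious accessibility violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL
  const results = await new AxeBuilder({ page }).analyze();

  // Only fail on the most severe findings; minor issues go on the backlog.
  const serious = results.violations.filter(
    (v) => v.impact === 'serious' || v.impact === 'critical'
  );
  expect(serious).toEqual([]);
});
```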

 

Lessons Learned From Delivering a Website Too Quickly

Don’t assume anything. I fell into this trap many times within just one week! Fool me once… The assumptions I made around static content meant I was not as well prepared as I could have been, and I had not raised the risks to the business at the earliest stage. You will also inevitably have to sacrifice quality under these tight deadlines; I knew this was going to be the case, but I didn’t quite realise how much I would have to sacrifice. Logging and monitoring are powerful tools in this scenario, as they provide clear data on risk areas. They also meant we could move quickly post-release and focus our attention on the bugs occurring most often in production.
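As a sketch of what that monitoring can look like in the browser, assuming the @microsoft/applicationinsights-web SDK and a placeholder connection string, error and page-view telemetry can be wired up in a few lines so production bugs surface as data rather than guesswork:

```typescript
// Minimal sketch: browser-side telemetry with the Application Insights
// JavaScript SDK, so errors and page views show up in Azure.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    // Placeholder: the real connection string comes from your
    // Application Insights resource in the Azure portal.
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000',
    enableAutoRouteTracking: true, // record page views automatically
  },
});

appInsights.loadAppInsights();
appInsights.trackPageView(); // record the initial page load
```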

I would never want to go through this experience again: it made me anxious about the worth of testing in the eyes of the business, and embarrassed about the quality of a website I had been involved with. Even though the website was released “on time”, there were many bugs, many design inconsistencies and many performance issues. What I learnt from this experience was that the business had much lower expectations around quality than I did.

Resources:

  1. Static Content - StackPath
  2. Usability - Nielsen Norman Group
  3. Web Performance - Mozilla (MDN)
  4. The One Page Test Plan - Claire Reckless
Lewis Prescott

QA Lead

I'm an experienced QA Lead at Cera Care (one of Europe’s fastest-growing companies), having worked across industries including healthcare, non-profit, retail and PropTech. I am also a course author on Test Automation University and Udemy; sharing my knowledge is a passion of mine.


