
Me, Myself and UI - My experiences with UI testing at HM Land Registry

Reduce test flakiness, make your CI pipeline more robust, and your test suites more efficient.


Disclaimer

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of HM Land Registry or the wider UK Civil Service.


Some Background

I've worked with HM Land Registry (HMLR) for over five years. In that time I worked on the Local Land Charges (LLC) programme [7]. It included four delivery (scrum) teams from both HMLR and external providers. We worked to migrate and standardise information from local authorities into one central register.

I'm still involved with the programme but not embedded in the team anymore. I'll mostly cover my learning from two years embedded in those teams, and how I've applied it to other teams.

We worked to deliver two key services:

  • A citizen service to buy information
  • A professional service to maintain the information

I'll explore our challenges, thinking, and lessons learned, and how the approaches to many of the problems boiled down to research and experimentation. We looked at design patterns, and what others had already tried: lines of thought that, looking back now, are hardly groundbreaking. I hope that by sharing this story you can see that change and improvement doesn't need to wait for a new project. It is worth trying to improve all the time.

Like any project, the timeline of events isn't linear. Many of the things we tried happened around the same time.


CI Builds Are Failing

You guessed it, I'm talking about flakey tests. Like many other teams, we got to the point where some tests seemed to fail at random. When this became the norm rather than the exception, we knew something had to change. We had some success fixing these tests, but it was short-lived. A small group was analysing and fixing them alongside other work.

We were fixing tests after they were written and merged into our test suite, not preventing them from being written in the first place. We reacted to symptoms but didn't address the root cause. This was time-consuming, and it didn't prevent flakey tests.

What it did do, however, was force us to look at the tests and architecture in more detail. We based our test suite on the test skeleton [1]. We could create tests but lacked structure and consistency in how we should do it. More on that a bit later.

When I looked at our test estate I saw that many tests could be improved. Some were perhaps appropriate at some point in the past, but not anymore. Tests had become bloated with many assertions, and so had no clear purpose. Other tests used some "interesting" approaches to data management and other quirks. The suite showed us the debt we'd built up over the months of patching the tests.

As an example, the suite created users at startup, which most tests used at some point. This reused existing API tests to create users for the UI tests, which meant we had introduced dependencies between tests. It was something I implemented early on as a simple way to manage test users. Over time, as people added more tests, they added more users to this setup. That approach wasn't appropriate or sustainable anymore. I implemented it as a short-term solution, but we never revisited it, and as such it became technical debt.

Overall the standard of our testing was high. I'm proud of what we did. I learned more about automation and code design working on that project than I had anywhere else. I learned by making mistakes, dealing with the consequences, and getting better. It's fair to say a large proportion (most) of those interesting choices were mine. Like anything, they were choices that made sense at the time, but less so as time went on and we learned more.

Until we looked at our approach, we couldn't see these types of issues. We needed to rethink. How could we remove flakey tests when we were actually introducing them all the time? Question after question surfaced, and we needed to tackle them.

In general, we concluded:

  • We had lots of UI tests

    • What are they doing for us?

    • Were they all relevant?

  • Most of our tests were not atomic

    • Too much being done in a single test

  • Tests were not independent

Thinking back to conversations with my colleague Paul, we often talked about how we could apply good engineering practices. What is the next thing we should do? How do we create worthwhile automation? Design patterns, Object Oriented Design (OOD), and standards were often the topics we debated.

Why Do We Have This Test?

Asking "but why?" turns out to be a pretty effective approach to most things. It's also pretty annoying to deal with, so it earns bonus points. Faced with lots of tests, we needed a way to categorise them. If a test was checking lots of things, we immediately flagged it for further review. Tests written earlier in the project tended to cover low-level actions. These tests generally didn't have much value anymore, as we tested the key functionality elsewhere. So in the bin they went! After some due diligence, of course.


Content Checks Rant

Speaking of things that are low value and belong in the bin ...

Page content-only checks are garbage, change my mind.

This is anecdotal. Your context and mileage may vary. I removed tests that were only checking for content. I did this with extreme prejudice because, in my view, they brought us no value. They took up execution, maintenance, and development time. Generally, content defects were of low priority, which alone should be reason enough not to have these tests at this level.

Consider this simple scenario. An automated test passes if some text is present on the screen. This test doesn't explicitly verify that the text is where it should be on the screen, or that it is in the correct font. If this test passes, does it tell us much? In my view, no it doesn't. We'd need to add a lot more checking to the test to get more than simple verification. This scenario can be tested in different ways that may be more appropriate, whether by a human or using other automated checks at different test levels.
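To make that concrete, here is a minimal sketch of the kind of content-only check I mean, assuming Selenium WebDriver in Python. The URL and text are illustrative, not from our real services.

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get("https://example.gov.uk/search-results")

    # Passes as long as the phrase appears anywhere in the page source.
    # It says nothing about where the text sits, how it is styled, or
    # whether the page is otherwise usable.
    assert "Search results" in driver.page_source

    driver.quit()

All this really tells us is that a string exists somewhere in the HTML.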

A manual check was often more effective than automated tests bloating the suite, which supported my question of why these tests existed at all. The answer: we created the tests without thinking enough about their value. I banned these tests from our suite. We rejected merge requests unless there was justification. That did keep those tests out, because nobody could articulate why they were valuable to us.

Nowadays, visual testing [8] is far more sophisticated, offering more than pixel comparisons by using AI and a host of advanced features. Tools offer much smarter checking, aiming to simulate how a human would check, i.e. not just the HTML but also how things appear. If we had access to that tooling, this rant would be redundant; the tests never would have existed. It's something I fully intend to explore in future.


Refactor And Delete Tests

I didn't include that rant for clickbait. The experience of removing tests, and saying no to new tests, gave me the confidence to delete more tests. That might sound odd to some, and feels odd to me now. Why wouldn't you remove things that aren't relevant? As things change, so should our tests. Code gets refactored and deleted all the time, and the same should be true of our tests. I'm not suggesting we delete tests without thought. We should consider what's important, and which tests are valuable.

In my experience, it's a problem of perception. Once we wrote a test, it had implied value: it implied it covered some risk. There was a fear that if we didn't run all the tests, the ones we left out would have caught issues. We clung to a "run all the tests" mentality rather than a "run relevant tests" mentality.

To move forward, we ensured our key flows were tested quickly and removed many of the larger regression tests. For example, we kept the "buy some data" and "update the data" flows, and removed tests that checked the form error messages. In other words, we focussed more on acceptance flows, rather than system-level tests.


CI/CD And Running Tests

We had a good understanding of which tests were important for CI and which were good for larger regression runs. I wanted to reduce the number of tests run in our CI/CD pipeline so we could get quicker feedback. I identified areas of lesser risk or infrequent change.

For example, our account management features hadn't changed in about three months. We still ran a robust set of tests against this feature, even though we weren't making changes in that area. Did we need to run all our tests against this feature? Of course not. Did we need to run any tests against it? That is a more interesting question. I would say we didn't. But, to manage risk and people's expectations, I chose to run a few key tests. A decent compromise in my book.

We were being more selective about which tests needed to run "all the time" in CI. We could change this if needed.

Once again, I'm not suggesting we exclude tests on a whim, but rather that we think critically about the value of those tests.
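As a rough illustration of the split, test tags or markers are one way to express which tests run where. A minimal sketch using pytest markers (our suite was Cucumber-based, where tags play the same role; the names here are illustrative):

    import pytest

    # Markers would be registered in pytest.ini to avoid warnings.

    @pytest.mark.smoke
    def test_buy_data_key_flow():
        ...  # key acceptance flow: runs on every CI build

    @pytest.mark.regression
    def test_account_management_edge_cases():
        ...  # lower-risk, rarely-changing area: larger regression runs only

    # CI pipeline:        pytest -m smoke
    # Nightly regression: pytest -m "smoke or regression"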


What Is The Point Of This Suite?

Expanding our previous line of questioning: if we want to know why we have a test, surely we should ask why we have a whole suite of tests. It's easy to get swept up in writing and maintaining tests, and forget to do the testing and use the most appropriate tools for the job.

At this point, we'd removed lots of tests that didn't belong in our suite. But we didn't have a shared understanding of what should have been in there to begin with. Hardly surprising that we ended up here. How could we get the right test at the right time?

We worked out our goals for the suite by asking more questions. Our primary goal was to test the key flows of our services. Maintaining detailed regression tests was a secondary concern. Getting that quick feedback in CI was generally preferable to long test runs where most tests weren't relevant.

We still needed something to help us keep thinking about the goal. We worked out that we could remind ourselves in many places to think about our tests. We incorporated this into our peer reviews, analysis, and our story mind maps. Nothing prescriptive, but prompts to stop and think about things.


Test Levels

As we developed what the goal of the suite was, we identified tests that didn't sit well at the UI level. Over time, we had included API tests in our UI testing suite, due to a lack of guidelines, delivery pressures, and a habit of using UI tests for anything that's not a unit test. I can't pin down when this happened. I suspect it happened gradually, and once they were in the suite, it set a precedent: an example that people followed. Well, why wouldn't they?

Using the "Test Pyramid" [2] as a guide, we reviewed our tests (yet again). We weren't dogmatic about following the model but used it to prompt discussion. Can we test this at the API level? Is there any benefit to testing it at the UI level? Can we minimise the number of UI steps? This all helped us figure out at which level our tests should live.
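For example, a validation rule that a UI test would exercise by filling in and submitting a form can often be checked directly against the API. A hedged sketch in Python using requests; the endpoint and payload are hypothetical, not our real API:

    import requests

    def test_charge_validation_rejected_at_api_level():
        # The same rule a UI test would reach through a browser, checked
        # here with a single HTTP call and no UI steps at all.
        response = requests.post(
            "https://api.example.gov.uk/charges",
            json={"charge_type": "", "authority": "Example Council"},
        )
        assert response.status_code == 400
        assert "charge_type" in response.json()["errors"]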


Independence Day (And Months)

We removed the low-hanging fruit and were being a bit smarter about running tests. But we still had flakey tests. Sure, the suite ran a bit faster and we had fewer tests to maintain. That was positive, but not what we wanted to resolve.

I've mentioned the user account management "solution". It encouraged dependent tests. It turns out we had similar problems throughout the suite. We needed our tests to be more independent: one test shouldn't impact another.

We also had lots of code that did mostly the same thing. I put this down to four development teams constantly adding code. It wasn't always obvious where you should look for things, or where to put useful code.


Helpers And Models

We needed to create a way to support independent tests [5]. We needed ways to manage users, charges, and session information away from the UI. This is why we created data models and helpers. Helpers, in our case, are classes used primarily to set up and tear down state for our tests.

The model classes contained all the useful information about a data item. For example, the user model included names, email, and password. These models abstracted the details into classes. A test could use this model whenever it needed a user object.
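As a minimal sketch (the fields are illustrative, not our actual model), such a model can be little more than a Python dataclass:

    from dataclasses import dataclass

    @dataclass
    class User:
        first_name: str
        last_name: str
        email: str
        password: str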

The helper classes perform useful actions in our code base, often using models. To create a user, the create_user helper interacts with the relevant API or database.
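A hedged sketch of such a helper, using the User model sketched above and a hypothetical users API (the real helper talked to the relevant HMLR API or database):

    import requests

    def create_user(user: User) -> User:
        # Arrange a user through the API, so no test has to do it via the UI.
        response = requests.post(
            "https://api.example.gov.uk/users",
            json={
                "first_name": user.first_name,
                "last_name": user.last_name,
                "email": user.email,
                "password": user.password,
            },
        )
        response.raise_for_status()
        return user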


Figure 1. Flow diagram of the create_user helper, visualising the use of helpers and models outlined above.

We stored these in one place, so anyone could update or create functions if needed. Not only did we have code following better Object Oriented principles, we now had ways to set up and tear down data.


Independence of Test(s)

In general, UI tests exercise most of the application stack, which means it can be hard to nail down issues. We used our helpers to get our UI tests into the correct state, ready to perform the test. Most tests follow a simple pattern: get onto the right page, perform our action, check our result. In short, the "Arrange-Act-Assert" approach, well explained by Automation Panda [6].

This idea was key to our approach for more atomic UI tests. We used our helpers to arrange our users, charges, and browser sessions (so we used the UI as little as possible). Then our UI test would perform its action and assertion.

Each test was responsible for its own arrangement and cleanup. We leveraged Cucumber hooks for this, resulting in tests (passed or failed) having no bearing on one another. This reduced flakiness, because we were only using the UI to perform the actual test, not to set up or clear state.
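A minimal sketch of what that looks like, using behave (a Cucumber-style framework for Python) rather than our exact stack, with illustrative helper and module names:

    # environment.py
    from helpers.users import create_user, delete_user  # hypothetical helper module
    from models.user import build_test_user             # hypothetical model factory

    def before_scenario(context, scenario):
        # Arrange: create the data this scenario needs via the API helpers,
        # so the UI is only used for the behaviour under test.
        context.user = create_user(build_test_user())

    def after_scenario(context, scenario):
        # Clean up whether the scenario passed or failed, so no scenario
        # leaks state into the next one.
        delete_user(context.user)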


Reflections

It took us a lot of learning and a lot of work to reduce flakiness. We still encounter it today, as does every company using UI automation. While the technical learning and approach are interesting, my takeaway from the experience is that we need to keep asking questions. If we ask questions, we can find better ways to do things. We can learn from what others have done and apply that to our situation. It's also difficult to keep doing the "right things" and not slip back into bad habits. I'm guilty of doing the wrong thing sometimes, as is any team.


UI Standard

A year or two later, I worked with people in our test community to create guidelines on how to approach UI testing. I won't go into the detail here, but we came up with a Page Object Model [4] implementation. I added a wiki and example tests to our existing skeleton. Feel free to have a look at the repository [1]. The standard provided guidance on how we could write tests, something we lacked before. It incorporated lots of different experiences and views from the practice. It's intended to provide support to teams, and a way for us to share experiences and better ways of doing things.
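To give a flavour of the pattern (the page and locators here are illustrative; the standard's own examples live in the repository), a page object in Python with Selenium might look like this:

    from selenium.webdriver.common.by import By

    class LoginPage:
        EMAIL_INPUT = (By.ID, "email")
        PASSWORD_INPUT = (By.ID, "password")
        SUBMIT_BUTTON = (By.CSS_SELECTOR, "button[type='submit']")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, email, password):
            # Tests read as intent ("log in") while the locator detail stays
            # in one place, so UI changes are cheaper to absorb.
            self.driver.find_element(*self.EMAIL_INPUT).send_keys(email)
            self.driver.find_element(*self.PASSWORD_INPUT).send_keys(password)
            self.driver.find_element(*self.SUBMIT_BUTTON).click()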


If I Could Start Again?

Since my time on Local Land Charges, I've worked on different projects. Most recently, a new project to provide a "common HMLR account solution". The UI testing uses the new UI standard, helpers, and models to create more atomic tests. This project has more manageable UI automation, thanks in part to lessons learned on LLC. I appreciate it's a different project with different constraints. However, having those foundations and goals in place from the beginning has made all the difference. But it's never too late to improve.


So what can you take away?

In short:

  • Keep trying to improve

    • Even little things help

    • People have dealt with similar problems, use their experience

    • Find what helps you

  • Ask lots of questions

    • About what you are doing

    • Why you are doing it

  • Experiment

    • Try new things

    • Sticking to "what's always been done" is expensive and a road to madness




Author Bio

Aaron Flynn is working with delivery teams at HMLR in Plymouth, UK, where he's lived for the last five years. Originally from Dublin, he's worked across the UK and Ireland. He's passionate about communities, accessibility, and technical testing, to name a few things.

He's a community lead for the HMLR test community, works with other communities at HMLR, has spoken at cross-government meetups, and (of course) is active on MoT. He loves collaborating with people to share ideas and experiences.

You can find him on The Club and Twitter. He periodically writes on his blog, thanks to the MoT Bloggers Club.



References
