
Where Am I And Where Is My Test Data? Enhancing Testability Of Location Services

Uncover the challenges and strategies for testing mobile applications that use location services

“Mobile applications can be among the most difficult to test because of their reliance on background processes like location services. Enhancing testability is vital to the testing effort. Thanks to the Controllability, Observability, Decomposability, and Simplicity (CODS) model, we have a great framework to guide us on the journey to optimal testability.”

Location Services: Critical To Mobile Functionality, Tricky For Testing

Recently, I’ve been working on a mobile application that has provided a big testing challenge. The application in question works in the background, using location tracking services. The end user frequently keeps the device in their pocket while they go about their business. Our application generates insights for them based on locations they visit. 

Applications based on location tracking can be time-consuming to test, as you need to move around just as the end user would. You can't rely entirely on mocks and location spoofing, since you need to test in the physical world too. Plus, there are many different configurations to take into account: many devices have WiFi-assisted location tracking, for example.


In short, to test the application well, we needed better testability. 

Improving Testability: The Controllability, Observability, Decomposability and Simplicity (CODS) Model 

When it comes to testability, it's important to have a model in mind. My friend Rob Meaney created the Controllability, Observability, Decomposability, and Simplicity (CODS) model for exactly this purpose.

  • Controllability is the ability to control the system so you can reproduce each important state.
  • Observability is the ability to observe everything important in the system.
  • Decomposability is the ability to analyze the system as a series of independently testable components.
  • Simplicity is how easy the system is to understand.

Suggest to your team that you start using this model before a single line of product code is written. It is far easier to build in testability at that stage. 

I’ll break down some of the improvements we made to our application as we built it. In our story, we started with decomposability, because being able to test each component independently means getting feedback as early as possible.

Decomposability In Practice: Choosing Third-Party Libraries And Building Custom Interfaces

Since we were building a mobile app, we needed to decide how to interact with device location tracking. So, to avoid reinventing the wheel, we evaluated several third-party libraries that provide tracking services. Using third-party libraries can save the team a lot of time and effort. However, you need to choose carefully. 

Based on my team’s experience, I suggest:

  • Thorough research. Popularity isn’t everything, but with third-party libraries it can be telling. To begin with, you can look for large numbers of users and star ratings. What’s more, the library should be actively maintained, which is often reflected in recent issue fixes and robust unit tests. It’s worth joining the online forum for the library if one exists. And remember, especially if it's open source, be kind in your interactions.
  • Build your own interfaces. This is where decomposability comes in. With custom interfaces, you can add high-quality logging. And a well-instrumented interface will help you uncover bugs in the third-party library’s communication with your business logic (see the sketch after this list).
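
For illustration, here is a minimal Kotlin sketch of the idea. The names (ThirdPartySdk, LocationTracker, ThirdPartyTrackerAdapter) are hypothetical stand-ins rather than any real library or our production code; the point is that every call crossing the boundary gets logged.

// Hypothetical stand-in for whichever third-party SDK you choose.
interface ThirdPartySdk {
    fun start()
    fun stop()
    fun registerCallback(callback: (lat: Double, lon: Double, accuracyMetres: Float) -> Unit)
}

// Our own interface: the rest of the app depends on this, not on the SDK.
interface LocationTracker {
    fun startTracking()
    fun stopTracking()
    fun onLocationUpdate(handler: (lat: Double, lon: Double, accuracyMetres: Float) -> Unit)
}

// Adapter that wraps the SDK behind our interface and logs every call
// that crosses the boundary, so we can see what the library was asked
// to do and what it sent back.
class ThirdPartyTrackerAdapter(
    private val sdk: ThirdPartySdk,
    private val log: (String) -> Unit   // swap in your logging tool of choice
) : LocationTracker {

    override fun startTracking() {
        log("startTracking requested")
        sdk.start()
    }

    override fun stopTracking() {
        log("stopTracking requested")
        sdk.stop()
    }

    override fun onLocationUpdate(handler: (Double, Double, Float) -> Unit) {
        sdk.registerCallback { lat, lon, accuracy ->
            log("location update: $lat, $lon (accuracy ${accuracy}m)")
            handler(lat, lon, accuracy)
        }
    }
}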

If you as a tester can get involved in decisions on third-party library usage and interface design, your testing journey will be far easier.

Controllability In Practice: Setting Application States

You can imagine how much variety there is when using a mobile application that depends on location tracking. Driving instead of walking, staying still for a while, losing the positional signal, or going through a tunnel are just a few examples. There are lots of “that’s weird” moments. Gathering feedback from your own device as you move around is a boon for your testing. Yet this only goes so far, especially when diagnosing bugs, because the infinite variety of possible locations makes it much harder to reproduce what you find. And reporting bugs that are hard to reproduce doesn’t help your relationship with your developers.

To test code thoroughly you want to be able to set the application to its most important states. This requires controllability. You need tools to assist you, and in the realm of location services, GPS Exchange Format (GPX) files containing stops and routes to define a journey are the name of the game. 

We needed to:

  • Draw routes on a map so we could simulate a device in motion.
  • Simulate stops of varying durations at different locations.
  • Export those routes and stops to GPX files to include in bug reports (a sample GPX file follows this list).
  • Import and explore GPX files generated on the device during travel.
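
To make that concrete, here is a small, hand-written GPX file of the kind we could attach to a bug report: one waypoint marking a stop and one short track simulating a device in motion. The coordinates and timestamps are invented for illustration.

<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="example" xmlns="http://www.topografix.com/GPX/1/1">
  <!-- A stop: a named waypoint the app should recognise as a visit -->
  <wpt lat="53.7997" lon="-1.5492">
    <name>Coffee shop, 20 minute stop</name>
  </wpt>
  <!-- A route: timestamped track points simulating movement -->
  <trk>
    <name>Walk to the station</name>
    <trkseg>
      <trkpt lat="53.7997" lon="-1.5492"><time>2023-05-01T09:00:00Z</time></trkpt>
      <trkpt lat="53.7975" lon="-1.5470"><time>2023-05-01T09:03:00Z</time></trkpt>
      <trkpt lat="53.7961" lon="-1.5478"><time>2023-05-01T09:06:00Z</time></trkpt>
    </trkseg>
  </trk>
</gpx>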

We used three classes of tools to augment our testing: 

With this toolset, we could easily share information about bugs and exploratory testing. Data generated by physical devices was key, and we could then recreate specific scenarios using targeted GPX files.

Observability In Practice: Gathering Information About Location Services

Controllability and observability are two sides of the same coin. You need to be able to see if you have set the application into its most important states. Also, you need to be able to see what happens when you are moving between states. 

These days, observability is a hot topic, and you’ll find plenty of tools of varying cost and complexity. I urge you to do some research before you invest in any one tool.

We implemented the following patterns: 

  • Classify your important events. Mobile devices generate many log events. And location tracking libraries generate a ton of information in their debug modes. It is eminently possible to lose important information because of the sheer volume. 

To counter this, enumerate all your important events, giving them unique IDs and human-readable names:
// Important tracking events, each with a unique ID and a readable name (shown here as a Kotlin enum)
enum class TrackingEvent(val id: Int) {
    NotSet(0),
    NotInitialised(10000),
    Initialised(10001),
    NotAuthenticated(20001),
    Authenticated(20002)
}

For our team, reviewing this log data was also a great group exercise for deciding where we truly needed logging.

  • Choose a centralized logging tool. Since you will often be testing on the move, you need a way to get log events off your device. All devices should log to the same tool. Finding patterns within logs from many devices helps to contextualize the problems you find. If a problem appears to be common across devices, it probably merits further investigation. We used Bugfender, but there are many others available (see the sketch after this list for one way to funnel classified events into such a tool).
  • Add a hidden development menu. As well as exporting logs to a centralized location, you need to be able to see what is happening on the device itself. For this purpose, we added a special submenu, well hidden in the app’s “About” screens. It enabled us to trigger insights from the app while on the move. To start with, we added the ability to list the locations we had visited and how long we had spent at each stop. As the testing evolved, we added more to this menu to help us.
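
As a sketch of how the classified events can reach a centralized tool, assuming Kotlin: RemoteLogger stands in for whichever logging SDK you adopt (Bugfender or otherwise), and its API here is invented for illustration.

// Thin abstraction over the remote logging SDK you choose.
interface RemoteLogger {
    fun log(tag: String, message: String)
}

// Single funnel for the classified events from the enum above, so every
// device reports the same event IDs and names to the same place.
class EventReporter(private val remote: RemoteLogger) {
    fun report(event: TrackingEvent, detail: String = "") {
        remote.log(
            tag = "LOCATION",
            message = "${event.id} ${event.name} $detail".trim()
        )
    }
}

// Usage: EventReporter(myRemoteLogger).report(TrackingEvent.Initialised, "after cold start")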

Simplicity In Practice: Optimizing Test Automation For Location Services 

Mobile app test automation is difficult, and even more so if you are running it within a pipeline. Our kind of app also needs to keep running in the background, which makes UI-level automation harder still. And if you’re using emulators for any of your testing, don’t rely on the location data they provide. Keep it simple: focus on unit and component tests, as they are much more reliable, and add end-to-end tests sparingly.

Focus on three areas:

  • Initialize your location tracking library correctly. Usually, you need authentication, device permissions, location accuracy settings and current location. Getting this right will make all later testing much smoother.
  • Monitor changes to the contract between your interface and the location tracking library. Catching a change in that contract late in your testing is expensive.
  • Verify that the interface handles business logic correctly (see the sketch after this list).
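
To show what the third point can look like in practice, here is a minimal Kotlin component test sketch. VisitDetector and its naive dwell rule are hypothetical stand-ins for your own business logic; the value is that fake location fixes exercise it without a device, an emulator, or a real journey.

import kotlin.math.abs

// A single location fix, with time expressed as minutes since the journey began.
data class Fix(val lat: Double, val lon: Double, val elapsedMinutes: Int)

// Hypothetical business rule: a visit is a dwell of at least ten minutes
// in roughly the same place. Deliberately naive for the sake of the sketch.
class VisitDetector(private val dwellMinutes: Int = 10) {
    fun detectsVisit(fixes: List<Fix>): Boolean {
        if (fixes.size < 2) return false
        val first = fixes.first()
        val last = fixes.last()
        val stationary = abs(first.lat - last.lat) < 0.0005 &&
            abs(first.lon - last.lon) < 0.0005
        return stationary && (last.elapsedMinutes - first.elapsedMinutes) >= dwellMinutes
    }
}

fun main() {
    val detector = VisitDetector()

    // Twelve minutes sitting in one coffee shop: should count as a visit.
    val dwell = listOf(
        Fix(53.7997, -1.5492, elapsedMinutes = 0),
        Fix(53.7997, -1.5493, elapsedMinutes = 6),
        Fix(53.7996, -1.5492, elapsedMinutes = 12)
    )
    check(detector.detectsVisit(dwell)) { "expected a visit to be detected" }

    // Still moving after twelve minutes: should not count as a visit.
    val moving = listOf(
        Fix(53.7997, -1.5492, elapsedMinutes = 0),
        Fix(53.7900, -1.5300, elapsedMinutes = 12)
    )
    check(!detector.detectsVisit(moving)) { "did not expect a visit to be detected" }

    println("Component test sketch passed")
}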

Keeping these aspects in mind as you build out test automation will minimize false starts. Considering the lengthy build times for mobile apps, saving testing time is of the essence.

To Wrap Up…

We found countless interesting problems during testing. One of my favourite silent-but-deadly bugs had to do with devices automatically protecting battery use by limiting location tracking. 

Mobile applications can be among the most difficult to test because of their reliance on background processes like location services. Enhancing testability is vital to the testing effort. Thanks to the CODS model, we have a great framework to guide us on the journey to optimal testability.

Give the CODS model a try yourself and let us know in the comments how it went!

Ash Winter

Tester & Co-Author

Ash Winter is a consulting tester and conference speaker, working as an independent consultant providing testing, performance engineering, and automation of both build and test. He has been a team member delivering mobile apps and web services for start-ups, and a leader of teams and change for testing consultancies and their clients. He spends most of his time helping teams think about testing problems, asking questions and coaching when invited.


