A Must For Mobile Testing: Device Farms

Eduardo Fischer dos Santos shares his thoughts on why device farms are a must for mobile testing.

When I started my career in testing, life was simple. I tested a web application in only two browsers. I could expect that the vast majority of our clients would use Windows. I needed to account for only two kinds of input: mouse and keyboard. I’m not saying that testing web apps is an easy job; I certainly don’t think that. But when I moved into mobile app testing, suddenly there were many new variables I had to take into account that I had not foreseen.

Every week I would test the app to exhaustion. We had manual tests, unit tests, static analysis, and E2E, and yet every week we received a message on our Slack channel informing us that some users were having issues. Sometimes it was performance, like a screen taking too long to load; sometimes a client simply reported that the app displayed a black screen and was unusable. The team would try to reproduce the bug with no success; somehow it would always elude us.

Why Is Reproducing Mobile App Bugs So Hard?

When dealing with web applications, testing is almost hardware agnostic compared to mobile applications. That may be a pretty bold statement, so let me explain what I mean. Imagine that you booted your computer, opened your usual browser, and tried to log in to your favorite website, only to be greeted with the message “Wrong password”. “Impossible,” you think, “I’m certain it is my password; it must be a bug.” Worried, you contact customer support, and they confirm that it is a bug and that they will need some information to be able to handle it.

Now let’s change roles. Imagine you need to reproduce the issue in question. What information do you need from the customer? What would you ask if you could? Knowing the browser and operating system is certainly important, so that’s a good question to ask. But at the same time there is a lot of information you might be able to dismiss. “What mouse and keyboard are you using? Are you using a notebook or a desktop?” are some good examples.

Imagine that instead of a web app we had a mobile app. What questions would we need to ask? “Samsung or iPhone?” “What version is your OS?” “Is your battery running low?” “Have you tried using touch ID or face ID to login?” “Did you change phones recently?” “Is airplane mode on?” The number of variables related to the state of the phone itself can become overwhelming. You are always left wondering which one is related to the bug and is necessary to reproduce it.

A Dose Of Good Luck: Device Farms

Even though we have trouble reproducing and predicting where bugs may occur, it’s not as bad as testing rocket or airplane systems. In that field, we would need to predict issues caused by temperature, wind speed, intense radiation, and other variables, and we can’t simply run our tests on real vehicles; we have to rely on simulators. It’s still easier for those of us testing mobile apps, though: we can rent devices from Google, Amazon, or Microsoft to build our device farms.

A device farm is basically a group of many devices (phones, computers, or even medical equipment), rented or owned, virtual or real. With these farms, we can test our software on a variety of hardware. Buying lots of smartphones for testing purposes is expensive, both in cash and in the staff needed to set up and manage such a system. A natural alternative is to rent them as needed through cloud platforms.

With device farms, bugs that were impossible to reproduce without a specific OS version or a certain brand of phone are now reproducible even if no one on your team has the specific hardware on hand. All you need to do is go to your device farm platform, choose your OS version, upload your app, and start testing.

Imagine the following situation. A lot of clients with a specific brand of device are complaining that your app is not working, but no one in the tech team (or even the company) has the same brand. What’s more, the bug they are complaining about was not detected in the automated test suite. How do we reproduce this bug and how do we prevent it from happening again?

To reproduce the bug, upload your app to your favorite device farm and start an emulator or a physical device with the same brand and OS version your clients use. Through manual testing, try to identify when the bug occurs. After reproducing it, upload your automated test suite to the device farm and run it to see whether it catches the bug on that specific smartphone, so you understand why the tests didn’t catch it before. Make adjustments if necessary and add the brand to your list of devices under test. There it is: never again will you need to run around the company trying to find someone with the same smartphone and configuration as your clients.

Using Device Farms To Detect Bugs Before Deployment

I explained above how we can use device farms to reproduce bugs. However, we don’t want to simply reproduce production bugs; we want to avoid shipping bugs to production in the first place. Running a suite of manual tests on one device already consumes a certain amount of time, so executing the whole suite manually for every device is not feasible. We need a different solution: executing automated tests on several devices, preferably all at once.
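
The fan-out itself can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: `run_suite` is a hypothetical stand-in for whatever call submits your suite to one farm device, and the device list is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical device matrix; real farms let you pick model and OS version.
DEVICES = [
    {"model": "Pixel 6", "os": "13"},
    {"model": "Galaxy S21", "os": "12"},
    {"model": "iPhone 13", "os": "16.4"},
]

def run_suite(device):
    """Placeholder for submitting the suite to one farm device.

    In practice this would call your vendor's API or CLI; here it
    just returns a fake 'passed' result so the sketch is runnable.
    """
    return {"device": device["model"], "passed": True}

def run_on_all_devices(devices):
    # Fan the suite out to every device concurrently instead of serially.
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        return list(pool.map(run_suite, devices))

results = run_on_all_devices(DEVICES)
```

The point of the sketch is the shape: one submission function, one device matrix, and a pool that runs them concurrently so total wall-clock time stays close to the slowest single device.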

Running our continuous integration tests on even one device is already far from a trivial task; on many devices, the task is a behemoth. There are a lot of new and different issues you will have to deal with compared to single-device testing.

Tests that were tried and true in the past may become less reliable, since you will be running your whole suite on many devices with different configurations. You will need to handle users and tests more carefully. In a single-device test, you could have one user log in and execute a task, but with multiple devices you are likely to run into problems if the same user logs in on all of them. So you will need a different user for every running device.
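
One simple way to enforce this is to keep a pool of dedicated test accounts and pair each device with its own account before the run starts. The account names and device list below are made up for illustration:

```python
# Hypothetical pool of dedicated test accounts, one per farm device,
# so parallel runs never fight over the same session.
TEST_USERS = ["qa_user_01", "qa_user_02", "qa_user_03"]
DEVICES = ["Pixel 6", "Galaxy S21", "iPhone 13"]

def assign_users(devices, users):
    """Deterministically pair each device with its own test account."""
    if len(users) < len(devices):
        raise ValueError("Need at least one test account per device")
    return dict(zip(devices, users))

assignments = assign_users(DEVICES, TEST_USERS)
```

Doing the assignment up front, rather than letting each test pick a user, makes collisions impossible by construction and keeps failures reproducible: a given device always runs as the same account.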

What’s more, you will need to handle the state of the system more carefully. Let’s say you are making an app to buy movie tickets. If many test devices try to buy the same seats, the tests are going to fail. But if you handle all these things correctly, you will have an incredibly robust testing apparatus.
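
The same idea works for shared state: give every device run its own disjoint slice of the data it mutates. For the movie-ticket example, a round-robin partition of the seat map (a hypothetical one, invented here) guarantees that parallel purchase tests never collide on the same seat:

```python
# Hypothetical seat map: rows A and B, seats 1 through 4 (A1..B4).
SEATS = [f"{row}{num}" for row in "AB" for num in range(1, 5)]

def seats_for_run(run_index, total_runs, seats=SEATS):
    # Round-robin partition: run i gets every total_runs-th seat,
    # so the blocks are disjoint and together cover the whole map.
    return seats[run_index::total_runs]

# Three parallel device runs, each with its own block of seats.
blocks = [seats_for_run(i, 3) for i in range(3)]
```

Any disjoint partition works; the round-robin slice is just the shortest way to write one.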


Device farms have solved big challenges in mobile testing to the point where they have become crucial to a successful mobile testing strategy. Without them, our only alternative is to test on only one device at a time, which means we can only “guarantee” quality for one device. Even though implementing and adjusting our pipelines and test suites to handle multiple devices can be challenging, convincing our entire user base to use only one brand of smartphone is impossible.

The technical aspect of instrumenting tests to run in device farms can be challenging, but implementing this type of solution in your pipeline will certainly pay off, regardless of the vendor you choose.

For More Information…

How To Grow Your Mobile Application Testing Skills, Maxim Zheleznyy

Dealing with Device Fragmentation in Mobile Games Testing, Ru Cindrea

AWS Device Farm

Google's Firebase Test Lab

Kobiton Test Platform

About The Author

Eduardo Fischer is a quality assurance engineer who focuses on mobile test automation and is currently working for an international stockbroker. He discovered his passion for testing during his time as a back-end developer and eventually made testing his full-time job. You can find him on LinkedIn.
