Crowdsourced Mobile Application Testing for Performance and Usability

With so many mobile device types on the market, along with the problems of testing on emulators, application developers and quality engineers are looking for new ways to test their Android and iOS applications. Here we talk to uTest's Roy Solomon about 'testing in the wild' and how crowdsourced testing is helping improve mobile application quality.

It’s Roy Solomon’s job to break the mobile application the software development team worked so hard to develop, or at the very least, to put the product of a mobile developer’s time and energy through as many hoops and over as many hurdles as possible. In this case, that means putting it in the hands of thousands of crowdsourced users. Solomon is the VP of product management and co-founder of uTest, a company that doesn’t simply test software in a lab. uTest has a community of over 70,000 real-world users who test mobile applications until the handheld devices cry for mercy, an approach Solomon calls "testing in the wild." If your app can survive, it will come out of the experience fitter than it was before.

How does crowdsourced app testing work?

Selecting the right testers is the key to uTest’s success. Anyone can sign up at uTest to be a software tester, but not all are chosen to actually participate. They have to be vetted first. Roy’s team decides who has the right experience level, skill set, device, hobbies, interests, and so forth to qualify for each assignment. This ensures that an app is tested by the right kind of people – those who are most likely to offer relevant feedback.

As a reward for providing valuable information (such as coverage, usability and performance reports), these testers are offered monetary compensation. They also get to try out the coolest cutting-edge technology before anyone else. So it’s not difficult for uTest to find willing, qualified volunteers to participate.

Testers are ready 24/7, in locations all around the world, to test factors like:

  • Network density
  • App response on specific devices
  • Different battery states on the device
  • Different types of network (Wi-Fi, 4G, etc.)
  • How real users interact with the app

The richness and variety of this in-context data is unparalleled in any lab. Even if it were possible to create an artificial environment that could mimic real-world use on this scale, doing so would be incredibly time-consuming and costly. Solomon’s company gives developers access to infrastructure that’s already in place.
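To see why replicating this in a lab is so costly, a quick back-of-the-envelope sketch helps. The counts below are illustrative assumptions, not uTest figures, but they show how quickly the device/network/battery/location matrix explodes:

```java
// DeviceMatrix.java -- rough estimate of how many test configurations a lab
// would have to provision to match in-the-wild coverage.
// All counts below are illustrative assumptions, not uTest data.
public class DeviceMatrix {
    public static void main(String[] args) {
        int deviceModels = 500;                        // distinct handset models
        String[] networks = {"Wi-Fi", "4G", "3G", "EDGE"};
        int batteryStates = 3;                         // full, mid, low/power-saving
        int locales = 20;                              // regions with different carriers

        long combinations = (long) deviceModels * networks.length
                          * batteryStates * locales;
        System.out.println("Configurations to cover: " + combinations);
        // A crowd of real testers already embodies much of this matrix;
        // a lab would have to buy, provision and schedule each combination.
    }
}
```

Even with these modest assumptions, the matrix runs to six figures, which is the core economic argument for crowdsourced coverage.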

What do app designers discover?

Some of the nasty surprises app developers are glad to find out about before their app is released commercially include:

  • Problems with scale resolution on different devices (scaling on the device is usually a bad idea)
  • Issues with memory and CPU consumption (profiling on real devices can show where memory is actually being consumed)
  • Battery usage (everyone hates an app that drains their device)
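The scale-resolution problem in particular is easy to see with a little arithmetic. Android lays out interfaces in density-independent pixels (dp), where px = dp × (dpi / 160); the example below is a minimal sketch using the standard Android density buckets to show why a hard-coded pixel value renders at very different physical sizes across devices:

```java
// DensityScaling.java -- why one layout value renders differently across
// devices. Android converts density-independent pixels (dp) to raw pixels
// with px = dp * (dpi / 160). The dpi values below are the standard
// mdpi/hdpi/xhdpi/xxhdpi density buckets.
public class DensityScaling {
    static int dpToPx(int dp, int dpi) {
        return Math.round(dp * (dpi / 160f));
    }

    public static void main(String[] args) {
        int buttonDp = 48;                    // recommended minimum touch target
        int[] dpis = {160, 240, 320, 480};    // mdpi, hdpi, xhdpi, xxhdpi
        for (int dpi : dpis) {
            System.out.println(dpi + " dpi -> " + dpToPx(buttonDp, dpi) + " px");
        }
        // An app that hard-codes raw pixel values instead of dp will draw
        // tiny touch targets on high-density screens -- exactly the kind of
        // bug that only shows up on a spread of real devices.
    }
}
```

A 48 dp button needs three times as many raw pixels on an xxhdpi screen as on an mdpi one, which is why pixel-based layouts that look fine on the developer's own phone break on other hardware.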

Besides these common bugs, Roy says that the rise of location-aware apps is posing new challenges. Now more than ever, having testers on the ground in every location where software architects expect to distribute an Android app is critical for realistic testing. Of course, a project manager doesn't have to hire a company like uTest to do in-the-wild testing. The IT department could just ask 70,000 of its closest friends to help out.


In this video, watch Cameron McKenzie (@potemcam) speak with Roy Solomon at AnDevCon 2012.

Recommended Titles

Head First Mobile Web by Lyza Danger Gardner
Professional Android 4 Application Development by Reto Meier
Mobile Development with C#: iOS, Android, and Windows Phone by Greg Shackles
What's New in Java 7? by Madhusudhan Konda
The Well-Grounded Java Developer by Martijn Verburg


Join the conversation



Crowdsourcing can be a very effective tool in mobile device testing, especially with respect to load and performance. Where the problem comes in is in functional testing. I’ve seen many (many) instances where the “testers” that were used on a project lacked the skills needed to be good testers. But, for driving load, a mindless automaton can be a good thing.
Freelancers at uTest have to build their reputation first before being selected for paid projects. In fact, I know a few really strong testers who have uTest experience in their backgrounds.