A mobile testing maturity model

by Jess Ingrassellino

Mobile testers have concerns that don’t fit traditional test maturity models: multiple devices and operating systems to test on, unpredictable network conditions that can affect app performance, new ways of interacting with the software itself, new risks such as temperature and battery life, and more complex customer use cases. But you can still adapt traditional maturity models to include mobile testing concerns.

The TMMi model is a popular example of a traditional testing maturity model that can be adapted to mobile testing and used as a roadmap for improvement. The TMMi framework contains five levels of testing maturity. After taking the self-evaluation, which has been modified to include mobile testing concerns, you’ll see where your testing organization stands and then receive guidelines to move over time from the lower, less mature levels of the mobile TMMi model to the higher levels.

[Figure: The TMMi model]

Self-evaluation

As Watts Humphrey, known as the father of software quality, once said, "If you don’t know where you are, a map won’t help." So we’ll start with a self-assessment to help you determine where your team is on the mobile-test maturity scale. The assessment will first tell you your level and then offer suggestions for what to work on next. Take this quiz, and find out where your team fits.

Where does mobile testing fall in your software development lifecycle (SDLC)?

  1. It doesn't. We test mobile whenever we feel like it. (0 points)
  2. We test mobile regularly, but only just before the feature comes out. (1 point)
  3. Mobile testing is included in our regular SDLC. (2 points)
  4. Mobile testing occurs early in our SDLC, along with other processes. (3 points)
  5. We consider mobile testability during the design phase (we have a “mobile first” mentality). (4 points)

What kinds of devices, simulators, or emulators do you use to conduct testing?

  1. We just use whatever we have. My phone, the project manager's phone—it doesn't matter. (0 points)
  2. We have some dedicated mobile devices or reliable emulators/simulators for testing our most popular devices and operating systems. We discovered these by implementing mobile analytics software on our app. (Google's Universal Analytics, Yahoo's Flurry, and Apple's App Analytics are all free to use.) (1 point)
  3. We have devices or emulators/simulators that map to all of our customer use cases, and we test all applicable software versions, too (e.g., iOS 9 and 10). (2 points)
  4. We have automated test suites for real-device and emulator testing. (3 points)
  5. We have a robust mobile device farm that we use to run automated tests whenever a feature is ready to be tested, along with policies for when to use what type of emulator, simulator, or physical device. (4 points)

What do you test?

  1. We just use the app without any specific test ideas in mind. (0 points)
  2. We test basic use cases that we believe will have the biggest customer impact. (1 point)
  3. We test basic use cases and edge cases that will affect all of our customers. (2 points)
  4. We have a robust suite of mobile regression tests and we conduct exploratory manual testing on a regular basis. (3 points)
  5. We have a robust suite of manual and automated mobile regression tests, and we conduct exploratory manual testing on a regular basis. (4 points)

How do you describe your mobile testing practices?

  1. We have no idea. (0 points)
  2. We know that we will run through some test scenarios before we release, and we try to do this in every release. (1 point)
  3. Mobile testing is part of our regular planning processes. We have reliable devices and emulators to use for any situation that might arise. (2 points)
  4. We have reliable mobile test practices that are embedded in our SDLC. In addition to having reliable test devices and emulators available to testers and developers, we use developer and tester feedback to improve upon current practice. (3 points)
  5. We design experiments with our mobile devices and use feedback from these experiments to improve how mobile applications are tested in our product. (4 points)

What does your intake step for new risks, training, and changes to policy look like?

  1. If we had a problem last time and remember it, we’ll test for it. (0 points)
  2. We have defined policies for how we test mobile devices that we periodically update when new devices are released and customer usage patterns change. (1 point)
  3. We have a formal test organization and a training program. This organization looks for new risks and revises our policies when necessary. (2 points)
  4. We have numerical quality measurements. If a project falls below quality standards, we can take corrective action. (3 points)
  5. Our test organization conducts planned experiments to test if changes actually address the new risks. (4 points)

Your total score from the five sections determines your team's mobile maturity level. 

0-4 = Initial

5-9 = Managed

10-14 = Defined

15-19 = Measured

20 = Optimizing

Understanding the levels

In the initial level of mobile test maturity, testing is an ad hoc activity that is done differently by different people at different times. Initial teams have no formal process to consider new risks, such as the time it takes to upload a new build to an app store, the effect of third-party APIs being down, or the impact of lost connectivity. As a result, outcomes at the initial level are inconsistent, and testers work without repeatable processes.

At the managed level, the team has a set of defined expectations for which devices are tested when, though that testing generally happens later in the lifecycle.

The defined level adds a formal test organization that:

  • Controls the policies
  • Conducts reviews in a formal manner
  • Captures nonfunctional tests (which become increasingly important on mobile)
  • Implements a training program

The measured and optimizing levels treat testing as a science by defining numerical measurements for test results and then conducting experiments to see if those results improve.

Here’s how to get there.

Becoming more mature

Once you’ve assessed your team, the next step is to make a plan for improvement. Here are quick ways to move through the stages, from initial to optimizing.

Moving from initial to managed

This is perhaps the trickiest move, because being at an initial level means that you have no defined process. Instead, people use their judgment and skill to do what needs to be done, when it needs to be done. Use these steps to define and document the way mobile testing is done.

Add mobile testing into your SDLC

  • Make a specific time for mobile testing to happen in a regular cadence. That can be as simple as mandating that mobile testing happens before a story is considered done, and once more before a release.
  • Define the types of testing to be accomplished, including functional, load, and any mobile-specific tests such as battery, heat, gyroscope, magnet, location, or loss/gain of connectivity (see the sketch after this list).
  • Dedicate time to mobile testing, accounted for by the team (in agile development) or on the project plan (in traditional approaches). That includes time to upload the software to the app stores and to walk through the customer’s end-to-end download/install/run process (for native applications).
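
To make those mobile-specific tests concrete, here’s a minimal sketch of a connectivity-loss check written with the Appium Python client. Everything product-specific is an assumption: the package name, activity, element ID, and server URL are hypothetical placeholders, and the check assumes an Android emulator or device where Appium can toggle the network.

```python
# A minimal connectivity-loss check (sketch). Assumes an Appium 2 server at
# localhost:4723 and an Android emulator; the app identifiers are hypothetical.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.app_package = "com.example.myapp"      # hypothetical package
options.app_activity = ".MainActivity"         # hypothetical activity

driver = webdriver.Remote("http://localhost:4723", options=options)
try:
    # Appium's Android network bitmask: 1 = airplane mode, 6 = all networks on
    driver.set_network_connection(1)           # simulate losing connectivity
    banner = driver.find_element(
        AppiumBy.ID, "com.example.myapp:id/offline_banner"  # hypothetical ID
    )
    assert banner.is_displayed(), "App should surface an offline state"
    driver.set_network_connection(6)           # restore connectivity
finally:
    driver.quit()
```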

Define which devices will be tested and how much testing to do on each device

  • Learn what devices your customers use by implementing mobile analytics software on your app. Universal Analytics by Google, Flurry by Yahoo, and Apple App Analytics are all free to use. 
  • Get dedicated devices that align with the most popular devices your customers use. To keep your budget under control, set up dedicated emulators/simulators for less commonly used devices, as indicated by your app analytics.
  • Define levels of support for less common customer devices. For example, on the most popular devices running your app, you will likely do full regression and exploratory testing to ensure a seamless customer experience. However, for less popular devices or more obscure configurations (an older phone running the Opera browser, for example), you might only run a core set of tests.
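
As a rough illustration of those support levels, the Python sketch below maps each device’s analytics-reported usage share to a test tier. The device names, shares, and thresholds are invented; your cut-offs should come from your own analytics data and risk tolerance.

```python
# Map a device's customer-usage share to a level of test coverage (sketch;
# the thresholds and data are illustrative, not recommendations).
def test_tier(share: float) -> str:
    if share >= 0.10:
        return "full regression + exploratory (real device)"
    if share >= 0.02:
        return "core regression (emulator/simulator)"
    return "smoke tests only (emulator, on demand)"

# Hypothetical usage shares exported from your app analytics
device_share = {"device_a": 0.34, "device_b": 0.18, "device_c": 0.04, "device_d": 0.01}
for device, share in sorted(device_share.items(), key=lambda kv: -kv[1]):
    print(f"{device}: {share:.0%} -> {test_tier(share)}")
```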

Write test scenarios

  • Define the way testing will be planned and documented. Be clear about what is expected in terms of “sufficient evidence” that testing was planned and occurred. That may mean that you include test planning as a separate step in the project plan.
  • Test scenarios should include elements that differ between platforms such as phone, tablet, and laptop.
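
One lightweight way to document a scenario along with its platform variations is a structured record like the Python sketch below. The fields and example values are hypothetical, not a prescribed schema.

```python
# A documented test scenario with per-platform variations (illustrative sketch).
from dataclasses import dataclass, field

@dataclass
class TestScenario:
    name: str
    steps: list[str]
    platform_notes: dict[str, str] = field(default_factory=dict)

checkout = TestScenario(
    name="Checkout with saved payment method",
    steps=[
        "Log in with a saved account",
        "Add one item to the cart",
        "Complete checkout with the stored card",
    ],
    platform_notes={
        "phone": "Verify the one-column layout and software-keyboard behavior",
        "tablet": "Verify the split-view layout in both orientations",
        "laptop": "Verify hover states and wider layout breakpoints",
    },
)
print(checkout.name, "->", list(checkout.platform_notes))
```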

Moving from managed to defined

Moving from managed to defined involves putting a structure in place to sustain knowledge. This includes defining roles within the test organization and training new hires.

Test organization

  • Clearly define reporting relationships and lines of authority within the test group. Who has the authority to change the standards over time, and which roles perform what kinds of testing? That does not necessarily mean creating a test management role, but it does mean creating a test management activity. This group will be responsible for the mobile test processes defined below.

Training

  • Summarize the processes with a walkthrough of the way mobile testing is done, along with a simulation to allow employees to learn the tools and process.

Test lifecycle and integration

  • Insert evaluation points within software development for testing to get involved earlier and “shift left.” Make these clear, demonstrable, and documented—for example, as a check-off step in a project management tool or agile tool. One key area here is defining requirements for nonfunctional tests.

Nonfunctional testing 

  • Define methods to test nonfunctional elements, such as performance, load, and security. Provide methods to determine which types of tests to run, how much testing to do, why, and which role does it, to prevent misunderstanding. Mobile devices can have new security concerns—for example, what happens when a phone with no password is stolen?
  • Make a checklist for types of new mobile nonfunctional testing, and explain how the team will determine how much testing to do on this project.
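
As one concrete entry on such a checklist, a low-battery scenario on Android can be simulated with the standard "adb shell dumpsys battery" commands before a test runs. The sketch below assumes a single connected Android device with adb on the PATH; the test body itself is a placeholder.

```python
# Simulate a low battery on Android via adb before a test (sketch).
import subprocess

def set_battery_level(level: int) -> None:
    """Fake the reported battery level on the connected device."""
    subprocess.run(["adb", "shell", "dumpsys", "battery", "unplug"], check=True)
    subprocess.run(["adb", "shell", "dumpsys", "battery", "set", "level", str(level)],
                   check=True)

def reset_battery() -> None:
    """Return battery reporting to the real hardware state."""
    subprocess.run(["adb", "shell", "dumpsys", "battery", "reset"], check=True)

set_battery_level(5)
try:
    pass  # run your low-battery test scenario here
finally:
    reset_battery()
```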

Peer review

  • Define steps for testers to share and review work, such as test planning review, pair testing, or session debriefs. This should include all types of mobile testing, from functionality to device compatibility.

Moving from defined to measured

Test measurement

  • In addition to answering the question of how much testing is done on a project, test measurement should also answer questions about how well testing was done. Measures can be both quantitative (numeric) and qualitative (descriptive) at this level.
  • Determine measures for tests and how those measures will be captured, stored, and retrieved. Example quantitative measures include the time to execute a test case or scenario, the number of tests performed, and how many defects of each severity level the tests are finding. Qualitative measures might include test charters or reports from session-based testing. The particular challenge with mobile is that tests must change constantly to accommodate the continuous arrival of new devices.
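
A minimal sketch of computing those quantitative measures from per-test records might look like the following. The record format is invented; adapt it to whatever your test runner actually emits.

```python
# Summarize simple test measures from hypothetical per-test records (sketch).
from collections import Counter
from statistics import mean

results = [
    {"name": "login", "seconds": 42.0, "defects": ["high"]},
    {"name": "checkout", "seconds": 95.5, "defects": []},
    {"name": "offline_mode", "seconds": 60.2, "defects": ["low", "medium"]},
]

print(f"tests executed: {len(results)}")
print(f"mean execution time: {mean(r['seconds'] for r in results):.1f}s")
severities = Counter(sev for r in results for sev in r["defects"])
print(f"defects found, by severity: {dict(severities)}")
```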

Product quality evaluation

  • This is an overall measure of the product quality prior to release. Testing should be involved in the requirements step to determine product quality goals as requirements for testing. Then testers can report on those goals. For mobile testing, product quality evaluation may include a host of new issues, such as speed of product download, performance on older and underpowered devices, and the experience on the app store.
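
One simple way to make those goals reportable is to record them as numeric thresholds and check release measurements against them. The goal names and numbers below are hypothetical examples of the mobile-specific criteria just mentioned.

```python
# Check hypothetical release-quality goals against measured values (sketch).
quality_goals = {  # "_max" goals are upper bounds, "_min" goals are lower bounds
    "download_seconds_p90_max": 30.0,
    "cold_start_seconds_low_end_max": 5.0,
    "app_store_rating_min": 4.0,
}
measured = {
    "download_seconds_p90_max": 24.3,
    "cold_start_seconds_low_end_max": 6.1,
    "app_store_rating_min": 4.4,
}
for goal, threshold in quality_goals.items():
    value = measured[goal]
    ok = value >= threshold if goal.endswith("_min") else value <= threshold
    print(f"{goal}: {value} ({'PASS' if ok else 'FAIL'} vs {threshold})")
```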

Advanced reviews

  • Advanced reviews combine early-stage testing (static testing of documents and plans) with dynamic testing (the actual software testing). These advanced reviews should be tied into bug reports to determine if bugs escaped during the requirements phase. In this way, retrospectives or other planning meetings can be used to determine how testing can be improved to find issues earlier in the SDLC.

Moving from measured to optimizing

Defect prevention

  • Similar to advanced review, this is a review of defects to look for common root causes. At the optimizing level, defect prevention is a formal process, conducted periodically, that creates artifacts that can be audited.
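
The core of such a review can be as simple as tallying root causes across recent defects so that the most frequent causes drive the next process change. The defect records below are invented for illustration.

```python
# Tally defect root causes for a defect-prevention review (sketch).
from collections import Counter

defects = [  # hypothetical defects with root causes assigned during triage
    {"id": 101, "root_cause": "ambiguous requirement"},
    {"id": 102, "root_cause": "untested device/OS combination"},
    {"id": 103, "root_cause": "ambiguous requirement"},
    {"id": 104, "root_cause": "third-party API change"},
]

for cause, count in Counter(d["root_cause"] for d in defects).most_common():
    print(f"{count}x {cause}")
```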

Quality control

  • Institute testing as a centralized, statistically defined quality process. Define gradients of tolerance for test projects. Evaluate whether testing is within tolerance and, if it is not, take corrective action. With mobile devices, that can mean moving from testing all the devices to sampling them based on customer use patterns, defect patterns, and risk. The same kind of math can decrease the number of devices to test and reduce nonfunctional tests.
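
As a sketch of what that sampling might look like, the snippet below draws a few distinct devices for a test cycle, weighted by a combined usage/defect/risk score. The device names and weights are invented placeholders; real weights would come from your analytics and defect history.

```python
# Risk-weighted sampling of devices for one test cycle (sketch).
import random

devices = ["device_a", "device_b", "device_c", "device_d", "device_e"]
weights = [0.45, 0.25, 0.15, 0.10, 0.05]  # hypothetical usage/defect/risk scores

random.seed(7)  # fixed seed so the test plan is reproducible
sample = set()
while len(sample) < 3:  # pick 3 distinct devices for this cycle
    sample.add(random.choices(devices, weights=weights, k=1)[0])
print(sorted(sample))
```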

Test process optimization

  • Create a formal method to experiment with testing processes. That is, periodically change the process for a project, run the project, and compare measurement results to other projects. If the results improve, change the defined process to make this activity the new standard. Retrospectives on an agile team can accomplish this method if the experiments are formalized.
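
The comparison step can be as plain as putting an experiment’s measurements next to the baseline’s, metric by metric. The metrics and numbers here are invented placeholders.

```python
# Compare a process experiment against the baseline process (sketch).
baseline = {"escaped_defects": 12, "test_hours": 40}
experiment = {"escaped_defects": 7, "test_hours": 36}

for metric in baseline:
    delta = experiment[metric] - baseline[metric]
    print(f"{metric}: {baseline[metric]} -> {experiment[metric]} ({delta:+d})")
```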

Be flexible

This maturity model provides a template for introducing reliable mobile testing processes to your organization. Using a testing maturity model can be helpful for organizations that are rapidly expanding and need to implement mobile testing at scale, alongside other transformations in the organization. This model should be used as a general guide, not as required doctrine. Your organization may need to adapt some of these instructions to fit what it can realistically do.

Additional reading

Getting better at testing is always something an organization should strive for. If you're serious about having a quality mobile application with a valuable user base, then you need to be serious about achieving the best mobile testing process possible.

Educate yourself further about mobile testing best practices by following this learning track, and consider these additional resources to help you improve your testing capabilities.