e-testing Blog

How to Use Metrics and KPIs to Assess the Net Value of Testing

Even as a growing number of organizations implement comprehensive (and costly!) testing methodologies, surprisingly few of them track their investments to determine how profitable their testing strategy is – or fails to be. There are many different metrics that can help, each useful for different reasons. Some metrics tell you whether your company tests thoroughly enough or sufficiently early on, while others measure the effectiveness of the testing you conduct; there are also metrics that calculate how much money your company spends fixing bugs and other errors. In this article, you’ll learn three practical metrics that you can use in the testing process. You’ll also get practical tips on how to select and implement testing metrics.

Overview of different metrics

 

Defect Detection Percentage
Defect Detection Percentage (DDP) is a metric you can use to measure the proportion of defects found during testing compared with the number found after testing is complete.

[Figure: Testing Defects]

This is a measure of the quality of testing – your company’s testing process has not been particularly effective if users find most of the errors after you launch the system, rather than your development team catching them during the testing phase. DDP allows you to compare the quality of testing before and after you make a change in your testing approach. For instance, if you change the methodology or improve the testers’ skills through training, DDP should show a clear improvement. Measure it once, implement the changes and measure it again.

The technique works like this: count the number of defects you find during a testing period. Let’s say you found 80 defects during testing. Launch the system and continue to count defects for a given period, for example until the next deployment or for two months after deployment. Suppose you find 20 additional defects during that period. In total, therefore, 80% of the defects were found during testing, which would be considered very good for most systems.
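To make the calculation concrete, here is a minimal sketch of the DDP calculation in Python; the function name and the figures are illustrative, not part of any particular tool:

```python
def defect_detection_percentage(found_in_testing: int, found_after_release: int) -> float:
    """DDP = defects found during testing / all defects found, as a percentage."""
    total = found_in_testing + found_after_release
    if total == 0:
        return 0.0  # no defects found anywhere; treat DDP as 0 rather than divide by zero
    return 100.0 * found_in_testing / total

# The example above: 80 defects found during testing, 20 found after launch.
print(defect_detection_percentage(80, 20))  # 80.0
```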

You can use DDP in many situations: to compare the quality of a particular test level over time, for example between system testing of version 1 and system testing of version 2, or to compare quality between test levels, for example between system testing and acceptance testing. A good DDP ratio is 80-90%.

Even if you are measuring DDP and getting good marks, check your process before you congratulate yourself. If the percentage of defects found pre-launch is very high (say 95%), it might be because your tests were very good, but it could also mean that users are simply not using the system. Similarly, if you get stellar results when you compare two test levels, it could be that effective early testing is catching many errors, or that later test levels are catching them poorly.

Cost of defects
It’s very useful to know how much money each defect costs you on average. For instance, you can calculate cost in combination with DDP, as described above, which will clearly show you what you gain by raising the DDP. It’s easy to produce a reasonably reliable estimate of the cost of defects. First, choose a project and examine the last 30 defects in production. Find out which people were involved in debugging and bug fixing, and how much time each activity took. Here’s an example:

[Table: activities involved in fixing a production defect, with the role and hours for each one]

If your organization requires that each defect report or requirement change pass through a long decision chain (a change control board, a management group, and so on), you’ll probably end up with a much higher figure than this table shows. Keep in mind that the figure above is only an illustration: some problems can be solved in a few minutes, while others take weeks to fix.

The next step is to multiply the number of hours by the hourly rate your organization uses for each role. Many organizations use an hourly rate of around £90 for in-project budgeting; your best bet is to find out what rate your organization uses as a cost basis.

If the average hourly rate is £90 and fixing a production defect takes 20 hours, every bug found in production costs £1,800 to fix. The next step is to make the corresponding calculation for the other levels of testing and requirements work, from requirements management through to component testing and acceptance testing. You can use this as the basis for calculating how much defects cost at each level. Then it’s pretty straightforward to calculate how much you can gain by changing a test process so that more defects are found earlier.
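As an illustration, here is a rough sketch of the same estimate in Python. The activities and hours below are assumptions standing in for your own measured figures, not data from the article:

```python
HOURLY_RATE = 90  # £ per hour, the budgeting rate used in the example above

# Hypothetical (activity, hours) pairs for fixing one production defect;
# replace them with the activities and times measured in your own project.
activities = [
    ("User reports the defect to support", 2),
    ("Support reproduces and logs the defect", 3),
    ("Developer analyses and fixes the defect", 8),
    ("Tester retests and verifies the fix", 4),
    ("Fix is deployed to production", 3),
]

total_hours = sum(hours for _, hours in activities)
cost_per_defect = total_hours * HOURLY_RATE
print(f"{total_hours} hours x £{HOURLY_RATE}/hour = £{cost_per_defect} per production defect")
# 20 hours x £90/hour = £1800 per production defect
```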

Defect leakage
The reason for measuring defect leakage is to make sure that defects are found when they should be found, i.e. as early as possible, ideally all the way back at the point where they sneak in. By identifying which test levels frequently overlook defects, you can focus your efforts on improving the right test levels. You can also identify the review steps that work better or worse than average. You can illustrate defect leakage in a table like this:

[Table: defect origins (requirements, design, code) against the test and review activities where each defect was found]

In the leftmost column, the table shows design activities from requirements to implementation, while quality assurance activities such as testing and review are arranged along the top. In reality, it would be fairly uncommon for an organization to test at every point this table shows, so customize the table according to the activities your organization actually uses.

In the example above, 30 defects were found in conjunction with the requirements review. But when you sum the defects related to requirements across all test and review activities, there’s a total of 50 requirements-related defects, which means that the team found 30 of the 50 (or 60%) during the requirements review, and the remaining 20 (40%) leaked past it. It’s not possible in the real world to find 100% of requirements defects during review, but if you conduct a detailed analysis of each defect, you can divide them into defects of various types and determine what prevented each defect from being found. That makes it possible to improve the process so that you’ll catch similar defects earlier next time, or preferably avoid introducing them altogether when you’re writing the requirements. In this example, most of the remaining requirements defects were found when the design was reviewed.

Now, look at the defects related to the application code in the example above; the team found 20 defects during code review. As before, there were a total of 50 defects, which means that 30 defects or 60% of defects were not found during code review. The majority of these defects were found during integration testing. It’s safe to conclude that the code review was a failure, and that the review methodology should be improved before the time comes for the next code review.
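A short Python sketch of how you might tally leakage from such a table follows; the two rows mirror the requirements and code examples above, and the way the leaked defects are split across later activities is an illustrative assumption consistent with the text:

```python
# Defect leakage summary. The first cell in each row is where the defects
# should ideally have been caught; everything after it counts as leakage.
defects_by_origin = {
    "Requirements": {"Requirements review": 30, "Design review": 12, "System testing": 8},
    "Code": {"Code review": 20, "Integration testing": 22, "System testing": 8},
}

for origin, found_at in defects_by_origin.items():
    total = sum(found_at.values())
    first_activity, caught_early = next(iter(found_at.items()))  # earliest QA activity
    leaked = total - caught_early
    print(f"{origin}: {caught_early}/{total} caught at {first_activity}, "
          f"{leaked} leaked ({100 * leaked / total:.0f}%)")

# Requirements: 30/50 caught at Requirements review, 20 leaked (40%)
# Code: 20/50 caught at Code review, 30 leaked (60%)
```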

The process of selecting metrics

1. Define goals and objectives
Each metric answers different questions. So the process of selecting metrics starts with determining which questions you want to answer. Things you might ask include “Did we put the right amount of time into the right steps?”, “How much do defects really cost?” and “How effective are our testing processes?” The formulation of your objectives should be SMART: specific, measurable, accepted, realistic and time-based.

2. Select metrics
Compare different metrics to each other to see which ones best correspond to the objectives. Often a combination is best.

3. Support from management and internal marketing
It’s important that management promotes the metrics internally to ensure that people accept them. While project team members frequently see metrics as a necessary evil, it’s important for the team leaders to demonstrate the long-term benefits of finding defects at the right time. It’s also important that leaders make it clear that they will not use the metrics to measure personal performance, so employees don’t sweep problems under the rug or try to game the system.

Process for introducing metrics

 

  • Align metrics to the organization
  • Customize the methodology. You may need to add a new level of review to take advantage of some metrics
  • Customize tools. In a bug tracking tool, you may need to add certain fields and ensure that everyone uses them consistently in order to capture the information you need!
  • Develop templates and examples. You need to make sure that collecting the metrics is easy. Clear examples are a must!
  • Develop routines. Define who is responsible for developing the metrics, and when and to whom they need to be reported
  • Put support in place for the people charged with gathering metrics, so that they can get it done
  • Evaluate the metrics your organization has developed
  • Complement them with additional metrics, as your needs grow

Summary

By using different metrics, you can create a solid foundation for decisions that will make your organization’s testing work more effective. If you hope to draw any meaningful conclusions, you should combine metrics and apply them on multiple occasions, not as a one-time event. Remember that each metric shows different things:

  • Defect Detection Percentage measures the quality of test levels
  • Cost of defects tells you how much mistakes cost the organization – in time and money
  • Defect leakage shows whether you can improve any of the test or review activities to find even more errors, earlier than before

Of course, this is only a selection of common metrics; there are many more metrics than those reviewed here.

About the author

Ulf Eriksson is one of the founders of ReQtest, online bug tracking software hand-built and developed in Sweden. ReQtest is the culmination of Ulf’s decades of work in development and testing. Ulf is a huge fan of Agile and counts himself as an early adopter of the philosophy, which he has abided by for a number of years in his professional life as well as in private.

Ulf’s goal is to make life easier for everyone involved in testing and requirements management, and he works towards this goal in his role as Product Owner at ReQtest, where he strives to make ReQtest easy and logical for anyone to use, regardless of their technical knowledge or lack thereof.

The author of a number of white papers and articles, mostly on the world of software testing, Ulf is also slaving over a book, which will be a compendium of his experiences in the industry. Ulf lives in Stockholm, Sweden.

As a long-standing partner of ReQtest, we are pleased to offer ReQtest’s comprehensive portfolio of testing solutions through our own expert consultancy network.
