Test plan・Test metrics (Part 2): Test result management
- 1. Test results are managed by two types of metrics: implementation status and deliverables
- 2. Metrics used to manage test implementation status
- 3. Metrics used to manage test deliverables
- 4. Tracking test results is paramount in test management
- 5. Determine the test management interval and division of roles in the test plan
- 6. After test management comes the test environment
Test results are managed by two types of metrics: implementation status and deliverables.
Software testing is invisible, so management is important. In the previous article we covered schedule progress management in testing, so in this article we introduce the metrics used to manage test results. Later in the article we also cover test result tracking, the most important part of managing test results.
Test result management falls into two categories: management of the test implementation status and management of test deliverables. Management of the test implementation status can also serve schedule progress management, but since result management is the main focus here, we use this classification. Management of test deliverables means managing the defects detected by testing. Let's look at each in turn.
Metrics used to manage test implementation status
When the test work starts, multiple testers proceed with the actual tests all at once, following the test items and the test procedure manual. Some test items can be executed smoothly, while others cannot.
A test item may be unexecutable because, for example, an existing bug prevents the test procedure from advancing, or the software feature under test has not been implemented yet. If you do not manage the test implementation status well, the test often will not proceed as planned. Now let's look at the metrics that can be used for the test execution status.
(1) Test block rate
A test may be unexecutable for various reasons, such as a function not yet being implemented or another bug preventing the test procedure from proceeding. This state is referred to as the test being blocked. How states are defined for each test item can vary, but a simple definition uses the following four states.
- 1. Not tested (the test has not been run because its scheduled date is still ahead)
- 2. Test completed (test result is OK)
- 3. Waiting for retest (the test result was NG, so it is waiting to be rerun on the fixed software)
- 4. Blocked (waiting for the blocking condition to be released because the test could not be performed)
States 2 and 3 mean the test has been carried out; state 4 means the test could not be carried out even though its scheduled date has arrived. The test block rate is the number of blocked test items (state 4) divided by the number of test items that were scheduled to be executed by that date. The formula is as follows.
Test block rate (%) = Number of test items currently blocked × 100 /
(Number of test items completed so far +
Number of test items waiting for retest so far +
Number of test items currently blocked)
If the test proceeds as planned, the block rate is zero, so the target value for this metric is zero. In reality, however, tests get blocked for various reasons. If the block rate is low, you can simply wait for the individual blocking conditions to be released, but if it exceeds 15%, it is better to consider countermeasures.
(2) Retest implementation rate
If a test result is NG, you normally take measures for the NG item and then retest. Retests become necessary for various reasons, such as software bugs or an inadequate test environment, but a retest is a retest all the same.
Performing a retest costs roughly twice the originally planned amount (labor, equipment time, etc.) for that test item, which reduces the efficiency of the test work. Monitoring the retest implementation rate therefore lets you judge how much the efficiency of the test work is deteriorating. The formula for the retest implementation rate is as follows.
Retest implementation rate (%) = Number of test items retested so far × 100 /
(Number of test items completed so far +
Number of test items waiting for retest so far +
Number of test items currently blocked)
If the same test item is retested three or four times, it may be better to use the total number of retests as the numerator. Judge this based on how retests actually occur in your organization.
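A minimal sketch of the retest rate formula, assuming the counts are already tallied; the parameter names are my own, not from the original.

```python
def retest_rate(retested, completed, awaiting_retest, blocked):
    """Retest implementation rate (%). Pass the total retest count as
    `retested` if the same item may be retested several times."""
    denominator = completed + awaiting_retest + blocked
    if denominator == 0:
        return 0.0
    return retested * 100.0 / denominator

# 15 retests against 100 scheduled items -> 15% of test effort spent redoing work
print(retest_rate(retested=15, completed=80, awaiting_retest=10, blocked=10))  # 15.0
```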
Metrics used to manage test deliverables
The test deliverables (outputs) are the test results, and among them the defects that caused tests to be NG are the most important, so these defects must be managed properly. Now let's look at the metrics for managing the defects found in testing.
(1) Number of defects detected
The number of defects detected is the most frequently used metric for test deliverables. By tallying how many defects are detected each day and plotting the counts on a graph, you can judge, at least qualitatively, whether the test is progressing as planned.
Because these are defects, the cause may be a software bug, a mistake in the test data, a mistake in the test procedure, and so on. Be careful: the number of defects detected is used to manage the quality of the testing itself (whether good-quality tests are being performed), not the quality of the software under test. The quality of the software is managed with the bug metrics that appear later.
(2) Number of bugs detected
Since one of the purposes of software testing is to detect latent bugs and improve quality through debugging, the number of detected bugs is an important management item. With testing experience, you can roughly estimate how many bugs will be found, based on past tests of similar software and taking into account the competence of the test team, the members of the design team, the difficulty of the development, and so on. Whether the number of bugs detected so far is high or low relative to this estimate is a metric that indicates both the quality of the software under test and the quality of the testing itself.
When the scale of software development differs between projects, it may be better to use bug density, normalized by dividing by the KLOC count of the developed or modified source code, as the metric.
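The normalization is a simple division; the numbers below are made-up examples showing how two projects of different size become comparable.

```python
def bug_density(bugs_detected, kloc):
    """Bugs per 1,000 lines of developed/modified source code (KLOC)."""
    return bugs_detected / kloc

# A small and a large project with very different raw bug counts
# can have the same bug density once normalized:
print(bug_density(45, 30.0))   # 1.5 bugs/KLOC
print(bug_density(120, 80.0))  # 1.5 bugs/KLOC
```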
(3) Bug detection rate (flattening of the bug detection curve)
The bug detection rate is a metric used to decide how long a test should continue, in other words, whether it is okay to end the test. It is calculated as the number of bugs found divided by the number of test items or by the test man-hours. When the bug detection rate falls below a judgment value, the test is ended on the grounds that a sufficient number of latent bugs have been identified and few undetected latent bugs remain.
For example, if the bug detection rate falls below 0.2 bugs per 10 man-days, it takes 50 man-days of testing to find one bug, so you might judge that it is okay to finish the test. The judgment value to use naturally depends on the quality level required of the software, but judging the end of the test in this way is also necessary.
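The end-of-test judgment described above can be sketched as a threshold check; the default threshold encodes the article's example of 0.2 bugs per 10 man-days, but any judgment value can be passed in.

```python
def may_end_test(bugs_found, man_days, threshold=0.2 / 10):
    """Return True when the bug detection rate (bugs per man-day) has
    fallen below the judgment value (default: 0.2 bugs per 10 man-days)."""
    rate = bugs_found / man_days
    return rate < threshold

# 1 bug in the last 60 man-days -> rate ~0.017 < 0.02, test may end
print(may_end_test(bugs_found=1, man_days=60))  # True
# 3 bugs in 60 man-days -> rate 0.05, keep testing
print(may_end_test(bugs_found=3, man_days=60))  # False
```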
Note that curves called Gompertz curves, bug curves, or reliability growth curves are often used to judge the end of a test. They are created by plotting the cumulative number of detected bugs on the vertical axis against the number of test items or the amount of testing on the horizontal axis. The slope of this curve corresponds to the bug detection rate. With a Gompertz curve or bug curve, you judge whether it is okay to finish the test by whether the curve has flattened out, which amounts to the same thing as the bug detection rate having become small enough.
(4) Bug ratio (ratio of bugs among defects)
Half of the purpose of software testing is to find latent bugs in the software and run a debug cycle to improve the software's quality. The bug ratio, that is, what percentage of the detected defects were actual software bugs, is therefore a metric that monitors the quality of the test work. Expressed as a formula, it is as follows.
Bug ratio (%) = Cumulative number of bugs detected so far × 100 /
Cumulative number of defects detected so far
If many defects stem from causes other than software bugs, such as incorrect test procedures or incorrect test data, you should assume something is wrong with the test quality and take measures as soon as possible.
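The bug ratio formula as a sketch; the example figures are invented for illustration.

```python
def bug_ratio(bugs_cumulative, defects_cumulative):
    """Percentage of detected defects that turned out to be software bugs."""
    if defects_cumulative == 0:
        return 0.0
    return bugs_cumulative * 100.0 / defects_cumulative

# 60 of 80 defects were real bugs; the other 20 came from bad test
# data or wrong procedures, which points at test-quality problems.
print(f"{bug_ratio(60, 80):.0f}%")  # 75%
```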
(5) Bug retention status
Detected bugs are handed over to the software designers and implementers for fixing. The fixed software is then tested again, and if the result is OK, handling of that bug is finished.
When few bugs are detected, the designers and implementers have few bugs to deal with, so found bugs get fixed one after another. But when new bugs are detected faster than the designers and implementers can debug, the number of unaddressed bugs keeps accumulating. This is the beginning of the so-called death march.
The number of detected bugs that have not yet been addressed, i.e. whose processing is stagnant, is an important metric for judging whether the debugging work in the final stage of software development is going well. If the debugging capacity of the design/implementation team is 10 bugs per day and there are 100 stagnant bugs, it will take 10 days to finish debugging. If the release date arrives before then, you cannot meet it unless you add people to the design and implementation team to raise debugging capacity. Manage the bug retention status carefully so that such a situation does not occur.
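The backlog arithmetic in the paragraph above is simple but worth making explicit; note the sketch assumes no new bugs arrive and constant debugging capacity, which is optimistic near a release.

```python
def days_to_clear_backlog(open_bugs, debug_capacity_per_day):
    """Rough estimate of days to clear the stagnant-bug backlog,
    assuming no new bugs arrive and constant debugging capacity."""
    return open_bugs / debug_capacity_per_day

# 100 stagnant bugs at 10 bugs/day -> 10 days, as in the text.
# If the release is in 7 days, capacity must be raised.
print(days_to_clear_backlog(100, 10))  # 10.0
```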
Tracking test results is paramount in test management
As mentioned in the discussion of bug retention status above, a bug found in a test is debugged, the fix is reconfirmed by testing, and only then is its handling finished. During this time the bug passes through each stage of the debug cycle: (1) bug detection, (2) reproduction of the bug, (3) investigation of the cause, (4) correction of the bug, (5) confirmation of the correction's effect, (6) checking for secondary defects, (7) merging the correction into the master tree, and (8) retesting with the debug version of the software that incorporates the correction.
Following each bug detected in testing, that is, knowing which stage of the debug cycle it has reached and who is currently working on it, is called bug tracking or bug tracing. If the number of bugs is small and the design/implementation team and the test team share an office, you can track bugs with a bug list written on the office whiteboard. On the other hand, if the number of bugs is large and the design/implementation team and the test team are at separate locations, bug tracking is done with some kind of bug tracking system.
In any case, deciding with what tools, how often, and by whom bug tracking (or tracing) is done is the most important part of test management, so be sure to settle it when planning your tests.
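The eight debug-cycle stages listed above could be tracked with a minimal status record like the hypothetical sketch below; real projects would use a bug tracking system (Bugzilla, Jira, etc.) rather than anything this simple.

```python
# Hypothetical, minimal bug-tracking record for the eight debug-cycle stages.
from dataclasses import dataclass

STAGES = [
    "detected", "reproduced", "cause investigated", "fixed",
    "fix confirmed", "secondary defects checked",
    "merged to master", "retested with debug build",
]

@dataclass
class Bug:
    bug_id: int
    stage: str = "detected"
    assignee: str = ""

    def advance(self, assignee):
        """Move the bug to the next debug-cycle stage and record who owns it."""
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        self.assignee = assignee

bug = Bug(bug_id=101)
bug.advance("designer A")   # reproduced
bug.advance("designer A")   # cause investigated
print(bug.stage)  # cause investigated
```

A record like this answers the two tracking questions the article names: which stage each bug has reached, and who is working on it now.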
Determine the test management interval and division in the test plan
How often tests are managed, i.e. the management interval, should also be firmly determined at the test management planning stage. If the same work is repeated every day, as on a factory production line, it is important to monitor day by day and manage in detail so that even small deviations from the plan are not overlooked. Software testing, however, is a series of different tasks each day even when a work procedure manual exists, so some deviation from the plan is inevitable.
For software test management, therefore, it is often sufficient to monitor the deviation from the plan every 3 or 5 business days, and to take recovery measures when a delay has persisted for two or three consecutive checks with no sign of recovery.
Note that this kind of test management requires analysts, who collect results and monitor and analyze deviations from the plan, and countermeasure instructors, who determine from the analysis whether a problem exists and, when necessary, direct measures to correct the deviation from the plan. It is important to clearly decide at the test planning stage who these analysts and countermeasure instructors will be.
After test management comes the test environment
In this article and the previous one, I introduced the test management items to write into the test plan. Did you get the picture? Now that test management is in place, a subsequent article will cover test environments.