Overview: Items to write in the software test report
Write test results and test evaluations in the software test report
When the test work is complete, summarize the results in a test report. The individual defects found during testing are managed in a defect-tracking system or an Excel sheet that serves as the defect database, but that information is far too voluminous to use directly, so it is common to create a test report that summarizes the test as a whole in order to make effective use of the test results. In this test report, organize and write down two things: the test results and the test evaluation.
Test results are the basic information on software quality for the release decision
The test results describe the status of the defects and bugs found during testing: what kinds of bugs were found in the software under test, how many of them were fixed, which were postponed, how the bug-detection trend converged, and so on. Organize this information for judging the quality of the software and compile it as the test results.
These test results are the most important information for deciding whether or not to release the software, so write them as quantitatively and concretely as possible. In particular, for residual bugs that were carried over because they could not be fixed in time, and for vanished bugs that were detected once but could not be reproduced afterwards, describe the conditions under which they occur, how frequently they occur, and what the impact would be if they occurred in the field, so that the reader can easily understand what is known about them.
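As an illustration, this kind of summary can be pulled out of a defect-tracker export with a few lines of script. The sketch below uses hypothetical field names (id, severity, status, condition, frequency) and made-up records; adapt it to whatever your tracking system or Excel sheet actually exports.

```python
# Minimal sketch: summarize a defect-tracker export into the counts a test
# report needs, and list the details required for residual / vanished bugs.
# Field names and records are hypothetical.
from collections import Counter

defects = [  # e.g. rows exported from the tracking system or Excel sheet
    {"id": 101, "severity": "major", "status": "fixed"},
    {"id": 102, "severity": "minor", "status": "postponed",
     "condition": "only when the config file is read-only", "frequency": "1/50 runs"},
    {"id": 103, "severity": "major", "status": "not reproducible",
     "condition": "observed once during the long-run test", "frequency": "1/200 runs"},
]

status_counts = Counter(d["status"] for d in defects)
severity_counts = Counter(d["severity"] for d in defects)

print("Defects by status  :", dict(status_counts))
print("Defects by severity:", dict(severity_counts))

# Residual and vanished bugs need their conditions, frequency and impact spelled out.
for d in defects:
    if d["status"] in ("postponed", "not reproducible"):
        print(f'#{d["id"]} ({d["severity"]}): {d["condition"]}, frequency {d["frequency"]}')
```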
Along with the information on residual and vanished bugs, what matters for the release decision is the sufficiency of the testing: are there few enough latent bugs that have not yet been found? Information such as the test completion rate against the test plan and the estimated number of residual bugs based on a reliability growth curve can be used to judge whether testing of sufficient quantity and quality was performed to flush out latent bugs. Again, write this as quantitatively and concretely as possible.
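A rough sketch of both figures is shown below: the completion rate is a simple ratio, and the residual-bug estimate fits a logistic reliability growth curve to the cumulative bug counts. The numbers are illustrative only, and a logistic curve is just one common choice of growth model.

```python
# Minimal sketch, assuming daily cumulative bug counts and a logistic growth
# model; all numbers below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

planned_items, executed_items = 1200, 1140
completion_rate = executed_items / planned_items          # test completion rate vs. plan

days = np.arange(1, 21)                                    # test days 1..20
cumulative_bugs = np.array([2, 5, 9, 15, 22, 30, 37, 43, 48, 52,
                            55, 58, 60, 61, 62, 63, 63, 64, 64, 64])

def logistic(t, K, r, t0):
    """Logistic reliability growth curve: K is the estimated total number of latent bugs."""
    return K / (1.0 + np.exp(-r * (t - t0)))

(K, r, t0), _ = curve_fit(logistic, days, cumulative_bugs,
                          p0=[cumulative_bugs[-1], 0.5, days.mean()])

estimated_residual = K - cumulative_bugs[-1]
print(f"Test completion rate    : {completion_rate:.1%}")
print(f"Estimated total bugs (K): {K:.1f}")
print(f"Estimated residual bugs : {estimated_residual:.1f}")
```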
Test evaluation improves the testing process and grows the team
While the test results show the quality of the software under test, the test evaluation assesses the quality of the test plan and the work that was performed. Compare the test plan with the test results from the viewpoints of test content, schedule planning, and test efficiency, and write an evaluation of whether the planned test content was valid and whether the actual testing achieved the planned results. The test evaluation, by spelling out what went well and what went badly in this test, is also important information for improving the process of future test work. There are various ways to write a test evaluation, but it is easier to understand if you write it from the following three perspectives.
- Evaluation of test content: Was the quantity and quality of the testing optimal for guaranteeing the quality of this software?
- Evaluation of the test schedule: Was the test schedule appropriate?
- Evaluation of test efficiency: Was the cost performance of the test work appropriate?
(1) Evaluation of test content
When you plan a test, you first decide on a test policy, that is, what to focus on, and then plan the kinds of tests, their scale, and the equipment and personnel to bring in, in line with that policy. Bugs are detected when the tests are actually run according to the plan, and the evaluation of the test content asks whether the bugs detected in the test match the test policy considered at the planning stage. Compare the test density and bug density between the plan and the actual results, and also against past results, to evaluate the quality of this test's content.
First, evaluate the amount of testing: was the number of test items reasonable? Based on the number of test items for the whole test and for each type of test, the number of bugs detected by those tests, and the resulting bug density, evaluate whether the planned number of test items was valid. After evaluating the amount of testing, also evaluate whether the priority areas of the test were set appropriately. Priority areas may be chosen based on the functions of the software, or by assuming the areas where bugs are likely to be latent. Evaluate whether the bugs in the priority areas identified in the test plan were sufficiently detected.
The purpose of the test may also differ: sometimes the emphasis is on comprehensively finding simple bugs, and sometimes on finding bugs that occur under complicated conditions or only once in hundreds of runs and are therefore hard to detect. Whatever the thinking behind the priority areas, evaluate the validity of the test plan from the viewpoint of whether the bugs in the priority areas set at planning time were detected as originally intended, based on the test man-hours invested and the number of bugs found.
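As a concrete (if simplified) example, the comparison of test density and bug density per area can be tabulated as below. The areas, sizes, and counts are made up; a real report would pull these from the test management tool and compare them against planned values and past project results.

```python
# Minimal sketch: planned vs. actual test density (items/KLOC) and bug density
# (bugs/KLOC) per functional area. All figures are illustrative.
areas = {
    # area: (size_kloc, planned_items, executed_items, bugs_found, priority)
    "payment":  (12.0, 480, 500, 21, "high"),
    "settings": ( 8.0, 160, 150,  4, "low"),
}

for name, (kloc, planned, executed, bugs, prio) in areas.items():
    planned_density = planned / kloc     # planned test density
    actual_density  = executed / kloc    # actual test density
    bug_density     = bugs / kloc        # detected bug density
    print(f"{name:9s} [{prio:4s}] "
          f"test density plan/actual: {planned_density:5.1f}/{actual_density:5.1f} items/KLOC, "
          f"bug density: {bug_density:4.1f} bugs/KLOC")
```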
(2) Evaluation of the test schedule
It is also a good idea to evaluate your test schedule: did the test work proceed as planned, was it delayed, or did it finish earlier than planned, and was progress monitored and managed as planned during the test? If any part of the test schedule did not go according to plan, evaluate the quality of the schedule, for example whether something was missed in the estimate at planning time.
The test schedule often deviates from the plan, especially from the second half of the test through to its end. There is a risk that the test process, coming at the final stage of software development, falls behind schedule for various reasons: more bugs are found than expected, specifications are changed during the test, tests have to be redone, and so on. Evaluate the test schedule from the viewpoint of whether progress was managed well and the schedule was kept, and whether a necessary and sufficient risk buffer was built into the schedule plan in the first place.
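A minimal sketch of that check is shown below: for each test phase, compare the planned and actual end dates and see whether the slip fit within the risk buffer set aside in the plan. The phases, dates, and buffers are illustrative.

```python
# Minimal sketch: schedule slip per test phase vs. the planned risk buffer.
# Phase names, dates and buffers are illustrative.
from datetime import date

phases = {
    # phase: (planned_end, actual_end, buffer_days)
    "integration test": (date(2021, 6, 18), date(2021, 6, 22), 3),
    "system test":      (date(2021, 7, 9),  date(2021, 7, 9),  5),
}

for name, (planned, actual, buffer) in phases.items():
    slip = (actual - planned).days
    verdict = "within buffer" if slip <= buffer else "exceeded buffer"
    print(f"{name:17s}: slip {slip:+d} day(s), buffer {buffer} day(s) -> {verdict}")
```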
(3) Evaluation of test efficiency
Software testing divides into automated testing and manual testing. The more tests are automated, the better the test efficiency (number of test items / man-hours of test workers) of a single test run. On the other hand, automated testing first requires development effort for the automation itself, and then maintenance man-hours to keep up with the changing software. Evaluate whether the ratio of automated to manual testing gave the best cost performance for this test.
Basically, one test procedure confirms one test item, but with a well-designed procedure you can confirm multiple test items in a single procedure, and the more you do this the more efficient the testing becomes. This mainly affects the efficiency of manual tests, so it is also worth evaluating test efficiency from the viewpoint of how many items were confirmed on average per test procedure in this test.
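The efficiency figures mentioned above can be computed as in the following sketch: items per man-hour for automated tests (including automation development and maintenance effort) versus manual tests, plus the average number of items confirmed per manual procedure. All numbers are made up for illustration.

```python
# Minimal sketch: test efficiency figures for automated vs. manual testing.
# All counts and hours are illustrative.
automated = {"items": 900, "dev_hours": 120, "maintenance_hours": 30, "run_hours": 10}
manual    = {"items": 300, "procedures": 180, "hours": 150}

auto_total_hours = (automated["dev_hours"]
                    + automated["maintenance_hours"]
                    + automated["run_hours"])
auto_efficiency     = automated["items"] / auto_total_hours   # items per man-hour
manual_efficiency   = manual["items"] / manual["hours"]
items_per_procedure = manual["items"] / manual["procedures"]

print(f"Automated: {auto_efficiency:.1f} items/man-hour (incl. development and maintenance)")
print(f"Manual   : {manual_efficiency:.1f} items/man-hour")
print(f"Manual   : {items_per_procedure:.2f} items confirmed per procedure on average")
```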
Finally, evaluate the efficiency of the test in terms of the environments and equipment prepared for it and their utilization rate. Looking at cost alone, a 100% utilization rate of environments and equipment seems ideal, but at 100% the opposite problem appears: tests end up waiting for free environments and equipment, so a 100% utilization rate is not unconditionally good. Generally, when the utilization rate of environments and equipment is around 70% to 90%, the overall test efficiency, including the test personnel and the schedule plan, tends to be good. Look back on the test and evaluate this test's environments and equipment utilization rate.
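A small sketch of that check, assuming you have logged the used and available hours per piece of equipment, might look like this; the 70% to 90% band is taken from the rule of thumb above, and the hours are illustrative.

```python
# Minimal sketch: flag equipment whose utilization falls outside the
# 70% to 90% band suggested above. Hours are illustrative.
equipment_hours = {"rig-A": (152, 176), "rig-B": (171, 176)}  # (used, available)

for name, (used, available) in equipment_hours.items():
    rate = used / available
    note = "ok" if 0.70 <= rate <= 0.90 else "check: possible idle cost or test-wait queue"
    print(f"{name}: utilization {rate:.0%} -> {note}")
```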
When the test is over, organize the results and connect them to the next test
Testing is the final phase before a software release, so you are often caught up in the scramble just before the release; in some cases you may be testing right up until the release decision. Even in such cases, once the test ends, pull the test results and the test evaluation together firmly in the test report and connect them to improving the next test.
In the following articles, I will introduce each item of the test report in a little more concrete detail, based on the experience of Father Gutara, so please have a look if you are interested.