“If we follow a risk-based testing approach, can we update the risk levels periodically?” Questions like this one from customers made me realize that risk-based testing, in its current form, is static: there is no provision for periodically re-assessing risk based on real-time test results. I felt there was a need to enhance the existing approach to assure service and business resilience. In fact, I have seen an evident shift from Testing/Quality Assurance to Business Assurance as customers realize the strategic and operational implications of quality testing: it directly impacts brand value, market share and topline, while also enabling quick completion of projects at optimal cost.
I therefore put together a testing methodology that focuses on two key parameters for risk assessment. The first, business impact, is based on the criticality of a component or module in the application and remains static for a particular test case. The second, probability of failure, must be continuously re-assessed by analyzing the history of test execution results. I have classified each of these factors into three levels: high, medium and low, which yields nine distinct risk profiles. The profile where both impact and probability of failure are high carries the greatest risk; the one where both are low carries the least.
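To make the two parameters concrete, here is a minimal sketch of how a risk profile could be derived. The failure-rate thresholds and the fallback for cases with no history are my own illustrative assumptions, not part of the methodology itself:

```python
from enum import IntEnum

class Level(IntEnum):
    """The three classification levels; IntEnum allows ordering comparisons."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def probability_of_failure(history, high_rate=0.4, low_rate=0.1):
    """Derive the dynamic probability-of-failure level from the history of
    execution results. The bucketing thresholds are assumed values."""
    if not history:
        return Level.MEDIUM  # no data yet: assume medium (an assumption)
    failure_rate = history.count("FAIL") / len(history)
    if failure_rate >= high_rate:
        return Level.HIGH
    if failure_rate <= low_rate:
        return Level.LOW
    return Level.MEDIUM

def risk_profile(business_impact, history):
    """Combine the static business impact with the dynamic probability of
    failure into one of the nine risk profiles."""
    return (business_impact, probability_of_failure(history))

# A test case on a critical module that has failed in 3 of its last 5 runs
# lands in the highest-risk profile:
profile = risk_profile(Level.HIGH, ["PASS", "FAIL", "FAIL", "PASS", "FAIL"])
# → (Level.HIGH, Level.HIGH)
```

Because only the second element of the profile depends on execution history, re-running `risk_profile` after each test cycle gives exactly the periodic re-assessment the customers asked about.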
The next step is selecting an optimal set of test cases. The choice of test cases for the test suite should be based on whether the test requirement is impacted and, if not, whether it depends on an impacted requirement. The criticality of the test requirement is the third factor. I suggest defining a set of reference rules based on the risk appetite and the combination of the levels associated with these three factors. Depending on the risk appetite, you can select or eliminate lower-risk-profile test cases in the rules table. I also recommend another round of optimization at run time, based on the actual quality of the build being tested. Here again, you need a set of rules that enable dynamic decision-making on test completion.
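One way the three-factor rules table could look in code is sketched below. The specific rule (impacted requirements are always selected; dependents are selected only if their criticality clears the risk-appetite threshold) is my own illustrative reading, and a real rules table would be project-specific:

```python
# Ordered levels for criticality and risk appetite: low < medium < high.
_RANK = {"low": 1, "medium": 2, "high": 3}

def select_test_case(impacted, depends_on_impacted, criticality, risk_appetite):
    """Sketch of a selection rule based on the three factors.

    - A test case whose requirement is directly impacted is always selected.
    - A dependent test case is selected only if its criticality is at least
      the risk-appetite threshold: a low appetite keeps even low-criticality
      dependents, a high appetite keeps only high-criticality ones.
    - Everything else is eliminated from the suite.
    """
    if impacted:
        return True
    if depends_on_impacted:
        return _RANK[criticality] >= _RANK[risk_appetite]
    return False

# A medium-criticality dependent is kept under a medium risk appetite
# but dropped under a high one:
kept = select_test_case(False, True, "medium", "medium")    # → True
dropped = select_test_case(False, True, "medium", "high")   # → False
```

Expressing the rules as a single pure function makes them easy to review with stakeholders and to re-run whenever the risk appetite changes.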
With this new approach, objective decision-making becomes possible on the completion of test execution. You will also need a test management system to access requirement and test case details and to upload the generated set of test cases. The two-level optimization methodology should help you reduce the regression test suite by at least 30%. Regression-specific defects will also be fewer in number owing to the periodic re-assessment, and you can assess software quality with greater confidence during the initial stages of testing.
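The run-time, second-level rules that drive the completion decision could be sketched as follows. The specific stopping condition and the 95% pass-rate threshold are assumptions I am using for illustration; the post itself leaves the rule set open:

```python
def can_stop_testing(results_by_risk, min_pass_rate=0.95):
    """Dynamic decision rule on test completion (illustrative sketch).

    `results_by_risk` maps a risk bucket ("high", "medium", "low") to the
    list of "PASS"/"FAIL" outcomes for that bucket in the current build.
    Testing may stop once every high-risk test case has passed and the
    overall pass rate clears the (assumed) threshold.
    """
    high_risk_ok = all(r == "PASS" for r in results_by_risk.get("high", []))
    all_results = [r for rs in results_by_risk.values() for r in rs]
    if not all_results:
        return False  # nothing executed yet: no objective basis to stop
    pass_rate = all_results.count("PASS") / len(all_results)
    return high_risk_ok and pass_rate >= min_pass_rate

# A build where all high-risk cases pass and everything else passes too:
done = can_stop_testing({"high": ["PASS", "PASS"], "medium": ["PASS"] * 18})
# → True
```

Because the rule consumes the same execution results that feed the probability-of-failure re-assessment, both levels of optimization stay driven by one stream of real-time data.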
I am confident that this deterministic approach will be much more dynamic than the current intuitive approach to risk-based testing. What do you think? Do you have any other ideas that could be incorporated into this methodology?