Comparing White-box and Black-box Test Prioritization
English
Henard, Christopher [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Papadakis, Mike [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
Le Traon, Yves [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)]
2016
38th International Conference on Software Engineering (ICSE'16)
Yes
14-05-2016 to 22-05-2016
Austin, TX
USA
[en] Regression Testing ; White-box ; Black-box
[en] Although white-box regression test prioritization has been well studied, the more recently introduced black-box prioritization approaches have neither been compared against each other nor against well-established white-box techniques. We present a comprehensive experimental comparison of several test prioritization techniques, covering both well-established white-box strategies and more recently introduced black-box approaches. We found that Combinatorial Interaction Testing and the diversity-based techniques (Input Model Diversity and Input Test Set Diameter) perform best among the black-box approaches. Perhaps surprisingly, we found little difference between black-box and white-box performance (at most a 4% difference in fault detection rate). We also found that the overlap between the faults detected by black-box and white-box techniques is high: the first 10% of the prioritized test suites already agree on at least 60% of the faults found. These are positive findings for practicing regression testers who may not have the source code available, which makes white-box techniques inapplicable. We also found evidence that both black-box and white-box prioritization remain robust over multiple system releases.
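The diversity-based black-box techniques named in the abstract share a common idea: order the test suite so that each newly selected test is as different as possible from those already chosen. The sketch below is an illustration of that general greedy scheme only, not the paper's exact algorithms; the feature-set representation of test inputs, the Jaccard distance, and all names (prioritize_by_diversity, jaccard_distance) are assumptions made for the example.

```python
# Illustrative sketch of greedy diversity-based test prioritization.
# NOT the algorithms evaluated in the paper: test inputs are modeled here
# as sets of tokens/features, and Jaccard dissimilarity is assumed as the
# distance measure purely for demonstration.

def jaccard_distance(a, b):
    """Return 1 - |a ∩ b| / |a ∪ b|; 0 means identical sets, 1 means disjoint."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def prioritize_by_diversity(tests):
    """Greedily order tests so each next pick maximizes its minimum
    distance to the tests already selected (a max-min diversity heuristic)."""
    remaining = dict(tests)  # test name -> feature set
    # Seed with the largest input (an arbitrary but deterministic choice).
    first = max(remaining, key=lambda name: len(remaining[name]))
    ordered = [first]
    del remaining[first]
    while remaining:
        def min_dist_to_selected(name):
            return min(jaccard_distance(remaining[name], tests[sel]) for sel in ordered)
        best = max(remaining, key=min_dist_to_selected)
        ordered.append(best)
        del remaining[best]
    return ordered

if __name__ == "__main__":
    # Hypothetical test suite: each test input reduced to a feature set.
    suite = {
        "t1": {"login", "admin"},
        "t2": {"login", "guest"},
        "t3": {"checkout", "cart", "guest"},
        "t4": {"search"},
    }
    print(prioritize_by_diversity(suite))  # e.g. ['t3', 't1', 't4', 't2']
```

Under this scheme, tests exercising very similar inputs are pushed toward the end of the ordering, which is the intuition behind measuring the rate of fault detection achieved by the earliest fraction of the prioritized suite.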