Keywords: Automated test-case generation; Automatic test case generation; Bug detection; Bug reports; Dynamic symbolic execution; Labour-intensive; Search-based software testing; Test case generation; Test inputs; Unit tests; Software
Abstract:
[en] INTRODUCTION: The pursuit of automating software test case generation, particularly for unit tests, has become increasingly important due to the labor-intensive nature of manual test writing [6]. However, a significant challenge in this domain is the inability of automated approaches to generate relevant inputs, which compromises the efficacy of the resulting tests [6]. In this study, we address the critical issue of enhancing the quality of automated test case generation. We demonstrate that bug reports contain valuable, relevant inputs and show their potential for improving software testing. To harness these inputs effectively, we introduce BRMiner, a novel tool that extracts relevant input values from bug reports. Our approach includes modifying EvoSuite, a prominent automated test case generation tool, so that it can incorporate these extracted inputs. Through a systematic evaluation on the Defects4J benchmark, we assess the impact of BRMiner's inputs on test adequacy and effectiveness, focusing on code coverage and bug detection. This study not only establishes the relevance of bug report inputs but also offers a practical solution for leveraging them to enhance automated test case generation in real-world software projects.

In automated test case generation, methods such as Dynamic Symbolic Execution (DSE) [2] and Search-Based Software Testing (SBST) [3] have been prevalent. Despite their strengths, these techniques often struggle to generate contextually appropriate and realistic inputs [6]. This study therefore emphasizes the untapped potential of bug reports as a source of such inputs. Bug reports, rich in valid, human-readable inputs, are particularly well suited to improving test coverage and detecting bugs. BRMiner automates the extraction of relevant test inputs from bug reports, and these inputs are fed into EvoSuite, a leading SBST tool, significantly improving the efficiency of test case generation. The study also showcases the benefit of extending EvoSuite with support for external inputs, particularly those drawn from bug reports, and of combining this capability with DSE.

Related research in automatic test case generation provides context for our work. Unlike BRMiner, TestMiner [6] mines literals from existing tests to obtain domain-specific values, while K-Config [4] and LeRe [7] use bug report information for compiler testing, which diverges from our setting. PerfLearner [1], which mines bug reports to extract execution commands for performance bugs, also differs from BRMiner's focus on bug detection.
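To make the approach concrete, the following minimal Java sketch illustrates the kind of literal extraction described above: it pulls quoted strings and numeric literals out of free-form bug report text so they can later seed a test generator's constant pool. The class name and regular expressions are illustrative assumptions made for this summary, not BRMiner's actual implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical sketch (not BRMiner's actual code): extract candidate
    // test inputs from free-form bug report text.
    public class BugReportInputExtractor {

        // Quoted strings ("..." or '...') often carry the exact failing input.
        private static final Pattern QUOTED =
                Pattern.compile("\"([^\"]*)\"|'([^']*)'");
        // Standalone integer or decimal literals, including negative values.
        private static final Pattern NUMBER =
                Pattern.compile("(?<![\\w.])-?\\d+(?:\\.\\d+)?(?![\\w.])");

        public static List<String> extractCandidates(String reportText) {
            List<String> candidates = new ArrayList<>();
            Matcher quoted = QUOTED.matcher(reportText);
            while (quoted.find()) {
                // Group 1 matches double-quoted text, group 2 single-quoted.
                candidates.add(quoted.group(1) != null ? quoted.group(1)
                                                       : quoted.group(2));
            }
            Matcher number = NUMBER.matcher(reportText);
            while (number.find()) {
                candidates.add(number.group());
            }
            return candidates;
        }

        public static void main(String[] args) {
            String report = "StringIndexOutOfBoundsException when calling "
                    + "substring with index 42 on input \"héllo\"";
            // Prints [héllo, 42]: values a seeded generator could reuse.
            System.out.println(extractCandidates(report));
        }
    }

In the pipeline the abstract describes, such candidate values would be handed to the modified EvoSuite so that its search can draw on realistic, human-provided constants rather than purely random ones.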
Plein, Laura ; University of Luxembourg, Luxembourg
KABORE, Abdoul Kader ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SNT Office > Project Coordination
HABIB, Andrew ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust > TruX > Team Tegawendé François d A BISSYANDE
KLEIN, Jacques ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Lo, David ; Singapore Management University, Singapore
Sponsors: ACM; ACM Special Interest Group on Software Engineering; Centro Cultural de Belém; Faculty of Engineering of University of Porto; IEEE Computer Society; IEEE Technical Council on Software Engineering; INESC-ID; et al.
Funding (details):
This work is supported by funding from the Fonds National de la Recherche Luxembourg (FNR) under the Aides à la Formation-Recherche (AFR) (grant agreement No. 17185670).
References:
[1] Xue Han, Tingting Yu, and David Lo. 2018. PerfLearner: Learning from bug reports to understand and generate performance test frames. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. 17-28.
[2] James C. King. 1976. Symbolic execution and program testing. Commun. ACM 19, 7 (1976), 385-394.
[3] Annibale Panichella, Fitsum Meshesha Kifetew, and Paolo Tonella. 2017. Automated test case generation as a many-objective optimisation problem with dynamic selection of the targets. IEEE Transactions on Software Engineering 44, 2 (2017), 122-158.
[4] Md Rafiqul Islam Rabin and Mohammad Amin Alipour. 2021. Configuring test generators using bug reports: A case study of GCC compiler and Csmith. In Proceedings of the 36th Annual ACM Symposium on Applied Computing. 1750-1758.
[5] Sina Shamshiri, René Just, José Miguel Rojas, Gordon Fraser, Phil McMinn, and Andrea Arcuri. 2015. Do automatically generated unit tests find real faults? An empirical study of effectiveness and challenges (T). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 201-211.
[6] Luca Della Toffola, Cristian-Alexandru Staicu, and Michael Pradel. 2017. Saying 'hi!' is not enough: Mining inputs for effective test generation. In Proceedings of the 32nd International Conference on Automated Software Engineering. IEEE Computer Society, 44-49. https://doi.org/10.1109/ASE.2017.8115617
[7] Hao Zhong. 2022. Enriching compiler testing with real program from bug report. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. 1-12.