Paper published in a book (Colloquia, congresses, scientific conferences and proceedings)
Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges
Shamshiri, Sina; Just, Rene; Rojas, Jose Miguel et al.
2015, in Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
Peer reviewed
 

Documents


Full text
CR-Submission-PID3864821.pdf
Author preprint (492.74 kB)

All documents in ORBilu are protected by a user license.

Details



Abstract:
[en] Rather than tediously writing unit tests manually, tools can be used to generate them automatically — sometimes even resulting in higher code coverage than manual testing. But how good are these tests at actually finding faults? To answer this question, we applied three state-of-the-art unit test generation tools for Java (Randoop, EvoSuite, and Agitar) to the 357 faults in the Defects4J dataset and investigated how well the generated test suites perform at detecting faults. Although 55.7% of the faults were found by automatically generated tests overall, only 19.9% of the test suites generated in our experiments actually detected a fault. By studying the performance and the problems of the individual tools and their tests, we derive insights to support the development of automated unit test generators, in order to increase the fault detection rate in the future. These include 1) improving the coverage obtained so that defective statements are actually executed in the first place, 2) techniques for propagating faults to the output, coupled with the generation of more sensitive assertions for detecting them, and 3) better simulation of the execution environment to detect faults that depend on external factors, for example the date and time.
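To illustrate point 3 of the abstract (a sketch added here for clarity, not code from the paper): date- or time-dependent behavior becomes deterministically testable when the code under test receives a java.time.Clock instead of calling LocalDate.now() directly, so a generated test can fix the "current" date. InvoiceChecker is a hypothetical example class, not one from the study.

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Hypothetical class whose behavior depends on the current date.
// Injecting a Clock makes that dependency controllable from a test.
class InvoiceChecker {
    private final Clock clock;

    InvoiceChecker(Clock clock) {
        this.clock = clock;
    }

    boolean isOverdue(LocalDate dueDate) {
        // Uses the injected clock rather than the real system time
        return LocalDate.now(clock).isAfter(dueDate);
    }
}

public class ClockDemo {
    public static void main(String[] args) {
        // Fix "today" to 10 November 2015 so the outcome is reproducible
        Clock fixed = Clock.fixed(Instant.parse("2015-11-10T00:00:00Z"),
                                  ZoneId.of("UTC"));
        InvoiceChecker checker = new InvoiceChecker(fixed);

        System.out.println(checker.isOverdue(LocalDate.of(2015, 11, 9)));  // true
        System.out.println(checker.isOverdue(LocalDate.of(2015, 11, 11))); // false
    }
}
```

Without such an injection point, a test generated on one day may pass and fail on another, which is one way environment-dependent faults escape detection.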
Disciplines:
Computer science
Author, co-author:
Shamshiri, Sina
Just, Rene
Rojas, Jose Miguel
Fraser, Gordon
McMinn, Phil
ARCURI, Andrea;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
External co-authors:
Yes
Document language:
English
Title:
Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges
Publication date:
2015
Event name:
Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
Event date:
9-13 November 2015
Title of the main work:
Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
Publisher:
ACM
Peer reviewed :
Peer reviewed
Available on ORBilu:
since 25 July 2015

Statistics


Number of views
242 (including 4 UniLu)
Number of downloads
4 (including 3 UniLu)

Scopus® citations
207
Scopus® citations excluding self-citations
181
