Reference : A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Security, Reliability and Trust
http://hdl.handle.net/10993/46924
A Replication Study on the Usability of Code Vocabulary in Predicting Flaky Tests
English
Haben, Guillaume [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Habchi, Sarra [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Papadakis, Mike [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)]
Cordy, Maxime [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Le Traon, Yves [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
May-2021
18th International Conference on Mining Software Repositories
Yes
International
18th International Conference on Mining Software Repositories
from 17-05-2021 to 19-05-2021
[en] Software testing ; regression testing ; flakiness
[en] Industrial reports indicate that flaky tests are one of the primary concerns of software testing, mainly due to the false signals they provide. To deal with this issue, researchers have developed tools and techniques aiming at (automatically) identifying flaky tests, with encouraging results. However, to reach industrial adoption and practice, these techniques need to be replicated and evaluated extensively on multiple datasets, occasions, and settings. In view of this, we perform a replication study of a recently proposed method that predicts flaky tests based on their vocabulary. We replicate the original study along three dimensions. First, we replicate the approach on the same subjects as the original study but with a different evaluation methodology: we adopt a time-sensitive selection of training and test sets to better reflect the envisioned use case. Second, we consolidate the findings of the initial study by building a new dataset of 837 flaky tests from 9 projects in a different programming language, Python (the original study used Java), which supports the generalisability of the results. Third, we propose an extension to the original approach by experimenting with different features extracted from the Code Under Test. Our results demonstrate that a more robust validation has a consistent negative impact on the reported results of the original study, but, fortunately, these do not invalidate the key conclusions of the study. We also find the reassuring results that vocabulary-based models can be used to predict test flakiness in Python as well, and that the information lying in the Code Under Test has a limited impact on the performance of the vocabulary-based models.
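The two core ingredients described in the abstract — vocabulary (bag-of-words) features extracted from test code, and a time-sensitive train/test split that avoids training on the future — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the identifier-splitting rules, the toy records, and the 75% cut-off are all assumptions made for the example.

```python
import re
from collections import Counter

def tokenize(source):
    """Split code into lowercase word tokens, breaking snake_case and camelCase."""
    tokens = []
    for word in re.findall(r"[A-Za-z]+", source):  # drops digits, underscores, punctuation
        # split camelCase runs, e.g. "fetchWithRetry" -> "fetch", "With", "Retry"
        for part in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", word):
            tokens.append(part.lower())
    return tokens

# Hypothetical records: (commit_timestamp, test_body, is_flaky) -- toy data only.
records = [
    (1, "def test_fetchWithRetry(): assert fetch_with_retry()", 1),
    (2, "def test_addition(): assert add(1, 2) == 3", 0),
    (3, "def test_async_timeout(): awaitResponse(timeout=5)", 1),
    (4, "def test_parse(): assert parse('x') is not None", 0),
]

# Time-sensitive split: train only on tests committed before the cut-off and
# evaluate on later ones, instead of a random shuffle that can leak the future.
records.sort(key=lambda r: r[0])
cut = int(0.75 * len(records))
train, test = records[:cut], records[cut:]

# Bag-of-words vocabulary built from the training set only; a classifier
# (e.g. a random forest) would then be fit on these token counts.
vocabulary = Counter()
for _, body, _ in train:
    vocabulary.update(tokenize(body))
```

The point of sorting by timestamp before splitting is precisely the study's "time-sensitive selection": a randomly shuffled split lets the model train on tests written after the ones it is evaluated on, inflating the reported scores.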
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Security Design and Validation Research Group (SerVal)

File(s) associated to this reference

Fulltext file(s):

File: MSR21_FlakyReplication.pdf
Version: Author postprint
Size: 828.64 kB
Access: Open access


All documents in ORBilu are protected by a user license.