Scientific journals : Article
Engineering, computing & technology : Computer science
Security, Reliability and Trust
http://hdl.handle.net/10993/48384
Test Case Selection and Prioritization Using Machine Learning: A Systematic Literature Review
English
Pan, Rongqi [University of Ottawa > EECS]
Bagherzadeh, Mojtaba [University of Ottawa > EECS]
Ghaleb, Taher [University of Ottawa > EECS]
Briand, Lionel [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SVV]
In press
Empirical Software Engineering
Kluwer Academic Publishers
Peer reviewed : Yes (verified by ORBilu)
International
ISSN : 1382-3256
eISSN : 1573-7616
Netherlands
[en] Software Testing ; Machine Learning ; Test Case Prioritization ; Test Case Selection
[en] Regression testing is an essential activity to ensure that software code changes do not adversely affect existing functionality. With the wide adoption of Continuous Integration (CI) in software projects, which increases the frequency of running software builds, running all tests can be time-consuming and resource-intensive. To alleviate that problem, Test case Selection and Prioritization (TSP) techniques have been proposed to improve regression testing by selecting and prioritizing test cases in order to provide early feedback to developers. In recent years, researchers have relied on Machine Learning (ML) techniques to achieve effective TSP (ML-based TSP). Such techniques help combine information about test cases, from partial and imperfect sources, into accurate prediction models. This work conducts a systematic literature review focused on ML-based TSP techniques, aiming to perform an in-depth analysis of the state of the art and thus gain insights regarding future avenues of research. To that end, we analyze 29 primary studies published from 2006 to 2020, which were identified through a systematic and documented process. This paper addresses five research questions covering variations in ML-based TSP techniques and feature sets for training and testing ML models, alternative metrics used for evaluating the techniques, the performance of those techniques, and the reproducibility of the published studies. We summarize the results related to our research questions in a high-level summary that can be used as a taxonomy for classifying future TSP studies.
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
European Commission - EC, NSERC Canada Research Chair and Discovery programs
Researchers ; Professionals ; Students
H2020 ; 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems

File(s) associated with this reference

Fulltext file(s):

A_survey_of_the_application_of_ML_techniques_for_test_case_prioritization-14.pdf (Author preprint, 4.99 MB, Open access)


All documents in ORBilu are protected by a user license.