Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Evaluating Search-Based Techniques With Statistical Tests
Arcuri, Andrea
2018, in The Search-Based Software Testing (SBST) Workshop



Abstract :
[en] This tutorial covers the basics of how to use statistical tests to evaluate and compare search algorithms, in particular when they are applied to software engineering problems. Search algorithms like Hill Climbing and Genetic Algorithms are randomised: running such an algorithm twice on the same problem can give different results. It is therefore important to run these algorithms multiple times to collect average results, and thus avoid publishing wrong conclusions that were based on mere luck. However, this raises the question of how many times such runs should be repeated. Given a set of n repeated experiments, is n large enough to draw sound conclusions, or should more experiments have been run? Statistical tests like the Wilcoxon-Mann-Whitney U-test can be used to answer these important questions.
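As an illustrative sketch (not taken from the tutorial itself), the U statistic underlying the Wilcoxon-Mann-Whitney test can be computed in pure Python from the repeated results of two randomised algorithms. The function name and the sample run data below are assumptions made for illustration:

```python
def mann_whitney_u(a, b):
    """U statistic of sample `a` versus sample `b`, using average ranks for ties."""
    combined = sorted((value, index) for index, value in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        # Find the run of tied values starting at position i.
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    rank_sum_a = sum(ranks[:len(a)])
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical fitness values from n = 5 repeated runs of two algorithms.
runs_hc = [0.61, 0.58, 0.70, 0.64, 0.59]  # e.g. Hill Climbing
runs_ga = [0.72, 0.69, 0.75, 0.66, 0.71]  # e.g. a Genetic Algorithm

u = mann_whitney_u(runs_ga, runs_hc)
# U / (n1 * n2) is the Vargha-Delaney A12 effect size: the probability that
# a randomly chosen GA run beats a randomly chosen HC run.
a12 = u / (len(runs_ga) * len(runs_hc))
print(u, a12)  # → 23.0 0.92
```

For small samples the exact U distribution should be consulted to obtain a p-value; in practice one would typically use a library implementation such as `scipy.stats.mannwhitneyu` rather than this sketch.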
Disciplines :
Computer science
Author, co-author :
Arcuri, Andrea;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)
External co-authors :
Language :
English
Title :
Evaluating Search-Based Techniques With Statistical Tests
Publication date :
2018
Event name :
The Search-Based Software Testing (SBST) Workshop
Event date :
Main work title :
The Search-Based Software Testing (SBST) Workshop
FnR Project :
FNR3949772 - Validation And Verification Laboratory, 2010 (01/01/2012-31/07/2018) - Lionel Briand
Available on ORBilu :
since 18 March 2018


