References in "Software Testing, Verification and Reliability"
Empirical Evaluation of Mutation-based Test Prioritization Techniques
Shin, Donghwan; Yoo, Shin; Papadakis, Mike et al

in Software Testing, Verification and Reliability (2019), 29(1-2).

Random or Evolutionary Search for Object-Oriented Test Suite Generation?
Shamshiri, Sina; Rojas, José Miguel; Gazzola, Luca et al

in Software Testing, Verification and Reliability (2018), 28(4), 1660

An important aim in software testing is constructing a test suite with high structural code coverage – that is, ensuring that most if not all of the code under test has been executed by the test cases comprising the test suite. Several search-based techniques have proved successful at automatically generating tests that achieve high coverage. However, despite the well-established arguments behind using evolutionary search algorithms (e.g., genetic algorithms) in preference to random search, it remains an open question whether the benefits can actually be observed in practice when generating unit test suites for object-oriented classes. In this paper, we report an empirical study on the effects of using evolutionary algorithms (including a genetic algorithm and chemical reaction optimization) to generate test suites, compared with generating test suites incrementally with random search. We apply the EVOSUITE unit test suite generator to 1,000 classes randomly selected from the SF110 corpus of open source projects. Surprisingly, the results show that the difference is much smaller than one might expect: While evolutionary search covers more branches of the type where standard fitness functions provide guidance, we observed that, in practice, the vast majority of branches do not provide any guidance to the search. These results suggest that, although evolutionary algorithms are more effective at covering complex branches, a random search may suffice to achieve high coverage of most object-oriented classes.
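The incremental random-search baseline the abstract describes can be sketched in a few lines. This is a toy illustration, not EvoSuite's implementation: `classify` stands in for a hypothetical class under test, and its branch names are invented.

```python
import random

def classify(x):
    """Hypothetical code under test; returns the set of branches an input covers."""
    covered = set()
    if x < 0:
        covered.add("neg")
    else:
        covered.add("nonneg")
    if x == 42:  # a narrow branch: random search must hit it by chance
        covered.add("magic")
    return covered

def random_search(budget, seed=0):
    """Grow a test suite incrementally: keep an input only if it covers
    a branch that no earlier input has covered."""
    rng = random.Random(seed)
    suite, covered = [], set()
    for _ in range(budget):
        x = rng.randint(-1000, 1000)
        new_branches = classify(x) - covered
        if new_branches:
            suite.append(x)
            covered |= new_branches
    return suite, covered

suite, covered = random_search(5000)
```

With wide branches like `x < 0`, random inputs cover them almost immediately; only narrow branches like `x == 42` depend on the budget, which mirrors the paper's observation that random search suffices for most branches in practice.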

Simulink Fault Localisation: an Iterative Statistical Debugging Approach
Liu, Bing; Lucia, Lucia; Nejati, Shiva et al

in Software Testing, Verification and Reliability (2016), 26(6), 431-459

Debugging Simulink models presents a significant challenge in the embedded industry. In this work, we propose SimFL, a fault localization approach for Simulink models by combining statistical debugging and dynamic model slicing. Simulink models, being visual and hierarchical, have multiple outputs at different hierarchy levels. Given a set of outputs to observe for localizing faults, we generate test execution slices, for each test case and output, of the Simulink model. In order to further improve fault localization accuracy, we propose iSimFL, an iterative fault localization algorithm. At each iteration, iSimFL increases the set of observable outputs by including outputs at lower hierarchy levels, thus increasing the test oracle cost but offsetting it with significantly more precise fault localization. We utilize a heuristic stopping criterion to avoid unnecessary test oracle extension. We evaluate our work on three industrial Simulink models from Delphi Automotive. Our results show that, on average, SimFL ranks faulty blocks in the top 8.9% in the list of suspicious blocks. Further, we show that iSimFL significantly improves this percentage down to 4.4% by requiring engineers to observe only an average of five additional outputs at lower hierarchy levels on top of high-level model outputs.
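The statistical-debugging half of this approach can be sketched with a standard suspiciousness metric from that family, such as Tarantula: a block executed mostly by failing tests is ranked as more suspicious. The block names and coverage counts below are made up, and SimFL's slicing and iterative steps are omitted.

```python
def tarantula(passed, failed, total_passed, total_failed):
    """Tarantula suspiciousness of one block, given how many passing
    and failing tests executed it."""
    fail_ratio = failed / total_failed if total_failed else 0.0
    pass_ratio = passed / total_passed if total_passed else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

# coverage[block] = (passing tests that executed it, failing tests that executed it)
coverage = {"Gain": (1, 4), "Sum": (5, 4), "Sat": (5, 0)}
total_passed, total_failed = 5, 4

ranking = sorted(
    coverage,
    key=lambda b: tarantula(*coverage[b], total_passed, total_failed),
    reverse=True,
)
```

Here "Gain" is executed by all failing tests but few passing ones, so it tops the suspiciousness ranking; an engineer would inspect blocks in that order.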

Seeding Strategies in Search-Based Unit Test Generation
Rojas, Jose Miguel; Fraser, Gordon; Arcuri, Andrea

in Software Testing, Verification and Reliability (2016)

Search-based techniques have been applied successfully to the task of generating unit tests for object-oriented software. However, as for any meta-heuristic search, the efficiency heavily depends on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one such factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation, in particular seeding of numerical and string constants derived statically and dynamically, seeding of type information, and seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical analysis carried out on a large collection of open source projects from the SF110 corpus and the Apache Commons repository are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance.
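Constant seeding, in its simplest form, biases input generation toward values mined from the code under test rather than sampling the whole domain. The sketch below is a deliberately reduced illustration: `SEEDED_CONSTANTS` and `p_seed` are invented for this example, and real tools such as EvoSuite mine constants from bytecode and use them in many more ways.

```python
import random

# Constants that a static scan of the class under test might yield (hypothetical).
SEEDED_CONSTANTS = [0, -1, 42, "admin"]

def sample_input(rng, p_seed=0.2):
    """With probability p_seed, reuse a mined constant; otherwise draw a
    fresh random value from the full domain."""
    if rng.random() < p_seed and SEEDED_CONSTANTS:
        return rng.choice(SEEDED_CONSTANTS)
    return rng.randint(-1_000_000, 1_000_000)

rng = random.Random(1)
inputs = [sample_input(rng) for _ in range(1000)]
```

The intuition is that a branch guarded by `name == "admin"` is effectively unreachable by uniform random strings, but trivially reachable once the literal is seeded into the input pool.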

Coverage-based regression test case selection, minimization and prioritization: a case study on an industrial system
Di Nardo, Daniel; Alshahwan, Nadia; Briand, Lionel et al

in Software Testing, Verification and Reliability (2015), 25(4), 371-396

Employing second-order mutation for isolating first-order equivalent mutants
Kintis, Marinos; Papadakis, Mike; Malevris, Nicos

in Software Testing, Verification and Reliability (2015), 25(5-7), 508-535

The equivalent mutant problem is a major hindrance to mutation testing. Being undecidable in general, it is only susceptible to partial solutions. In this paper, mutant classification is utilised for isolating likely first-order equivalent mutants. A new classification technique, Isolating Equivalent Mutants (I-EQM), is introduced and empirically investigated. The proposed approach employs a dynamic execution scheme that integrates the impact on the program execution of first-order mutants with the impact on the output of second-order mutants. An experimental study, conducted using two independently created sets of manually classified mutants selected from real-world programs, revalidates previously published results and provides evidence for the effectiveness of the proposed technique. Overall, the study shows that I-EQM substantially improves previous methods by retrieving a considerably higher number of killable mutants, thus amplifying the quality of the testing process.
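The problem being attacked here is easy to see in a toy example: one relational-operator mutation cannot change behaviour (an equivalent mutant, which no test can ever kill), while another can. This only illustrates the equivalent-mutant problem itself; it does not reproduce I-EQM's impact-based classification.

```python
def original(x):
    """Absolute value."""
    return x if x >= 0 else -x

def equivalent_mutant(x):
    # ROR mutation '>=' -> '>': at x == 0 both branches return 0,
    # so no input can distinguish this mutant from the original.
    return x if x > 0 else -x

def killable_mutant(x):
    # ROR mutation '>=' -> '<=': any positive input exposes the change.
    return x if x <= 0 else -x

def killed(mutant, tests):
    """A mutant is killed if some test input distinguishes it from the original."""
    return any(mutant(t) != original(t) for t in tests)

tests = [-3, 0, 7]
```

Deciding that `equivalent_mutant` can never be killed is undecidable in general, which is why classification techniques like I-EQM settle for identifying *likely* equivalent mutants from dynamic impact data.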

Metallaxis-FL: mutation-based fault localization
Papadakis, Mike; Le Traon, Yves

in Software Testing, Verification and Reliability (2015), 25

Are Concurrency Coverage Metrics Effective for Testing: A Comprehensive Empirical Investigation
Hong, Shin; Staats, Matthew; Ahn, Jaemin et al

in Software Testing, Verification and Reliability (2014)

A Hitchhiker's guide to statistical tests for assessing randomized algorithms in software engineering
Arcuri, Andrea; Briand, Lionel

in Software Testing, Verification and Reliability (2012)

Randomized algorithms are widely used to address many types of software engineering problems, especially in the area of software verification and validation with a strong emphasis on test automation. However, randomized algorithms are affected by chance and so require the use of appropriate statistical tests to be properly analysed in a sound manner. This paper features a systematic review regarding recent publications in 2009 and 2010 showing that, overall, empirical analyses involving randomized algorithms in software engineering tend to not properly account for the random nature of these algorithms. Many of the novel techniques presented clearly appear promising, but the lack of soundness in their empirical evaluations casts unfortunate doubts on their actual usefulness. In software engineering, although there are guidelines on how to carry out empirical analyses involving human subjects, those guidelines are not directly and fully applicable to randomized algorithms. Furthermore, many of the textbooks on statistical analysis are written from the viewpoints of social and natural sciences, which present different challenges from randomized algorithms. To address the questionable overall quality of the empirical analyses reported in the systematic review, this paper provides guidelines on how to carry out and properly analyse randomized algorithms applied to solve software engineering tasks, with a particular focus on software testing, which is by far the most frequent application area of randomized algorithms within software engineering.
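One of the measures these guidelines recommend for comparing randomized algorithms, the Vargha–Delaney Â12 effect size, is simple to compute from raw run results: it estimates the probability that one algorithm's run beats the other's, with 0.5 meaning no difference. The coverage numbers below are made up for illustration.

```python
def a12(xs, ys):
    """Vargha–Delaney A-hat-12: probability that a value drawn from xs
    beats one drawn from ys, counting ties as half a win."""
    wins = sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x in xs for y in ys)
    return wins / (len(xs) * len(ys))

# Branch coverage from repeated runs of two randomized tools (illustrative data).
tool_a = [0.81, 0.79, 0.84, 0.80, 0.83]
tool_b = [0.74, 0.78, 0.75, 0.80, 0.73]
effect = a12(tool_a, tool_b)
```

Reporting Â12 alongside a non-parametric test such as Mann–Whitney U, over many independent runs, is exactly the kind of practice the paper argues the surveyed publications often omit.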
