References of "Campos, José"
Peer Reviewed
An Empirical Evaluation of Evolutionary Algorithms for Unit Test Suite Generation
Campos, José; Ge, Yan; Albunian, Nasser et al

in Information and Software Technology (2018), 104(December), 207-235

Context: Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many specific aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the choice of evolutionary algorithm has to date seen less attention in the literature. Objective: Since it is theoretically impossible to design an algorithm that is the best on all possible problems, a common approach in software engineering problems is to first try the most common algorithm, a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. The objective of this paper is to perform this analysis, in order to shed light on the influence of the search algorithm applied for unit test generation. Method: We empirically evaluate thirteen different evolutionary algorithms and two random approaches on a selection of non-trivial open source classes. All algorithms are implemented in the EvoSuite test generation tool, which includes recent optimisations such as the use of an archive during the search and optimisation for multiple coverage criteria. Results: Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the DynaMOSA many-objective search algorithm is the most effective algorithm for unit test generation. Conclusions: Our results show that the choice of algorithm can have a substantial influence on the performance of whole test suite optimisation. Although we can make a recommendation on which algorithm to use in practice, no algorithm is clearly superior in all cases, suggesting future work on improved search algorithms for unit test generation.
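The test archive mentioned in the abstract above can be illustrated with a minimal, self-contained Python sketch. This is a toy model, not EvoSuite's API: here a "test" is just the set of coverage goals it happens to reach, and the archive keeps the first test found for each goal so the search can focus on the goals that remain uncovered.

```python
import random

# Toy model: 20 coverage goals; a "test" is a frozenset of goals it covers.
GOALS = set(range(20))
random.seed(0)

def random_test():
    # hypothetical stand-in for executing a randomly generated unit test
    return frozenset(random.sample(sorted(GOALS), k=random.randint(1, 4)))

def evolve(generations=100, pop_size=20):
    archive = {}  # goal -> a test that covers it
    population = [random_test() for _ in range(pop_size)]
    for _ in range(generations):
        # archive every newly covered goal with the test that reached it
        for test in population:
            for goal in test:
                archive.setdefault(goal, test)
        uncovered = GOALS - archive.keys()
        if not uncovered:
            break
        # fitness counts only *still uncovered* goals, so the search
        # does not waste effort re-covering archived goals
        population.sort(key=lambda t: len(t & uncovered), reverse=True)
        parents = population[: pop_size // 2]
        # crude mutation: toggle one random goal in a copy of each parent
        children = [frozenset(t ^ {random.choice(sorted(GOALS))}) for t in parents]
        population = parents + children
    return archive

archive = evolve()
print(f"covered {len(archive)}/{len(GOALS)} goals")
```

The key design point this illustrates is that the archive decouples "keeping what was achieved" from "what the population should optimise next", which is one reason archive-based evolutionary search outperforms plain random testing in the study.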

Peer Reviewed
An Empirical Evaluation of Evolutionary Algorithms for Test Suite Generation
Campos, José; Ge, Yan; Fraser, Gordon et al

in Symposium on Search-Based Software Engineering (SSBSE) (2017)

Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the specific algorithms has to date seen less attention in the literature. As it is theoretically impossible to design an algorithm that is best on all possible problems, a common approach in software engineering problems is to first try a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. This is particularly important in test generation, since recent work suggests that random search may in practice be equally effective, whereas the reformulation as a many-objective problem seems to be more effective. To shed light on the influence of the search algorithms, we empirically evaluate six different algorithms on a selection of non-trivial open source classes. Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the many-objective search is the most effective.

Peer Reviewed
EVOSUITE at the SBST 2017 Tool Competition
Fraser, Gordon; Rojas, José Miguel; Campos, José et al

in IEEE/ACM International Workshop on Search-Based Software Testing (SBST) (2017)

EVOSUITE is a search-based tool that automatically generates unit tests for Java code. This paper summarises the results and experiences of EVOSUITE's participation at the fifth unit testing competition at SBST 2017, where EVOSUITE achieved the highest overall score.

Peer Reviewed
Continuous Test Generation on Guava
Campos, José; Fraser, Gordon; Arcuri, Andrea et al

in Symposium on Search-Based Software Engineering (SSBSE) (2015)

Search-based testing can be applied to automatically generate unit tests that achieve high levels of code coverage on object-oriented classes. However, test generation takes time, in particular if projects consist of many classes, like in the case of the Guava library. To allow search-based test generation to scale up and to integrate it better into software development, continuous test generation applies test generation incrementally during continuous integration. In this paper, we report on the application of continuous test generation with EvoSuite at the SSBSE'15 challenge on the Guava library. Our results show that continuous test generation reduces the time spent on automated test generation by 96%, while increasing code coverage by 13.9% on average.

Peer Reviewed
Combining Multiple Coverage Criteria in Search-Based Unit Test Generation
Rojas, Miguel; Campos, José; Vivanti, Mattia et al

in Symposium on Search-Based Software Engineering (SSBSE) (2015)

Automated test generation techniques typically aim at maximising coverage of well-established structural criteria such as statement or branch coverage. In practice, generating tests only for one specific criterion may not be sufficient when testing object oriented classes, as standard structural coverage criteria do not fully capture the properties developers may desire of their unit test suites. For example, covering a large number of statements could be easily achieved by just calling the main method of a class; yet, a good unit test suite would consist of smaller unit tests invoking individual methods, and checking return values and states with test assertions. There are several different properties that test suites should exhibit, and a search-based test generator could easily be extended with additional fitness functions to capture these properties. However, does search-based testing scale to combinations of multiple criteria, and what is the effect on the size and coverage of the resulting test suites? To answer these questions, we extended the EvoSuite unit test generation tool to support combinations of multiple test criteria, defined and implemented several different criteria, and applied combinations of criteria to a sample of 650 open source Java classes. Our experiments suggest that optimising for several criteria at the same time is feasible without increasing computational costs: When combining nine different criteria, we observed an average decrease of only 0.4% for the constituent coverage criteria, while the test suites may grow up to 70%.
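The combination of criteria described in the abstract above can be sketched as a sum of per-criterion fitness values, each normalised to [0, 1] so that no single criterion dominates. This is a minimal illustration, not EvoSuite's actual API: the function names and the dictionary "suite" representation are hypothetical, and the normalisation shown is the standard x/(x+1) form commonly used in search-based testing.

```python
def normalise(x):
    # map a raw non-negative distance into [0, 1); 0 stays optimal
    return x / (x + 1.0)

def combined_fitness(suite, criteria):
    # lower is better; each criterion returns a raw non-negative distance
    return sum(normalise(criterion(suite)) for criterion in criteria)

# two hypothetical criteria over a toy "suite" record
branch_distance = lambda suite: suite["uncovered_branches"]
method_distance = lambda suite: suite["uncalled_methods"]

suite = {"uncovered_branches": 3, "uncalled_methods": 0}
f = combined_fitness(suite, [branch_distance, method_distance])
print(round(f, 3))  # 3/(3+1) + 0/(0+1) = 0.75
```

Because the combined value is just an additional aggregation over already-computed per-criterion distances, this helps explain the paper's finding that optimising several criteria at once adds little computational cost.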
