References of "Rojas, Jose Miguel"
EVOSUITE at the SBST 2017 Tool Competition
Fraser, Gordon; Rojas, José Miguel; Campos, José et al.

in IEEE/ACM International Workshop on Search-Based Software Testing (SBST) (2017)

EVOSUITE is a search-based tool that automatically generates unit tests for Java code. This paper summarises the results and experiences of EVOSUITE's participation at the fifth unit testing competition at SBST 2017, where EVOSUITE achieved the highest overall score.

A Detailed Investigation of the Effectiveness of Whole Test Suite Generation
Rojas, José Miguel; Vivanti, Mattia; Arcuri, Andrea et al.

in Empirical Software Engineering (2016)

A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is very small in comparison with those covered exclusively by the whole test suite approach. We find that keeping an archive of already covered goals with corresponding tests and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.

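The archive-based search this abstract describes can be sketched in miniature. The following toy model is illustrative only: EVOSUITE is written in Java and its actual algorithm is far richer, and all names here (`whole_suite_search_with_archive`, `covers`, `make_test`) are hypothetical.

```python
import random

def whole_suite_search_with_archive(goals, covers, make_test, budget=1000):
    """Toy sketch of whole test suite generation with a test archive:
    the first test found to cover a goal is archived with that goal,
    and the search then only rewards goals that remain uncovered."""
    archive = {}               # goal -> test that first covered it
    uncovered = set(goals)
    for _ in range(budget):
        if not uncovered:      # every goal archived: stop early
            break
        test = make_test()     # generate (or mutate) a candidate test
        for goal in covers(test):
            if goal in uncovered:
                archive[goal] = test
                uncovered.discard(goal)
    # the final suite is assembled from the archive, not one population
    return list(archive.values()), uncovered

# Toy usage: goals are line numbers, a "test" is the set of lines it hits.
goals = set(range(10))
random.seed(0)
suite, uncovered = whole_suite_search_with_archive(
    goals,
    covers=lambda t: t,
    make_test=lambda: {random.randrange(10) for _ in range(3)},
    budget=300,
)
```

The design point the paper makes is visible even in this sketch: because covered goals leave the fitness target set, later search effort concentrates on the hard residual goals instead of re-covering easy ones.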
Seeding Strategies in Search-Based Unit Test Generation
Rojas, José Miguel; Fraser, Gordon; Arcuri, Andrea

in Software Testing, Verification and Reliability (STVR) (2016)

Search-based techniques have been applied successfully to the task of generating unit tests for object-oriented software. However, as for any meta-heuristic search, the efficiency heavily depends on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one such factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation, in particular seeding of numerical and string constants derived statically and dynamically, seeding of type information, and seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical analysis, carried out on open source projects from the SF110 corpus and the Apache Commons repository, are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance.

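Static constant seeding, one of the strategies the paper investigates, can be illustrated with a small sketch. The paper targets Java, so this Python analogue is purely illustrative, and the names `extract_seed_constants` and `seeded_value` are hypothetical.

```python
import ast
import random

def extract_seed_constants(source):
    """Statically harvest numeric and string literals from source code,
    in the spirit of constant seeding: values like 42 or "magic" that
    guard branches are good candidate test inputs."""
    consts = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float, str)):
            consts.add(node.value)
    return consts

def seeded_value(pool, p_seed=0.5, rng=random):
    """With probability p_seed reuse a harvested constant; otherwise
    fall back to a purely random value (toy integer range here)."""
    if pool and rng.random() < p_seed:
        return rng.choice(sorted(pool, key=repr))
    return rng.randint(-100, 100)

# Toy usage: a branch guarded by the literal 42 is almost impossible to
# hit with uniform random integers, but trivial once 42 is in the pool.
src = 'def f(x):\n    if x == 42:\n        return "magic"\n    return 0\n'
pool = extract_seed_constants(src)
```

The intuition matches the paper's finding: when the search can draw on constants already present in the code under test, hard equality branches become reachable far sooner than with unbiased random generation.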
Do Automatically Generated Unit Tests Find Real Faults? An Empirical Study of Effectiveness and Challenges
Shamshiri, Sina; Just, René; Rojas, José Miguel et al.

in Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering (ASE) (2015)

Rather than tediously writing unit tests manually, tools can be used to generate them automatically, sometimes even resulting in higher code coverage than manual testing. But how good are these tests at actually finding faults? To answer this question, we applied three state-of-the-art unit test generation tools for Java (Randoop, EvoSuite, and Agitar) to the 357 faults in the Defects4J dataset and investigated how well the generated test suites perform at detecting faults. Although 55.7% of the faults were found by automatically generated tests overall, only 19.9% of the test suites generated in our experiments actually detected a fault. By studying the performance and the problems of the individual tools and their tests, we derive insights to support the development of automated unit test generators, in order to increase the fault detection rate in the future. These include 1) improving the coverage obtained so that defective statements are actually executed in the first place, 2) techniques for propagating faults to the output, coupled with the generation of more sensitive assertions for detecting them, and 3) better simulation of the execution environment to detect faults that depend on external factors, for example the date and time.

Automated Unit Test Generation during Software Development: A Controlled Experiment and Think-Aloud Observations
Rojas, José Miguel; Fraser, Gordon; Arcuri, Andrea

in ACM International Symposium on Software Testing and Analysis (ISSTA) (2015)

Automated unit test generation tools can produce tests that are superior to manually written ones in terms of code coverage, but are these tests helpful to developers while they are writing code? A developer would first need to know when and how to apply such a tool, and would then need to understand the resulting tests in order to provide test oracles and to diagnose and fix any faults that the tests reveal. Considering all this, does automatically generating unit tests provide any benefit over simply writing unit tests manually? We empirically investigated the effects of using an automated unit test generation tool (EVOSUITE) during development. A controlled experiment with 41 students shows that using EVOSUITE leads to an average branch coverage increase of +13%, and 36% less time is spent on testing compared to writing unit tests manually. However, there is no clear effect on the quality of the implementations, as it depends on how the test generation tool and the generated tests are used. In-depth analysis, using five think-aloud observations with professional programmers, confirms the necessity to increase the usability of automated unit test generation tools, to integrate them better during software development, and to educate software developers on how to best use those tools.
