References of "Arcuri, Andrea"
Full Text
Peer Reviewed
Search-based Multi-Vulnerability Testing of XML Injections in Web Applications
Jan, Sadeeq UL; Panichella, Annibale UL; Arcuri, Andrea UL et al

in Empirical Software Engineering (2019), 24(6), 3696-3729

Modern web applications often interact with internal web services, which are not directly accessible to users. However, malicious user inputs can be used to exploit security vulnerabilities in web services through the application front-ends. Therefore, testing techniques have been proposed to reveal security flaws in the interactions with back-end web services, e.g., XML Injections (XMLi). Given a potentially malicious message between a web application and web services, search-based techniques have been used to find input data to mislead the web application into sending such a message, possibly compromising the target web service. However, state-of-the-art techniques focus on searching for one single malicious message at a time. Since, in practice, there can be many different kinds of malicious messages, of which only a few can possibly be generated by a given front-end, searching for one single message at a time is ineffective and may not scale. To overcome these limitations, we propose a novel co-evolutionary algorithm (COMIX) that is tailored to our problem and uncovers multiple vulnerabilities at the same time. Our experiments show that COMIX outperforms a single-target search approach for XMLi and other multi-target search algorithms originally defined for white-box unit testing.
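
To make the attack scenario concrete, the sketch below is a hypothetical illustration (not taken from the paper) of how an unsanitised front-end input can break out of its XML element and inject extra content into the message sent to an internal web service; the class, element, and variable names are invented for the example.

```java
// Hypothetical illustration of an XML injection (XMLi) through a web front-end.
// Element and variable names are invented for the example.
public class XmliExample {

    // Naive message construction: the user-supplied value is concatenated
    // directly into the XML sent to the internal web service.
    static String buildMessage(String username) {
        return "<user><name>" + username + "</name><role>guest</role></user>";
    }

    public static void main(String[] args) {
        // A benign input produces the intended message.
        System.out.println(buildMessage("alice"));
        // A crafted input closes the <name> element early and injects
        // an attacker-controlled <role> element, escalating privileges.
        String malicious = "alice</name><role>admin</role><name>ignored";
        System.out.println(buildMessage(malicious));
        // Output: <user><name>alice</name><role>admin</role><name>ignored</name><role>guest</role></user>
    }
}
```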

Full Text
Peer Reviewed
Automatic Generation of Tests to Exploit XML Injection Vulnerabilities in Web Applications
Jan, Sadeeq UL; Panichella, Annibale UL; Arcuri, Andrea UL et al

in IEEE Transactions on Software Engineering (2019), 45(4), 335-362

Modern enterprise systems can be composed of many web services (e.g., SOAP and RESTful). Users of such systems might not have direct access to those services, and rather interact with them through a single-entry point which provides a GUI (e.g., a web page or a mobile app). Although the interactions with such an entry point might be secure, a hacker could trick such systems into sending malicious inputs to those internal web services. A typical example is XML injection targeting SOAP communications. Previous work has shown that it is possible to automatically generate such kinds of attacks using search-based techniques. In this paper, we improve upon previous results by providing more efficient techniques to generate such attacks. In particular, we investigate four different algorithms and two different fitness functions. A large empirical study, involving also two industrial systems, shows that our technique is effective at automatically generating XML injection attacks.
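
The paper investigates fitness functions that guide the search toward inputs making the front-end emit a target malicious message. As a rough, hedged sketch of that general idea (not the paper's actual fitness definitions), a string edit distance between the message produced by the current candidate input and the target malicious message can serve as a minimisation objective:

```java
// Hedged sketch of a distance-based fitness for XMLi test generation:
// the closer the produced message is to a target malicious message,
// the better the candidate input. This illustrates the general idea,
// not the exact fitness functions used in the paper.
public class XmliFitnessSketch {

    // Classic Levenshtein edit distance between two strings.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Fitness of a candidate front-end input: distance between the XML message the
    // application actually sends and the target malicious message (0 = exploit found).
    static int fitness(String producedMessage, String targetMaliciousMessage) {
        return editDistance(producedMessage, targetMaliciousMessage);
    }
}
```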

Full Text
Peer Reviewed
Test Suite Generation with the Many Independent Objective (MIO) Algorithm
Arcuri, Andrea UL

in Information and Software Technology (2018), 104(December), 195-206

Context: Automatically generating test suites is intrinsically a multi-objective problem, as any of the testing targets (e.g., statements to execute or mutants to kill) is an objective on its own. Test suite generation has peculiarities that are quite different from other more regular optimisation problems. For example, given an existing test suite, one can add more tests to cover the remaining objectives. One would like the smallest number of small tests to cover as many objectives as possible, but that is a secondary goal compared to covering those targets in the first place. Furthermore, the amount of objectives in software testing can quickly become unmanageable, in the order of (tens/hundreds of) thousands, especially for system testing of industrial size systems. Objective: To overcome these issues, different techniques have been proposed, like for example the Whole Test Suite (WTS) approach and the Many-Objective Sorting Algorithm (MOSA). However, those techniques might not scale well to very large numbers of objectives and limited search budgets (a typical case in system testing). In this paper, we propose a novel algorithm, called the Many Independent Objective (MIO) algorithm. This algorithm is designed and tailored based on the specific properties of test suite generation. Method: An empirical study was carried out for test suite generation on a series of artificial examples and seven RESTful API web services. The EvoMaster system test generation tool was used, where MIO, MOSA, WTS and random search were compared. Results: The presented MIO algorithm achieved the best overall performance, although it was not the best on all problems. Conclusion: The presented MIO algorithm is a step forward in the automation of test suite generation for system testing. However, there are still properties of system testing that can be exploited to achieve even better results.
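
The abstract only outlines the motivation; as a rough, heavily simplified sketch of the core idea behind MIO (one bounded population of tests per testing target, plus an archive of the best test per covered target), something along these lines can be imagined. Details such as feedback-directed parameter control and the focused search phase are omitted; this is an illustration, not the algorithm as published.

```java
import java.util.*;

// Heavily simplified sketch of the Many Independent Objective (MIO) idea:
// one bounded population of tests per testing target, plus sampling that
// alternates between brand-new random tests and mutants of existing ones.
public class MioSketch<T> {

    interface Problem<T> {
        T randomTest();
        T mutate(T test);
        // Heuristic in [0,1] for how close 'test' is to covering 'target' (1 = covered).
        double heuristic(T test, int target);
        int numberOfTargets();
    }

    private final Problem<T> problem;
    private final Random rnd = new Random();
    private final int populationSizePerTarget = 10;
    private final double randomSamplingProbability = 0.5;
    private final Map<Integer, List<T>> populations = new HashMap<>(); // one population per target
    private final Map<Integer, T> archive = new HashMap<>();           // best test per covered target

    MioSketch(Problem<T> problem) {
        this.problem = problem;
    }

    void search(int budget) {
        for (int i = 0; i < budget; i++) {
            T candidate;
            if (populations.isEmpty() || rnd.nextDouble() < randomSamplingProbability) {
                candidate = problem.randomTest();
            } else {
                // Pick an uncovered target and mutate one of its current tests.
                List<Integer> targets = new ArrayList<>(populations.keySet());
                int target = targets.get(rnd.nextInt(targets.size()));
                List<T> pop = populations.get(target);
                candidate = problem.mutate(pop.get(rnd.nextInt(pop.size())));
            }
            evaluate(candidate);
        }
    }

    private void evaluate(T test) {
        for (int target = 0; target < problem.numberOfTargets(); target++) {
            if (archive.containsKey(target)) continue; // already covered
            double h = problem.heuristic(test, target);
            if (h >= 1.0) {
                archive.put(target, test);  // covered: keep only the best test
                populations.remove(target); // and stop searching for this target
            } else if (h > 0.0) {
                List<T> pop = populations.computeIfAbsent(target, k -> new ArrayList<>());
                pop.add(test);
                if (pop.size() > populationSizePerTarget) pop.remove(0); // crude bound
            }
        }
    }
}
```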

Full Text
Peer Reviewed
An Empirical Evaluation of Evolutionary Algorithms for Unit Test Suite Generation
Campos, Jose; Ge, Yan; Albunian, Nasser et al

in Information and Software Technology (2018), 104(December), 207-235

Context: Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many specific aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the choice of evolutionary algorithm has to date seen less attention in the literature. Objective: Since it is theoretically impossible to design an algorithm that is the best on all possible problems, a common approach in software engineering problems is to first try the most common algorithm, a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. The objective of this paper is to perform this analysis, in order to shed light on the influence of the search algorithm applied for unit test generation. Method: We empirically evaluate thirteen different evolutionary algorithms and two random approaches on a selection of non-trivial open source classes. All algorithms are implemented in the EvoSuite test generation tool, which includes recent optimisations such as the use of an archive during the search and optimisation for multiple coverage criteria. Results: Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the DynaMOSA many-objective search algorithm is the most effective algorithm for unit test generation. Conclusions: Our results show that the choice of algorithm can have a substantial influence on the performance of whole test suite optimisation. Although we can make a recommendation on which algorithm to use in practice, no algorithm is clearly superior in all cases, suggesting future work on improved search algorithms for unit test generation.

Full Text
Peer Reviewed
Random or Evolutionary Search for Object-Oriented Test Suite Generation?
Shamshiri, Sina; Rojas, José Miguel; Gazzola, Luca et al

in Software Testing, Verification and Reliability (2018), 28(4), 1660

An important aim in software testing is constructing a test suite with high structural code coverage – that is, ensuring that most if not all of the code under test has been executed by the test cases comprising the test suite. Several search-based techniques have proved successful at automatically generating tests that achieve high coverage. However, despite the well-established arguments behind using evolutionary search algorithms (e.g., genetic algorithms) in preference to random search, it remains an open question whether the benefits can actually be observed in practice when generating unit test suites for object-oriented classes. In this paper, we report an empirical study on the effects of using evolutionary algorithms (including a genetic algorithm and chemical reaction optimization) to generate test suites, compared with generating test suites incrementally with random search. We apply the EVOSUITE unit test suite generator to 1,000 classes randomly selected from the SF110 corpus of open source projects. Surprisingly, the results show that the difference is much smaller than one might expect: While evolutionary search covers more branches of the type where standard fitness functions provide guidance, we observed that, in practice, the vast majority of branches do not provide any guidance to the search. These results suggest that, although evolutionary algorithms are more effective at covering complex branches, a random search may suffice to achieve high coverage of most object-oriented classes.
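
The observation about branches that do or do not provide guidance can be illustrated with a small, hypothetical Java example (not from the paper): a numeric comparison yields a gradient that a branch-distance style fitness can exploit, whereas a boolean flag gives the search no information until the branch happens to be hit.

```java
// Hypothetical illustration of branches with and without search guidance.
public class GuidanceExample {

    // Guided branch: a branch-distance style fitness (e.g., |x - 4242|) tells the
    // search how close a candidate input is to taking the 'then' branch.
    static boolean guided(int x) {
        if (x == 4242) {
            return true;
        }
        return false;
    }

    // Unguided branch: the boolean flag is either true or false, so the fitness
    // landscape is flat and evolutionary search degenerates to random search.
    static boolean unguided(boolean flag) {
        if (flag) {
            return true;
        }
        return false;
    }
}
```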

Full Text
Peer Reviewed
An Experience Report On Applying Software Testing Academic Results In Industry: We Need Usable Automated Test Generation
Arcuri, Andrea UL

in Empirical Software Engineering (2018), 23(4)

What is the impact of software engineering research on current practices in industry? In this paper, I report on my direct experience as a PhD/post-doc working in software engineering research projects, and then spending the following five years as an engineer in two different companies (the first one being the same company I collaborated with during my post-doc). Given a background in software engineering research, what cutting-edge techniques and tools from academia did I use in my daily work when developing and testing the systems of these companies? Regarding validation and verification (my main area of research), the answer is rather short: as far as I can tell, only FindBugs. In this paper, I report on why this was the case, and discuss all the challenging, complex open problems we face in industry and which are somehow "neglected" in academic circles. In particular, I will first discuss what actual tools I could use in my daily work, such as JaCoCo and Selenium. Then, I will discuss the main open problems I faced, particularly related to environment simulators, unit and web testing. After that, popular topics in academia are presented, such as UML, regression and mutation testing. Their lack of impact on the type of projects I worked on in industry is then discussed. Finally, from this industrial experience, I provide my opinions about how this situation can be improved, in particular related to how academics are evaluated, and advocate for greater involvement in open-source projects.

Full Text
Evaluating Search-Based Techniques With Statistical Tests
Arcuri, Andrea UL

in The Search-Based Software Testing (SBST) Workshop (2018)

This tutorial covers the basics of how to use statistical tests to evaluate and compare search algorithms, in particular when applied to software engineering problems. Search algorithms like Hill Climbing and Genetic Algorithms are randomised. Running such randomised algorithms twice on the same problem can give different results. It is hence important to run such algorithms multiple times to collect average results, and so avoid publishing wrong conclusions that were based on just luck. However, there is the question of how often such runs should be repeated. Given a set of n repeated experiments, is such n large enough to draw sound conclusions? Or should more experiments have been run? Statistical tests like the Wilcoxon-Mann-Whitney U-test can be used to answer these important questions.
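
As a minimal illustration of the kind of comparison the tutorial discusses, the sketch below applies the Wilcoxon-Mann-Whitney U-test to two sets of repeated runs using Apache Commons Math 3; the coverage values are invented for the example.

```java
import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Minimal sketch: comparing two randomised algorithms over repeated runs
// with the Wilcoxon-Mann-Whitney U-test (Apache Commons Math 3).
// The coverage values below are invented for the example.
public class CompareAlgorithms {
    public static void main(String[] args) {
        // e.g., branch coverage achieved in 10 independent runs of each algorithm
        double[] geneticAlgorithm = {0.81, 0.78, 0.85, 0.80, 0.83, 0.79, 0.84, 0.82, 0.80, 0.86};
        double[] randomSearch     = {0.74, 0.71, 0.76, 0.73, 0.75, 0.72, 0.77, 0.74, 0.73, 0.75};

        MannWhitneyUTest test = new MannWhitneyUTest();
        double pValue = test.mannWhitneyUTest(geneticAlgorithm, randomSearch);

        // A small p-value (e.g., below 0.05) suggests the observed difference
        // is unlikely to be due to the randomness of the algorithms alone.
        System.out.println("p-value = " + pValue);
    }
}
```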

Full Text
Peer Reviewed
Recent Trends in Software Testing Education: A Systematic Literature Review
Lauvås, Per; Arcuri, Andrea UL

in UDIT (The Norwegian Conference on Didactics in IT education) (2018)

Testing is a critical aspect of software development. Far too often software is released with critical faults. However, testing is often considered tedious and boring. Unfortunately, many graduates might join the work force without having had any education in software testing, which exacerbates the problem even further. Therefore, teaching software testing as part of a university degree in software engineering is very important. But it is an open challenge how to teach software testing in an effective way that successfully motivates students. In this paper, we have carried out a systematic literature review on the topic of teaching software testing. We analysed and reviewed 30 papers that were published between 2013 and 2017. The review points to a few different trends, like the use of gamification to make the teaching of software testing less tedious.

Full Text
Peer Reviewed
EvoSuite at the SBST 2018 Tool Competition
Fraser, Gordon; Rojas, Jose; Arcuri, Andrea UL

in 2018 ACM/IEEE 11th International Workshop on Search-Based Software Testing (2018)

EvoSuite is a search-based tool that automatically generates executable unit tests for Java code (JUnit tests). This paper summarises the results and experiences of EvoSuite’s participation at the sixth unit testing competition at SBST 2018, where EvoSuite achieved the highest overall score (687 points) for the fifth time in six editions of the competition.
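
For readers unfamiliar with such tools' output, the following is a hypothetical example of the kind of JUnit test an automated generator might produce; here the class under test is java.util.Stack, and the scenarios and asserted values are invented for illustration.

```java
import java.util.Stack;
import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical example of the kind of JUnit test an automated generator
// might produce; the scenarios and asserted values are invented.
public class Stack_Generated_Test {

    @Test
    public void test0_pushAndPeek() {
        Stack<Integer> stack = new Stack<>();
        stack.push(42);
        assertEquals(1, stack.size());
        assertEquals(Integer.valueOf(42), stack.peek());
    }

    @Test(expected = java.util.EmptyStackException.class)
    public void test1_popOnEmptyThrows() {
        Stack<Integer> stack = new Stack<>();
        stack.pop(); // popping an empty stack is expected to throw
    }
}
```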

Full Text
Peer Reviewed
EvoMaster: Evolutionary Multi-context Automated System Test Generation
Arcuri, Andrea UL

in IEEE Conference on Software Testing, Validation and Verification (2018)

This paper presents EVOMASTER, an open-source tool that is able to automatically generate system level test cases using evolutionary algorithms. Currently, EVOMASTER targets RESTful web services running on JVM technology, and has been used to find several faults in existing open-source projects. We discuss some of the architectural decisions made for its implementation, and future work.

Full Text
Peer Reviewed
Many Independent Objective (MIO) Algorithm for Test Suite Generation
Arcuri, Andrea UL

in Symposium on Search-Based Software Engineering (SSBSE) (2017)

Automatically generating test suites is intrinsically a multi-objective problem, as any of the testing targets (e.g., statements to execute or mutants to kill) is an objective on its own. Test suite generation has peculiarities that are quite different from other more regular optimisation problems. For example, given an existing test suite, one can add more tests to cover the remaining objectives. One would like the smallest number of small tests to cover as many objectives as possible, but that is a secondary goal compared to covering those targets in the first place. Furthermore, the amount of objectives in software testing can quickly become unmanageable, in the order of (tens/hundreds of) thousands, especially for system testing of industrial size systems. Traditional multi-objective optimisation algorithms can already start to struggle with just four or five objectives to optimize. To overcome these issues, different techniques have been proposed, like for example the Whole Test Suite (WTS) approach and the Many-Objective Sorting Algorithm (MOSA). However, those techniques might not scale well to very large numbers of objectives and limited search budgets (a typical case in system testing). In this paper, we propose a novel algorithm, called the Many Independent Objective (MIO) algorithm. This algorithm is designed and tailored based on the specific properties of test suite generation. An empirical study, on a set of artificial and actual software, shows that the MIO algorithm can achieve higher coverage compared to WTS and MOSA, as it can better exploit the peculiarities of test suite generation.

Full Text
Peer Reviewed
An Empirical Evaluation of Evolutionary Algorithms for Test Suite Generation
Campos, Jose; Ge, Yan; Fraser, Gordon et al

in Symposium on Search-Based Software Engineering (SSBSE) (2017)

Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the specific algorithms has to date seen less attention in the literature. As it is theoretically impossible to design an algorithm that is best on all possible problems, a common approach in software engineering problems is to first try a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. This is particularly important in test generation, since recent work suggests that random search may in practice be equally effective, whereas the reformulation as a many-objective problem seems to be more effective. To shed light on the influence of the search algorithms, we empirically evaluate six different algorithms on a selection of non-trivial open source classes. Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the many-objective search is the most effective.

Full Text
Peer Reviewed
Private API Access and Functional Mocking in Automated Unit Test Generation
Arcuri, Andrea UL; Fraser, Gordon; Just, Rene

in IEEE International Conference on Software Testing, Verification and Validation (ICST) (2017)

Not all object-oriented code is easily testable: Dependency objects might be difficult or even impossible to instantiate, and object-oriented encapsulation makes testing potentially simple code difficult if it cannot easily be accessed. When this happens, developers can resort to mock objects that simulate the complex dependencies, or circumvent object-oriented encapsulation and access private APIs directly through the use of, for example, Java reflection. Can automated unit test generation benefit from these techniques as well? In this paper, we investigate this question by extending the EvoSuite unit test generation tool with the ability to directly access private APIs and to create mock objects using the popular Mockito framework. However, care needs to be taken that this does not impact the usefulness of the generated tests: For example, a test accessing a private field could later fail if that field is renamed, even if that renaming is part of a semantics-preserving refactoring. Such a failure would not reveal a true regression bug, but is a false positive, which wastes the developer’s time investigating and fixing the test. Our experiments on the SF110 and Defects4J benchmarks confirm the anticipated improvements in terms of code coverage and bug finding, but also confirm the existence of false positives. However, by ensuring the test generator only uses mocking and reflection if there is no other way to reach some part of the code, their number remains small.
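
As a brief, hedged illustration of the two mechanisms the paper builds on (functional mocking with Mockito and private API access via Java reflection), consider the following sketch; 'PaymentGateway' and 'Order' are invented example classes, not taken from the paper.

```java
import java.lang.reflect.Field;
import static org.mockito.Mockito.*;

// Brief illustration of functional mocking (Mockito) and private API access
// (Java reflection). 'PaymentGateway' and 'Order' are invented example classes.
public class MockingAndReflectionExample {

    interface PaymentGateway {
        boolean charge(double amount);
    }

    static class Order {
        private double total = 99.0; // private field, not reachable via public API

        boolean checkout(PaymentGateway gateway) {
            return gateway.charge(total);
        }
    }

    public static void main(String[] args) throws Exception {
        // Functional mocking: simulate a dependency that would otherwise be
        // hard or impossible to instantiate in a unit test.
        PaymentGateway mockGateway = mock(PaymentGateway.class);
        when(mockGateway.charge(anyDouble())).thenReturn(true);

        Order order = new Order();

        // Private API access: set or read a private field through reflection,
        // e.g., to bring the object into a specific state or to assert on it.
        Field totalField = Order.class.getDeclaredField("total");
        totalField.setAccessible(true);
        totalField.setDouble(order, 10.0);

        System.out.println("checkout succeeded: " + order.checkout(mockGateway));
        System.out.println("observed private total: " + totalField.getDouble(order));
    }
}
```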

Full Text
Peer Reviewed
Generating Unit Tests with Structured System Interactions
Havrikov, Nikolas; Gambi, Alessio; Zeller, Andreas et al

in IEEE/ACM International Workshop on Automation of Software Test (AST) (2017)

There is a large body of work in the literature about automatic unit test generation, and many successful results have been reported so far. However, current approaches target library classes, but not full applications. A major obstacle for testing full applications is that they interact with the environment. For example, they establish connections to remote servers. Thoroughly testing such applications requires tests that completely control the interactions between the application and its environment. Recent techniques based on mocking enable the generation of tests which include environment interactions; however, generating the right type of interactions is still an open problem. In this paper, we describe a novel approach which addresses this problem by enhancing search-based testing with complex test data generation. Experiments on an artificial system show that the proposed approach can generate effective unit tests. Compared with current techniques based on mocking, we generate more robust unit tests which achieve higher coverage and are, arguably, easier to read and understand.
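
As a hedged, invented illustration of why the structure of environment data matters: code that parses a response from a remote server is only exercised in depth when the simulated interaction returns well-formed, structured content rather than an arbitrary string.

```java
// Invented illustration: a unit under test that parses structured data coming
// from its environment (e.g., a remote server). A simulated interaction that
// returns an arbitrary string only covers the error path; returning well-formed,
// structured content is needed to reach the parsing logic.
public class StructuredInteractionExample {

    interface RemoteServer {
        String fetchUserRecord(); // stands in for a network interaction
    }

    static String extractUserName(RemoteServer server) {
        String response = server.fetchUserRecord();
        int start = response.indexOf("<name>");
        int end = response.indexOf("</name>");
        if (start < 0 || end < 0) {
            return "unknown"; // only branch reached with unstructured data
        }
        return response.substring(start + "<name>".length(), end); // needs structured input
    }

    public static void main(String[] args) {
        RemoteServer unstructured = () -> "qZx81!"; // random string: shallow coverage
        RemoteServer structured = () -> "<user><name>alice</name></user>";
        System.out.println(extractUserName(unstructured)); // unknown
        System.out.println(extractUserName(structured));   // alice
    }
}
```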

Full Text
Peer Reviewed
EVOSUITE at the SBST 2017 Tool Competition
Fraser, Gordon; Rojas, José Miguel; Campos, José et al

in IEEE/ACM International Workshop on Search-Based Software Testing (SBST) (2017)

EVOSUITE is a search-based tool that automatically generates unit tests for Java code. This paper summarises the results and experiences of EVOSUITE’s participation at the fifth unit testing competition at SBST 2017, where EVOSUITE achieved the highest overall score.

Full Text
Peer Reviewed
An Industrial Evaluation of Unit Test Generation: Finding Real Faults in a Financial Application
Almasi, Moein; Hemmati, Hadi; Fraser, Gordon et al

in ACM/IEEE International Conference on Software Engineering (ICSE) (2017)

Automated unit test generation has been extensively studied in the literature in recent years. Previous studies on open source systems have shown that test generation tools are quite effective at detecting faults, but how effective and applicable are they in an industrial application? In this paper, we investigate this question using a life insurance and pension products calculator engine owned by SEB Life & Pension Holding AB Riga Branch. To study fault-finding effectiveness, we extracted 25 real faults from the version history of this software project, and applied two up-to-date unit test generation tools for Java, EvoSuite and Randoop, which implement search-based and feedback-directed random test generation, respectively. Automatically generated test suites detected up to 56.40% (EvoSuite) and 38.00% (Randoop) of these faults. The analysis of our results demonstrates challenges that need to be addressed in order to improve fault detection in test generation tools. In particular, classification of the undetected faults shows that 97.62% of them depend on either “specific primitive values” (50.00%) or the construction of “complex state configuration of objects” (47.62%). To study applicability, we surveyed the developers of the application under test on their experience and opinions about the test generation tools and the generated test cases. This leads to insights on requirements for academic prototypes for successful technology transfer from academic research to industrial practice, such as a need to integrate with popular build tools, and to improve the readability of the generated tests.

Full Text
Peer Reviewed
RESTful API Automated Test Case Generation
Arcuri, Andrea UL

in IEEE International Conference on Software Quality, Reliability & Security (QRS) (2017)

Nowadays, web services play a major role in the development of enterprise applications. Many such applications are now developed using a service-oriented architecture (SOA), of which microservices are one of the most popular kinds. A RESTful web service will provide data via an API over the network using HTTP, possibly interacting with databases and other web services. Testing a RESTful API poses challenges, as inputs/outputs are sequences of HTTP requests/responses to a remote server. Many approaches in the literature do black-box testing, as the tested API is a remote service whose code is not available. In this paper, we consider testing from the point of view of the developers, who do have full access to the code that they are writing. Therefore, we propose a fully automated white-box testing approach, where test cases are automatically generated using an evolutionary algorithm. Tests are rewarded based on code coverage and fault finding metrics. We implemented our technique in a tool called EVOMASTER, which is open-source. Experiments on two open-source, yet non-trivial, RESTful services and an industrial one show that our novel technique automatically found 38 real bugs in those applications. However, the obtained code coverage is lower than that achieved by the manually written test suites already existing in those services. Research directions on how to further improve such an approach are therefore discussed.
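
To give a feel for what a test for a RESTful API can look like, the following is a hypothetical example written with the REST Assured library; the base URI, endpoint, payload, and expected status codes are invented for illustration and are not taken from the paper.

```java
import org.junit.Test;
import static io.restassured.RestAssured.given;

// Hypothetical example of the kind of test an automated generator might
// produce for a RESTful API, written with the REST Assured library.
// The base URI, endpoint, payload, and expected responses are invented.
public class GeneratedRestApiTest {

    private static final String BASE_URI = "http://localhost:8080";

    @Test
    public void test0_createAndFetchResource() {
        // POST a new resource and expect it to be created
        given().baseUri(BASE_URI)
               .contentType("application/json")
               .body("{\"name\":\"alice\"}")
        .when()
               .post("/api/users")
        .then()
               .statusCode(201);

        // GET the collection and check it is served successfully
        given().baseUri(BASE_URI)
        .when()
               .get("/api/users")
        .then()
               .statusCode(200)
               .contentType("application/json");
    }

    @Test
    public void test1_malformedPayloadIsRejected() {
        // A malformed body should be rejected, not cause a 500 server error
        given().baseUri(BASE_URI)
               .contentType("application/json")
               .body("{invalid-json")
        .when()
               .post("/api/users")
        .then()
               .statusCode(400);
    }
}
```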

Full Text
Peer Reviewed
Seeding Strategies in Search-Based Unit Test Generation
Rojas, Jose Miguel; Fraser, Gordon; Arcuri, Andrea UL

in Software Testing, Verification and Reliability (2016)

Search-based techniques have been applied successfully to the task of generating unit tests for object-oriented software. However, as for any meta-heuristic search, the efficiency heavily depends on many factors; seeding, which refers to the use of previous related knowledge to help solve the testing problem at hand, is one such factor that may strongly influence this efficiency. This paper investigates different seeding strategies for unit test generation, in particular seeding of numerical and string constants derived statically and dynamically, seeding of type information, and seeding of previously generated tests. To understand the effects of these seeding strategies, the results of a large empirical analysis carried out on a large collection of open source projects from the SF110 corpus and the Apache Commons repository are reported. These experiments show with strong statistical confidence that, even for a testing tool already able to achieve high coverage, the use of appropriate seeding strategies can further improve performance.
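
A small, invented example of why constant seeding helps: the string below is practically impossible to synthesise by random mutation, but a test generator that seeds constants extracted from the class under test can simply reuse it as an input.

```java
// Invented example illustrating constant seeding. The chance of randomly
// generating the exact token below is negligible, but a test generator that
// seeds string constants found in the class under test can reuse it directly.
public class LicenseChecker {

    private static final String ACTIVATION_TOKEN = "AX-2041-PRO";

    public boolean activate(String token) {
        if (ACTIVATION_TOKEN.equals(token)) {
            return true; // easily covered only when "AX-2041-PRO" is seeded as an input
        }
        return false;
    }
}
```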

Full Text
Peer Reviewed
A Detailed Investigation of the Effectiveness of Whole Test Suite Generation
Rojas, José Miguel; Vivanti, Mattia; Arcuri, Andrea UL et al

in Empirical Software Engineering (2016)

A common application of search-based software testing is to generate test cases for all goals defined by a coverage criterion (e.g., lines, branches, mutants). Rather than generating one test case at a time for each of these goals individually, whole test suite generation optimizes entire test suites towards satisfying all goals at the same time. There is evidence that the overall coverage achieved with this approach is superior to that of targeting individual coverage goals. Nevertheless, there remains some uncertainty on (a) whether the results generalize beyond branch coverage, (b) whether the whole test suite approach might be inferior to a more focused search for some particular coverage goals, and (c) whether generating whole test suites could be optimized by only targeting coverage goals not already covered. In this paper, we perform an in-depth analysis to study these questions. An empirical study on 100 Java classes using three different coverage criteria reveals that indeed there are some testing goals that are only covered by the traditional approach, although their number is only very small in comparison with those which are exclusively covered by the whole test suite approach. We find that keeping an archive of already covered goals with corresponding tests and focusing the search on uncovered goals overcomes this small drawback on larger classes, leading to an improved overall effectiveness of whole test suite generation.
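
As a rough, hedged sketch of the archive idea described here (not the tool's actual implementation): once a coverage goal is satisfied, the covering test is stored and the goal is removed from the set of search objectives, so the remaining budget is spent only on uncovered goals.

```java
import java.util.*;

// Rough sketch of a test archive for whole test suite generation:
// covered goals are taken out of the search and their covering tests
// are kept aside, to be assembled into the final suite at the end.
// This illustrates the concept, not EvoSuite's implementation.
public class TestArchiveSketch<T> {

    private final Map<String, T> coveredGoalToTest = new LinkedHashMap<>();
    private final Set<String> uncoveredGoals = new LinkedHashSet<>();

    public TestArchiveSketch(Collection<String> allGoals) {
        uncoveredGoals.addAll(allGoals);
    }

    // Called whenever a candidate test is evaluated: record newly covered goals.
    public void update(T test, Collection<String> goalsCoveredByTest) {
        for (String goal : goalsCoveredByTest) {
            if (uncoveredGoals.remove(goal)) {
                coveredGoalToTest.put(goal, test); // first covering test is archived
            }
        }
    }

    // The search only targets goals that are still uncovered.
    public Set<String> remainingObjectives() {
        return Collections.unmodifiableSet(uncoveredGoals);
    }

    // The final suite is built from the archived tests.
    public Collection<T> buildFinalSuite() {
        return new LinkedHashSet<>(coveredGoalToTest.values());
    }
}
```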

Full Text
Peer Reviewed
EvoSuite at the SBST 2016 Tool Competition
Fraser, Gordon; Arcuri, Andrea UL

in The 9th International Workshop on SEARCH-BASED SOFTWARE TESTING (SBST) (2016)

EVOSUITE is a search-based tool that automatically generates unit tests for Java code. This paper summarizes the results and experiences of EVOSUITE’s participation at the fourth unit testing competition at SBST 2016, where EVOSUITE achieved the highest overall score.
