References of "Kintis, Marinos 50024440"
     in
Bookmark and Share    
Full Text
Peer Reviewed
On the Evolution of Keyword-Driven Test Suites
Rwemalika, Renaud; Kintis, Marinos; Papadakis, Mike et al.

in 12th IEEE International Conference on Software Testing, Verification and Validation (2019)

Many companies rely on software testing to verify that their software products meet their requirements. However, test quality and, in particular, the quality of end-to-end testing is relatively hard to achieve. The problem becomes challenging when software evolves, as end-to-end test suites need to adapt and conform to the evolved software. Unfortunately, end-to-end tests are particularly fragile, as any change in the application interface, e.g., the application flow or the location or name of graphical user interface elements, necessitates a change in the tests. This paper presents an industrial case study on the evolution of keyword-driven test suites, a practice also known as Keyword-Driven Testing (KDT). Our aim is to demonstrate the problem of test maintenance, identify the benefits of Keyword-Driven Testing and, overall, improve the understanding of test code evolution (at the acceptance testing level). This information will support the development of automatic techniques, such as test refactoring and repair, and will motivate future research. To this end, we identify, collect and analyze test code changes across the evolution of industrial KDT test suites over a period of eight months. We show that the problem of test maintenance is largely due to test fragility (the most commonly performed changes address locator and synchronization issues) and test clones (over 30% of keywords are duplicated). We also show that the better test design of KDT test suites has the potential to drastically reduce (by approximately 70%) the number of test code changes required to support software evolution. To further validate our results, we interview testers from BGL BNP Paribas and report their perceptions of the advantages and challenges of keyword-driven testing.
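
The maintenance argument is easier to see in code. Below is a minimal Java sketch of the keyword mechanism the abstract describes, assuming a hypothetical KeywordLibrary class with made-up keyword names and locators (industrial KDT suites typically use a framework such as Robot Framework rather than hand-rolled Java): each keyword hides the fragile UI details behind one reusable implementation, so a locator change costs one edit instead of one edit per test case.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch of Keyword-Driven Testing (KDT): test cases are sequences
// of named keywords, and each keyword hides fragile details (locators,
// synchronisation) behind a single reusable implementation. All names here
// are illustrative, not taken from the paper's industrial test suites.
public class KeywordLibrary {

    private final Map<String, Consumer<List<String>>> keywords = Map.of(
            "Open Login Page", args -> navigate("https://example.test/login"),
            "Enter Credentials", args -> {
                type("id=username", args.get(0)); // locators live in ONE place:
                type("id=password", args.get(1)); // a UI change means one edit,
            },                                    // not one edit per test case
            "Submit Form", args -> click("css=button[type=submit]"));

    public void run(String keyword, List<String> args) {
        keywords.get(keyword).accept(args);
    }

    // Stubs standing in for a real driver such as Selenium WebDriver.
    private void navigate(String url) { /* driver.get(url) */ }
    private void type(String locator, String text) { /* find + sendKeys */ }
    private void click(String locator) { /* find + click */ }

    public static void main(String[] args) {
        KeywordLibrary lib = new KeywordLibrary();
        // A test case is just data: a readable sequence of keyword calls.
        lib.run("Open Login Page", List.of());
        lib.run("Enter Credentials", List.of("alice", "s3cret"));
        lib.run("Submit Form", List.of());
    }
}
```

The flip side, visible even in this toy, is the clone risk the study quantifies: nothing prevents a second, near-identical "Enter Credentials" keyword from creeping into the library.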

Can we automate away the main challenges of end-to-end testing?
Rwemalika, Renaud; Kintis, Marinos; Papadakis, Mike et al.

Scientific Conference (2018, December 11)

Agile methodologies enable companies to drastically increase their software release pace and reduce time-to-market. In a rapidly changing environment, testing becomes a cornerstone of the software development process, guarding the system's code base against the insertion of faults. To cater for this, many companies are migrating their manual end-to-end tests to automated ones. This migration introduces several challenges for practitioners. These challenges relate to difficulties in the creation of automated tests, their maintenance, and the evolution of the test code base. In this position paper, we discuss our preliminary results on such challenges and present two potential solutions to these problems, focusing on keyword-driven end-to-end tests. Our solutions leverage existing software artifacts, namely the test suite and an automatically created model of the system under test, to support the evolution of keyword-driven test suites.
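
To make the artifact-based direction concrete, here is a minimal sketch, under assumed names, of what an automatically created model of the system under test could look like: a directed graph of UI-state transitions harvested from passing runs, used to flag test steps the model has never observed. The SystemModel class and its API are hypothetical illustrations, not the paper's actual prototype.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the system under test: a directed graph of UI states whose
// edges are transitions observed during (passing) test executions.
public class SystemModel {
    private final Map<String, Set<String>> transitions = new HashMap<>();

    // Record an observed transition, e.g. harvested from a passing test run.
    public void addTransition(String fromState, String toState) {
        transitions.computeIfAbsent(fromState, k -> new HashSet<>()).add(toState);
    }

    // A test step is suspicious if the model has never seen its transition;
    // such steps become candidates for automated repair suggestions.
    public boolean isKnownTransition(String fromState, String toState) {
        return transitions.getOrDefault(fromState, Set.of()).contains(toState);
    }

    public static void main(String[] args) {
        SystemModel model = new SystemModel();
        model.addTransition("LoginPage", "Dashboard");
        model.addTransition("Dashboard", "Settings");

        // A test that jumps straight from LoginPage to Settings contradicts
        // the model, so the step is flagged instead of failing silently.
        System.out.println(model.isKnownTransition("LoginPage", "Settings")); // false
    }
}
```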

Are mutants really natural? A study on how “naturalness” helps mutant selection
Jimenez, Matthieu; Titcheu Chekam, Thierry; Cordy, Maxime et al.

in Proceedings of the 12th International Symposium on Empirical Software Engineering and Measurement (ESEM'18) (2018, October 11)

Background: Code is repetitive and predictable in a way that is similar to natural language. This means that code is "natural", and this "naturalness" can be captured by natural language modelling techniques. Such models promise to capture program semantics and to identify source code parts that "smell", i.e., parts that are strange, badly written, and generally error-prone (likely to be defective). Aims: We investigate the use of natural language modelling techniques in mutation testing (a testing technique that uses artificial faults). We thus seek to identify how well artificial faults simulate real ones and, ultimately, to understand how natural artificial faults can be. Our intuition is that natural mutants, i.e., mutants that are predictable (follow the implicit coding norms of developers), are semantically useful and generally valuable to testers. We also expect mutants located on unnatural code locations (which are generally linked with error-proneness) to be of higher value than those located on natural code locations. Method: Based on this idea, we propose mutant selection strategies that rank mutants according to a) their naturalness (the naturalness of the mutated code), b) the naturalness of their locations (the naturalness of the original program statements) and c) their impact on the naturalness of the code they apply to (the naturalness difference between original and mutated statements). We empirically evaluate these issues on a benchmark set of 5 open-source projects, involving more than 100k mutants and 230 real faults. Based on the fault set, we estimate the utility (i.e., the capability to reveal faults) of mutants selected on the basis of their naturalness and compare it against the utility of randomly selected mutants. Results: Our analysis shows that there is no link between naturalness and the fault revelation utility of mutants. We also demonstrate that naturalness-based mutant selection performs similarly to (in fact slightly worse than) random mutant selection. Conclusions: Our findings are negative, but we consider them interesting as they confute a strong intuition: fault revelation is independent of the mutants' naturalness.
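
As a rough illustration of the naturalness measure behind the ranking strategies, the sketch below trains a toy token-level bigram model and scores a code fragment by its cross-entropy (lower means more natural). The NaturalnessScorer class, the add-one smoothing, and the tiny corpus are simplifications for illustration; published naturalness studies use higher-order n-gram or cache models trained on large corpora.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy bigram language model over code tokens. "Naturalness" is the average
// negative log-probability (cross-entropy) a model assigns to a fragment:
// low entropy = predictable/natural, high entropy = surprising/unnatural.
public class NaturalnessScorer {
    private final Map<String, Map<String, Integer>> bigrams = new HashMap<>();
    private final Map<String, Integer> unigrams = new HashMap<>();

    public void train(List<String> tokens) {
        for (int i = 0; i + 1 < tokens.size(); i++) {
            unigrams.merge(tokens.get(i), 1, Integer::sum);
            bigrams.computeIfAbsent(tokens.get(i), k -> new HashMap<>())
                   .merge(tokens.get(i + 1), 1, Integer::sum);
        }
    }

    // Cross-entropy in bits per token, with add-one smoothing.
    public double crossEntropy(List<String> tokens) {
        double sum = 0;
        int n = 0;
        for (int i = 0; i + 1 < tokens.size(); i++, n++) {
            int pair = bigrams.getOrDefault(tokens.get(i), Map.of())
                              .getOrDefault(tokens.get(i + 1), 0);
            int ctx = unigrams.getOrDefault(tokens.get(i), 0);
            double p = (pair + 1.0) / (ctx + unigrams.size() + 1.0);
            sum += -Math.log(p) / Math.log(2);
        }
        return n == 0 ? 0 : sum / n;
    }

    public static void main(String[] args) {
        NaturalnessScorer scorer = new NaturalnessScorer();
        scorer.train(List.of("if", "(", "x", ">", "0", ")", "return", "x", ";"));
        // Score a mutated statement (">" flipped to ">="); strategy (c) in
        // the abstract would rank the mutant by the entropy difference
        // between this mutated statement and the original one.
        System.out.println(scorer.crossEntropy(
                List.of("if", "(", "x", ">=", "0", ")", "return", "x", ";")));
    }
}
```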

How effective are mutation testing tools? An empirical analysis of Java mutation testing tools with manual analysis and real faults
Kintis, Marinos; Papadakis, Mike; Papadopoulos, Andreas et al.

in Empirical Software Engineering (2018)

Mutation analysis is a well-studied, fault-based testing technique. It requires testers to design tests based on a set of artificial defects; the defects support the testing activity by measuring the ratio of defects that the candidate tests reveal. Unfortunately, applying mutation to real-world programs requires automated tools due to the vast number of defects involved, and in that case the effectiveness of the method strongly depends on the peculiarities of the employed tools: their implementation inadequacies can lead to inaccurate results. To deal with this issue, we cross-evaluate four mutation testing tools for Java, namely PIT, muJava, Major and the research version of PIT, PITRV, with respect to their fault-detection capabilities. We investigate the strengths of the tools based on: a) a set of real faults and b) manual analysis of the mutants they introduce. We find that there are large differences between the tools' effectiveness and demonstrate that no tool is able to subsume the others. We also provide results indicating the application cost of the method. Overall, we find that PITRV achieves the best results; in particular, it finds 6% more faults than the other tools combined.
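
For readers new to the technique, the following sketch shows in plain Java the kill-check that tools such as PIT, muJava and Major automate at scale; the max method and the single relational-operator mutant are illustrative, not output from any of the compared tools.

```java
// Minimal sketch of the core mutation-analysis loop: generate small
// syntactic variants ("mutants") of the code under test and count how many
// of them the test suite distinguishes from the original ("kills").
public class MutationExample {

    // Original code under test.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    // One generated mutant: relational operator ">" replaced by "<".
    static int maxMutant(int a, int b) {
        return a < b ? a : b;
    }

    public static void main(String[] args) {
        // A test supplies an input and an expected value; the mutant is
        // "killed" if its output disagrees with the expectation.
        int expected = 5;                               // from the specification
        boolean killed = maxMutant(5, 3) != expected;   // mutant returns 3
        System.out.println("mutant killed: " + killed); // true

        // The mutation score is killed / generated mutants. The paper's
        // point is that different tools generate different mutant sets,
        // so scores from different tools are not directly comparable.
    }
}
```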

Detecting Trivial Mutant Equivalences via Compiler Optimisations
Kintis, Marinos; Papadakis, Mike; Jia, Yue et al.

in IEEE Transactions on Software Engineering (2017)

Mutation Testing Advances: An Analysis and Survey
Papadakis, Mike; Kintis, Marinos; Zhang, Jie et al.

in Advances in Computers (2017)

On the Naturalness of Mutants
Jimenez, Matthieu; Cordy, Maxime; Kintis, Marinos et al.

E-print/Working paper (2017)

Assessing and Improving the Mutation Testing Practice of PIT
Laurent, Thomas; Papadakis, Mike; Kintis, Marinos et al.

in 10th IEEE International Conference on Software Testing, Verification and Validation (2017)

Analysing and Comparing the Effectiveness of Mutation Testing Tools: A Manual Study
Kintis, Marinos; Papadakis, Mike; Papadopoulos, Andreas et al.

in International Working Conference on Source Code Analysis and Manipulation (SCAM'16) (2016)
