Results 41-60 of 97.
Full Text
Peer Reviewed
TUNA: TUning Naturalness-based Analysis
Jimenez, Matthieu; Cordy, Maxime; Le Traon, Yves et al

in 34th IEEE International Conference on Software Maintenance and Evolution, Madrid, Spain, 26-28 September 2018 (2018, September 26)


Natural language processing techniques, in particular n-gram models, have been applied successfully to facilitate a number of software engineering tasks. However, in our related ICSME ’18 paper, we have shown that the conclusions of a study can drastically change with respect to how the code is tokenized and how the used n-gram model is parameterized. These choices are thus of utmost importance, and one must carefully make them. To show this and allow the community to benefit from our work, we have developed TUNA (TUning Naturalness-based Analysis), a Java software artifact to perform naturalness-based analyses of source code. To the best of our knowledge, TUNA is the first open-source, end-to-end toolchain to carry out source code analyses based on naturalness.

Detailed reference viewed: 167 (11 UL)
Full Text
Peer Reviewed
On the impact of tokenizer and parameters on N-gram based Code Analysis
Jimenez, Matthieu; Cordy, Maxime; Le Traon, Yves et al

Scientific Conference (2018, September)


Recent research shows that language models, such as n-gram models, are useful for a wide variety of software engineering tasks, e.g., code completion, bug identification, code summarisation, etc. However, such models require the appropriate setting of numerous parameters. Moreover, the different ways one can read code essentially yield different models (based on the different sequences of tokens). In this paper, we focus on n-gram models and evaluate how the choice of tokenizer, smoothing technique, unknown threshold and n values impacts the predictive ability of these models. Thus, we compare the use of multiple tokenizers and sets of different parameters (smoothing, unknown threshold and n values) with the aim of identifying the most appropriate combinations. Our results show that the Modified Kneser-Ney smoothing technique performs best, while the best n values depend on the choice of tokenizer, with values of 4 or 5 offering a good trade-off between entropy and computation time. Interestingly, we find that tokenizers treating the code as simple text are the most robust ones. Finally, we demonstrate that the differences between the tokenizers are of practical importance and can change the conclusions of a given experiment.
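As a rough illustration of the kind of analysis being tuned here, the sketch below trains a token-level n-gram model and computes the cross-entropy of a test token sequence. It is a minimal sketch under stated assumptions: the token data is made up, a whitespace split stands in for a real code tokenizer, and add-one (Laplace) smoothing stands in for the Modified Kneser-Ney smoothing the paper finds to perform best.

```python
import math
from collections import defaultdict

def train_ngram(tokens, n):
    """Count n-grams and their (n-1)-gram contexts in a token stream."""
    counts, context_counts = defaultdict(int), defaultdict(int)
    padded = ["<s>"] * (n - 1) + tokens
    for i in range(len(tokens)):
        gram = tuple(padded[i:i + n])
        counts[gram] += 1
        context_counts[gram[:-1]] += 1
    return counts, context_counts

def cross_entropy(tokens, counts, context_counts, n, vocab_size):
    """Average negative log2 probability per token, with add-one smoothing."""
    padded = ["<s>"] * (n - 1) + tokens
    log_prob = 0.0
    for i in range(len(tokens)):
        gram = tuple(padded[i:i + n])
        p = (counts[gram] + 1) / (context_counts[gram[:-1]] + vocab_size)
        log_prob += math.log2(p)
    return -log_prob / len(tokens)

# Hypothetical "corpus": whitespace-tokenized code snippets.
train = "if x > 0 : return x else : return - x".split()
test = "if y > 0 : return y".split()
vocab = set(train) | set(test) | {"<s>"}
counts, ctx = train_ngram(train, 3)
print(round(cross_entropy(test, counts, ctx, 3, len(vocab)), 2))
```

Lower entropy means the test sequence looks more "natural" with respect to the training corpus; the tokenizer, smoothing and n are exactly the knobs whose impact the paper measures.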

Detailed reference viewed: 220 (13 UL)
Full Text
Peer Reviewed
Time to Clean Your Test Objectives
Marcozzi, Michaël; Bardin, Sébastien; Kosmatov, Nikolai et al

in 40th International Conference on Software Engineering, 27 May - 3 June 2018, Gothenburg, Sweden (2018, May)

Detailed reference viewed: 128 (2 UL)
Full Text
Peer Reviewed
Predicting the Fault Revelation Utility of Mutants
Titcheu Chekam, Thierry; Papadakis, Mike; Bissyande, Tegawendé François D Assise et al

in 40th International Conference on Software Engineering, Gothenburg, Sweden, 27 May - 3 June 2018 (2018)

Detailed reference viewed: 262 (22 UL)
Full Text
Peer Reviewed
Mutant Quality Indicators
Papadakis, Mike; Titcheu Chekam, Thierry; Le Traon, Yves

in 13th International Workshop on Mutation Analysis (MUTATION'18) (2018)

Detailed reference viewed: 274 (20 UL)
Full Text
Peer Reviewed
Enabling the Continuous Analysis of Security Vulnerabilities with VulData7
Jimenez, Matthieu; Le Traon, Yves; Papadakis, Mike

in IEEE International Working Conference on Source Code Analysis and Manipulation (2018)

Detailed reference viewed: 292 (34 UL)
Full Text
Peer Reviewed
A Hybrid Algorithm for Multi-objective Test Case Selection in Regression Testing
Delavernhe, Florian; Saber, Takfarinas; Papadakis, Mike et al

in IEEE Congress on Evolutionary Computation (2018)

Detailed reference viewed: 69 (1 UL)
Full Text
Peer Reviewed
Are Mutation Scores Correlated with Real Fault Detection? A Large Scale Empirical Study on the Relationship Between Mutants and Real Faults
Papadakis, Mike; Shin, Donghwan; Yoo, Shin et al

in 40th International Conference on Software Engineering, 27 May - 3 June 2018, Gothenburg, Sweden (2018)

Detailed reference viewed: 217 (8 UL)
Full Text
Peer Reviewed
Feature location benchmark for extractive software product line adoption research using realistic and synthetic Eclipse variants
Martinez, Jabier; Ziadi, Tewfik; Papadakis, Mike et al

in Information and Software Technology (2018)

Detailed reference viewed: 138 (5 UL)
Full Text
Peer Reviewed
How effective are mutation testing tools? An empirical analysis of Java mutation testing tools with manual analysis and real faults
Kintis, Marinos; Papadakis, Mike; Papadopoulos, Andreas et al

in Empirical Software Engineering (2018)


Mutation analysis is a well-studied, fault-based testing technique. It requires testers to design tests based on a set of artificial defects. The defects help in performing testing activities by measuring the ratio of defects that are revealed by the candidate tests. Unfortunately, applying mutation to real-world programs requires automated tools due to the vast number of defects involved. In such a case, the effectiveness of the method strongly depends on the peculiarities of the employed tools. Thus, when using automated tools, their implementation inadequacies can lead to inaccurate results. To deal with this issue, we cross-evaluate four mutation testing tools for Java, namely PIT, muJava, Major and the research version of PIT, PITRV, with respect to their fault-detection capabilities. We investigate the strengths of the tools based on: a) a set of real faults and b) manual analysis of the mutants they introduce. We find that there are large differences between the tools’ effectiveness and demonstrate that no tool is able to subsume the others. We also provide results indicating the application cost of the method. Overall, we find that PITRV achieves the best results. In particular, PITRV outperforms the other tools by finding 6% more faults than the other tools combined.
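The two core measurements behind such a comparison reduce to simple ratios. The sketch below is illustrative only: the kill-matrix layout and the fault/test identifiers are hypothetical, not the paper's actual data format.

```python
def mutation_score(kill_matrix):
    """Fraction of mutants killed by at least one test.
    kill_matrix: dict mapping mutant id -> set of test ids that kill it."""
    killed = sum(1 for tests in kill_matrix.values() if tests)
    return killed / len(kill_matrix)

def fault_detection(found_faults, all_faults):
    """Fraction of real faults revealed by the test suite."""
    return len(found_faults & all_faults) / len(all_faults)

# Hypothetical kill matrix for one tool over four mutants:
pit = {"m1": {"t1"}, "m2": set(), "m3": {"t2", "t3"}, "m4": {"t1"}}
print(mutation_score(pit))  # 3 of 4 mutants killed -> 0.75
print(fault_detection({"f1", "f3"}, {"f1", "f2", "f3", "f4"}))  # -> 0.5
```

Comparing tools then amounts to computing these ratios per tool on a shared set of real faults, which is what the cross-evaluation above does at scale.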

Detailed reference viewed: 209 (9 UL)
Full Text
Peer Reviewed
Model-based mutant equivalence detection using automata language equivalence and simulations
Devroey, Xavier; Perrouin, Gilles; Papadakis, Mike et al

in Journal of Systems and Software (2018)

Detailed reference viewed: 116 (3 UL)
Full Text
Peer Reviewed
An Empirical Study on Mutation, Statement and Branch Coverage Fault Revelation that Avoids the Unreliable Clean Program Assumption
Titcheu Chekam, Thierry; Papadakis, Mike; Le Traon, Yves et al

in International Conference on Software Engineering (ICSE 2017) (2017, May 28)


Many studies suggest using coverage concepts, such as branch coverage, as the starting point of testing, while others treat them as the most prominent test quality indicator. Yet the relationship between coverage and fault revelation remains unknown, yielding uncertainty and controversy. Most previous studies rely on the Clean Program Assumption: that a test suite will obtain similar coverage for both faulty and fixed (‘clean’) program versions. This assumption may appear intuitive, especially for bugs that denote small semantic deviations. However, we present evidence that the Clean Program Assumption does not always hold, thereby raising a critical threat to the validity of previous results. We then conducted a study using a robust experimental methodology that avoids this threat to validity, from which our primary finding is that strong mutation testing has the highest fault revelation of four widely-used criteria. Our findings also revealed that fault revelation starts to increase significantly only once relatively high levels of coverage are attained.
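The Clean Program Assumption can be checked mechanically: run the same suite against the faulty and the fixed version and compare the coverage each attains. A minimal sketch under assumptions of my own: the toy "programs" below are hypothetical lambdas returning executed line ids, standing in for a real coverage tool such as gcov or JaCoCo.

```python
def run_with_coverage(program, suite):
    """Stub: union of line ids the suite executes in the given program."""
    return {line for test in suite for line in program(test)}

def clean_program_assumption_holds(faulty, fixed, suite, tolerance=0.0):
    """True if the two versions' coverage differs by at most `tolerance`
    (as a fraction of the covered-line union)."""
    cov_faulty = run_with_coverage(faulty, suite)
    cov_fixed = run_with_coverage(fixed, suite)
    diff = len(cov_faulty ^ cov_fixed)
    union = len(cov_faulty | cov_fixed) or 1
    return diff / union <= tolerance

# Toy versions: the bug causes line 5 never to be reached.
faulty = lambda t: {1, 2, 4} if t > 0 else {1, 3}
fixed = lambda t: {1, 2, 4, 5} if t > 0 else {1, 3}
suite = [1, -1]
print(clean_program_assumption_holds(faulty, fixed, suite))  # -> False
```

Even this small a semantic deviation shifts coverage, which is exactly why the study avoids relying on the assumption.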

Detailed reference viewed: 419 (43 UL)
Full Text
Peer Reviewed
Detecting Trivial Mutant Equivalences via Compiler Optimisations
Kintis, Marinos; Papadakis, Mike; Jia, Yue et al

in IEEE Transactions on Software Engineering (2017)

Detailed reference viewed: 202 (8 UL)
Full Text
On the Naturalness of Mutants
Jimenez, Matthieu; Cordy, Maxime; Kintis, Marinos et al

E-print/Working paper (2017)

Detailed reference viewed: 157 (15 UL)
Full Text
Peer Reviewed
Static Analysis of Android Apps: A Systematic Literature Review
Li, Li; Bissyande, Tegawendé François D Assise; Papadakis, Mike et al

in Information and Software Technology (2017)


Context: Static analysis exploits techniques that parse program source code or bytecode, often traversing program paths to check some program properties. Static analysis approaches have been proposed for different tasks, including for assessing the security of Android apps, detecting app clones, automating test case generation, or for uncovering non-functional issues related to performance or energy. The literature has thus proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analysers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyse Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put, and enumerate the key aspects where future research is still needed. Method: We have performed a systematic literature review (SLR) which involves studying 124 research papers published in software engineering, programming languages and security venues in the last 5 years (January 2011 - December 2015). This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, Android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination has led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artefacts publicly available. Conclusion: The research community is still facing a number of challenges for building approaches that are aware altogether of implicit flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analysers.

Detailed reference viewed: 370 (13 UL)
Full Text
Peer Reviewed
Towards Security-aware Mutation Testing
Loise, Thomas; Devroey, Xavier; Perrouin, Gilles et al

in The 12th International Workshop on Mutation Analysis (Mutation 2017) (2017)

Detailed reference viewed: 167 (3 UL)
Full Text
Peer Reviewed
Assessing and Improving the Mutation Testing Practice of PIT
Laurent, Thomas; Papadakis, Mike; Kintis, Marinos et al

in 10th IEEE International Conference on Software Testing, Verification and Validation (2017)

Detailed reference viewed: 187 (9 UL)
Full Text
Peer Reviewed
Automata Language Equivalence vs. Simulations for Model-based Mutant Equivalence: An Empirical Evaluation
Devroey, Xavier; Perrouin, Gilles; Papadakis, Mike et al

in 10th IEEE International Conference on Software Testing, Verification and Validation (ICST 2017) (2017)

Detailed reference viewed: 125 (5 UL)
Full Text
Peer Reviewed
An Empirical Analysis of Vulnerabilities in OpenSSL and the Linux Kernel
Jimenez, Matthieu; Papadakis, Mike; Le Traon, Yves

in 2016 Asia-Pacific Software Engineering Conference (APSEC) (2016, December)


Vulnerabilities are one of the main concerns faced by practitioners when working with security-critical applications. Unfortunately, developers and security teams, even experienced ones, fail to identify many of them, with severe consequences. Vulnerabilities are hard to discover since they appear in various forms, are caused by many different issues, and their identification requires an attacker’s mindset. In this paper, we aim at increasing the understanding of vulnerabilities by investigating their characteristics on two major open-source software systems, i.e., the Linux kernel and OpenSSL. In particular, we seek to analyse and build a profile for vulnerable code, which can ultimately help researchers in building automated approaches like vulnerability prediction models. Thus, we examine the location, criticality and category of vulnerable code along with its relation with software metrics. To do so, we collect more than 2,200 vulnerable files accounting for 863 vulnerabilities and compute more than 35 software metrics. Our results indicate that while 9 Common Weakness Enumeration (CWE) types of vulnerabilities are prevalent, only 3 of them are critical in OpenSSL and 2 of them in the Linux kernel. They also indicate that different types of vulnerabilities have different characteristics, i.e., metric profiles, and that vulnerabilities of the same type have different profiles in the two projects we examined. We also found that the file structure of the projects can provide useful information related to the vulnerabilities. Overall, our results demonstrate the need for making project-specific approaches that focus on specific types of vulnerabilities.
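Building a per-category "metric profile" of the kind described above amounts to grouping vulnerable files by CWE type and averaging each metric within the group. A minimal sketch over hypothetical records; the metric names and values are illustrative, not taken from the study's dataset:

```python
from collections import defaultdict

def metric_profiles(records):
    """Average each software metric per CWE category.
    records: list of dicts with a 'cwe' key plus numeric metric keys."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for rec in records:
        cwe = rec["cwe"]
        counts[cwe] += 1
        for metric, value in rec.items():
            if metric != "cwe":
                sums[cwe][metric] += value
    return {cwe: {m: s / counts[cwe] for m, s in metrics.items()}
            for cwe, metrics in sums.items()}

# Hypothetical vulnerable-file records:
records = [
    {"cwe": "CWE-119", "loc": 400, "complexity": 30},
    {"cwe": "CWE-119", "loc": 600, "complexity": 50},
    {"cwe": "CWE-476", "loc": 200, "complexity": 10},
]
print(metric_profiles(records))
```

Divergent profiles across categories (or across projects) are the signal behind the paper's call for project-specific, type-specific approaches.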

Detailed reference viewed: 301 (17 UL)
Full Text
Peer Reviewed
Vulnerability Prediction Models: A case study on the Linux Kernel
Jimenez, Matthieu; Papadakis, Mike; Le Traon, Yves

in 16th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2016, Raleigh, US, October 2-3, 2016 (2016, October)


To assist the vulnerability identification process, researchers have proposed prediction models that highlight (for inspection) the parts of a system most likely to be vulnerable. In this paper we aim at making a reliable replication and comparison of the main vulnerability prediction models. Thus, we seek to determine their effectiveness, i.e., their ability to distinguish between vulnerable and non-vulnerable components, in the context of the Linux Kernel, under different scenarios. To achieve the above-mentioned aims, we mined vulnerabilities reported in the National Vulnerability Database and created a large dataset with all vulnerable components of Linux from 2005 to 2016. Based on this, we then built and evaluated the prediction models. We observe that an approach based on the header files included and on function calls performs best when aiming at future vulnerabilities, while text mining is the best technique when aiming at random instances. We also found that models based on code metrics perform poorly. We show that in the context of the Linux kernel, vulnerability prediction models can be superior to random selection and relatively precise. Thus, we conclude that practitioners have a valuable tool for prioritizing their security inspection efforts.
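The text-mining approach mentioned above can be caricatured with a tiny bag-of-words Naive Bayes classifier over file tokens. This is a hedged sketch with made-up token data and a hand-rolled classifier, not the paper's actual pipeline or feature set:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (token_list, label) pairs, label 1 = vulnerable.
    Returns per-label token counts and label counts."""
    token_counts = {0: Counter(), 1: Counter()}
    label_counts = Counter()
    for tokens, label in docs:
        token_counts[label].update(tokens)
        label_counts[label] += 1
    return token_counts, label_counts

def predict_vulnerable(tokens, token_counts, label_counts):
    """Laplace-smoothed Naive Bayes: True if 'vulnerable' scores higher."""
    vocab = set(token_counts[0]) | set(token_counts[1])
    scores = {}
    for label in (0, 1):
        total = sum(token_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for tok in tokens:
            score += math.log((token_counts[label][tok] + 1)
                              / (total + len(vocab)))
        scores[label] = score
    return scores[1] > scores[0]

# Hypothetical tokenized files labelled vulnerable (1) or not (0):
docs = [
    (["memcpy", "strlen", "buf"], 1),
    (["memcpy", "buf", "len"], 1),
    (["printf", "log", "msg"], 0),
    (["log", "msg", "fmt"], 0),
]
tc, lc = train_nb(docs)
print(predict_vulnerable(["memcpy", "buf"], tc, lc))  # -> True
```

A real replication would swap the toy tokens for actual lexed source files (or, for the include-based approach, the set of headers and function calls) and evaluate on held-out releases, as the paper does for Linux 2005-2016.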

Detailed reference viewed: 453 (32 UL)