References of "Information and Software Technology"
Crex: Predicting patch correctness in automated repair of C programs through transfer learning of execution semantics
Yan, Dapeng; Liu, Kui; Niu, Yuqing et al

in Information and Software Technology (2022), 152

Test Suite Generation with the Many Independent Objective (MIO) Algorithm
Arcuri, Andrea UL

in Information and Software Technology (2018), 104(December), 195-206

Context: Automatically generating test suites is intrinsically a multi-objective problem, as any of the testing targets (e.g., statements to execute or mutants to kill) is an objective on its own. Test suite generation has peculiarities that are quite different from other more regular optimisation problems. For example, given an existing test suite, one can add more tests to cover the remaining objectives. One would like the smallest number of small tests to cover as many objectives as possible, but that is a secondary goal compared to covering those targets in the first place. Furthermore, the number of objectives in software testing can quickly become unmanageable, in the order of (tens/hundreds of) thousands, especially for system testing of industrial-size systems. Objective: To overcome these issues, different techniques have been proposed, like for example the Whole Test Suite (WTS) approach and the Many-Objective Sorting Algorithm (MOSA). However, those techniques might not scale well to very large numbers of objectives and limited search budgets (a typical case in system testing). In this paper, we propose a novel algorithm, called the Many Independent Objective (MIO) algorithm. This algorithm is designed and tailored based on the specific properties of test suite generation. Method: An empirical study was carried out for test suite generation on a series of artificial examples and seven RESTful API web services. The EvoMaster system test generation tool was used, where MIO, MOSA, WTS and random search were compared. Results: The presented MIO algorithm had the best overall performance, but was not the best on all problems. Conclusion: The novel MIO algorithm is a step forward in the automation of test suite generation for system testing. However, there are still properties of system testing that can be exploited to achieve even better results.
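
The abstract above describes the core idea of MIO: treating each testing target as an independent objective with its own small population. A minimal Python sketch of that idea follows; it is a simplified reading of the abstract, not the EvoMaster implementation, and the random_test, mutate and fitness helpers are hypothetical placeholders supplied by the caller (the real algorithm also shrinks populations and the random-sampling probability over time, which is omitted here).

```python
import random

def mio_search(targets, random_test, mutate, fitness, budget,
               pop_size=10, p_random=0.5):
    """Keep one bounded population per testing target; sample either a fresh
    random test or a mutant of a test kept for a still-uncovered target."""
    populations = {t: [] for t in targets}   # target -> [(heuristic, test), ...]
    covered = {}                             # target -> test that fully covers it

    for _ in range(budget):
        open_targets = [t for t in targets if t not in covered]
        if not open_targets:
            break
        seedable = [t for t in open_targets if populations[t]]
        if not seedable or random.random() < p_random:
            test = random_test()
        else:
            t = random.choice(seedable)
            test = mutate(random.choice(populations[t])[1])

        # Each non-covered target judges the test independently.
        for t in open_targets:
            h = fitness(test, t)             # heuristic value in [0, 1]
            if h >= 1.0:
                covered[t] = test            # target reached: freeze it
                populations[t] = []
            elif h > 0.0:
                populations[t].append((h, test))
                populations[t].sort(key=lambda p: p[0], reverse=True)
                del populations[t][pop_size:]   # keep the population bounded

    return covered
```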

An Empirical Evaluation of Evolutionary Algorithms for Unit Test Suite Generation
Campos, Jose; Ge, Yan; Albunian, Nasser et al

in Information and Software Technology (2018), 104(December), 207-235

Context: Evolutionary algorithms have been shown to be effective at generating unit test suites optimised for code coverage. While many specific aspects of these algorithms have been evaluated in detail (e.g., test length and different kinds of techniques aimed at improving performance, like seeding), the influence of the choice of evolutionary algorithm has to date seen less attention in the literature. Objective: Since it is theoretically impossible to design an algorithm that is the best on all possible problems, a common approach in software engineering problems is to first try the most common algorithm, a Genetic Algorithm, and only afterwards try to refine it or compare it with other algorithms to see if any of them is more suited for the addressed problem. The objective of this paper is to perform this analysis, in order to shed light on the influence of the search algorithm applied for unit test generation. Method: We empirically evaluate thirteen different evolutionary algorithms and two random approaches on a selection of non-trivial open source classes. All algorithms are implemented in the EvoSuite test generation tool, which includes recent optimisations such as the use of an archive during the search and optimisation for multiple coverage criteria. Results: Our study shows that the use of a test archive makes evolutionary algorithms clearly better than random testing, and it confirms that the DynaMOSA many-objective search algorithm is the most effective algorithm for unit test generation. Conclusions: Our results show that the choice of algorithm can have a substantial influence on the performance of whole test suite optimisation. Although we can make a recommendation on which algorithm to use in practice, no algorithm is clearly superior in all cases, suggesting future work on improved search algorithms for unit test generation.
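
One concrete mechanism highlighted in this abstract is the test archive used during the search. The sketch below illustrates that idea only, under the simplifying assumptions that tests are hashable values and that each evaluated test reports the set of coverage goals it reached; it is not EvoSuite's implementation.

```python
class TestArchive:
    """Store the first test seen to cover each coverage goal, so the final
    suite is assembled from the archive rather than from the last population."""

    def __init__(self, goals):
        self.uncovered = set(goals)
        self.covered = {}                    # goal -> archived test

    def register(self, test, covered_goals):
        # covered_goals: set of goals exercised by this test during evaluation.
        for goal in covered_goals & self.uncovered:
            self.covered[goal] = test
            self.uncovered.discard(goal)

    def final_suite(self):
        # De-duplicate archived tests while preserving insertion order.
        return list(dict.fromkeys(self.covered.values()))
```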

Feature location benchmark for extractive software product line adoption research using realistic and synthetic Eclipse variants
Martinez, Jabier; Ziadi, Tewfik; Papadakis, Mike UL et al

in Information and Software Technology (2018)

Modeling Security and Privacy Requirements: a Use Case-Driven Approach
Mai, Xuan Phu UL; Göknil, Arda UL; Shar, Lwin Khin et al

in Information and Software Technology (2018), 100

Context: Modern internet-based services, ranging from food-delivery to home-caring, leverage the availability of multiple programmable devices to provide handy services tailored to end-user needs. These services are delivered through an ecosystem of device-specific software components and interfaces (e.g., mobile and wearable device applications). Since they often handle private information (e.g., location and health status), their security and privacy requirements are of crucial importance. Defining and analyzing those requirements is a significant challenge due to the multiple types of software components and devices integrated into software ecosystems. Each software component presents peculiarities that often depend on the context and the devices the component interacts with, and that must be considered when dealing with security and privacy requirements. Objective: In this paper, we propose, apply, and assess a modeling method that supports the specification of security and privacy requirements in a structured and analyzable form. Our motivation is that, in many contexts, use cases are common practice for the elicitation of functional requirements and should also be adapted for describing security requirements. Method: We integrate an existing approach for modeling security and privacy requirements in terms of security threats, their mitigations, and their relations to use cases in a misuse case diagram. We introduce new security-related templates, i.e., a mitigation template and a misuse case template for specifying mitigation schemes and misuse case specifications in a structured and analyzable manner. Natural language processing can then be used to automatically report inconsistencies among artifacts and between the templates and specifications. Results: We successfully applied our approach to an industrial healthcare project and report lessons learned and results from structured interviews with engineers. Conclusion: Since our approach supports the precise specification and analysis of security threats, threat scenarios and their mitigations, it also supports decision making and the analysis of compliance to standards.

Static Analysis of Android Apps: A Systematic Literature Review
Li, Li UL; Bissyande, Tegawendé François D Assise UL; Papadakis, Mike UL et al

in Information and Software Technology (2017)

Context: Static analysis exploits techniques that parse program source code or bytecode, often traversing program paths to check some program properties. Static analysis approaches have been proposed for different tasks, including for assessing the security of Android apps, detecting app clones, automating test case generation, or for uncovering non-functional issues related to performance or energy. The literature has thus proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analysers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyse Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put, and enumerate the key aspects where future research is still needed. Method: We have performed a systematic literature review (SLR) which involves studying 124 research papers published in software engineering, programming languages and security venues in the last 5 years (January 2011 - December 2015). This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, Android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination has led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artefacts publicly available. Conclusion: The research community is still facing a number of challenges for building approaches that are aware altogether of implicit flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analysers.

Comprehending Malicious Android Apps By Mining Topic-Specific Data Flow Signatures
Yang, Xinli; Lo, David; Li, Li UL et al

in Information and Software Technology (2017)

Context: State-of-the-art works on automated detection of Android malware have leveraged app descriptions to spot anomalies w.r.t. the functionality implemented, or have used data flow information as a feature to discriminate malicious from benign apps. Although these works have yielded promising performance, we hypothesize that this performance can be improved by a better understanding of malicious behavior. Objective: To characterize malicious apps, we take into account both information on app descriptions, which are indicative of apps’ topics, and information on sensitive data flow, which can be relevant to discriminate malware from benign apps. Method: In this paper, we propose a topic-specific approach to malware comprehension based on app descriptions and data-flow information. First, we use an advanced topic model, adaptive LDA with GA, to cluster apps according to their descriptions. Then, we use the information gain ratio of sensitive data flow information to build so-called “topic-specific data flow signatures”. Results: We conduct an empirical study on 3691 benign and 1612 malicious apps. We group them into 118 topics and generate a topic-specific data flow signature for each. We verify the effectiveness of the topic-specific data flow signatures by comparing them with the overall data flow signature. In addition, we perform a deeper analysis on 25 representative topic-specific signatures and derive several implications. Conclusion: Topic-specific data flow signatures are efficient in highlighting malicious behavior, and thus can help in characterizing malware.
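
The pipeline summarised in this abstract (topic clustering of app descriptions, then per-topic ranking of sensitive data-flow features by information gain ratio) can be sketched as follows. Plain scikit-learn LDA stands in for the paper's adaptive LDA with GA, and the input matrices, labels and feature names are hypothetical placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, label):
    # Information gain of the label given the feature, normalised by H(feature).
    h_feature = entropy(feature)
    if h_feature == 0:
        return 0.0
    ig = entropy(label)
    for v in np.unique(feature):
        mask = feature == v
        ig -= mask.mean() * entropy(label[mask])
    return ig / h_feature

def topic_signatures(descriptions, flow_matrix, labels, flow_names,
                     n_topics=10, top_k=5):
    # 1) Cluster apps by the dominant topic of their descriptions.
    counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
    doc_topics = LatentDirichletAllocation(n_components=n_topics,
                                           random_state=0).fit_transform(counts)
    topic_of = doc_topics.argmax(axis=1)

    # 2) Within each topic, rank data-flow features against the malware label.
    signatures = {}
    for t in range(n_topics):
        idx = np.where(topic_of == t)[0]
        if len(idx) == 0:
            continue
        scores = [gain_ratio(flow_matrix[idx, j], labels[idx])
                  for j in range(flow_matrix.shape[1])]
        top = np.argsort(scores)[::-1][:top_k]
        signatures[t] = [flow_names[j] for j in top]
    return signatures
```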

An Extensive Systematic Review on the Model-Driven Development of Secure Systems
Nguyen, Phu Hong UL; Kramer, Max; Klein, Jacques UL et al

in Information and Software Technology (2015), 68(December 2015), 62-81

Context: Model-Driven Security (MDS) is a specialised Model-Driven Engineering research area for supporting the development of secure systems. Over a decade of research on MDS has resulted in a large number of publications. Objective: To provide a detailed analysis of the state of the art in MDS, a systematic literature review (SLR) is essential. Method: We conducted an extensive SLR on MDS. Derived from our research questions, we designed a rigorous, extensive search and selection process to identify a set of primary MDS studies that is as complete as possible. Our three-pronged search process consists of automatic searching, manual searching, and snowballing. After discovering and considering more than a thousand relevant papers, we identified, strictly selected, and reviewed 108 MDS publications. Results: The results of our SLR show the overall status of the key artefacts of MDS, and the identified primary MDS studies. For example, regarding the security modelling artefact, we found that developing domain-specific languages plays a key role in many MDS approaches. The current limitations in each MDS artefact are pointed out and corresponding potential research directions are suggested. Moreover, we categorise the identified primary MDS studies into 5 significant MDS studies, and other emerging or less common MDS studies. Finally, some trend analyses of MDS research are given. Conclusion: Our results suggest the need for addressing multiple security concerns more systematically and simultaneously, for tool chains supporting the MDS development cycle, and for more empirical studies on the application of MDS methodologies. To the best of our knowledge, this SLR is the first in the field of Software Engineering that combines a snowballing strategy with database searching. This combination has delivered an extensive literature study on MDS.

Evidence management for compliance of critical systems with safety standards: A survey on the state of practice
Nair, Sunil; de la Vara, Jose Luis; Sabetzadeh, Mehrdad UL et al

in Information and Software Technology (2015), 60

Context: Demonstrating compliance of critical systems with safety standards involves providing convincing evidence that the requirements of a standard are adequately met. For large systems, practitioners need to be able to effectively collect, structure, and assess substantial quantities of evidence. Objective: This paper aims to provide insights into how practitioners deal with safety evidence management for critical computer-based systems. The information currently available about how this activity is performed in the industry is very limited. Method: We conducted a survey to determine practitioners’ perspectives and practices on safety evidence management. A total of 52 practitioners from 15 countries and 11 application domains responded to the survey. The respondents indicated the types of information used as safety evidence, how evidence is structured and assessed, how evidence evolution is addressed, and what challenges are faced in relation to provision of safety evidence. Results: Our results indicate that (1) V&V artefacts, requirements specifications, and design specifications are the most frequently used safety evidence types, (2) evidence completeness checking and impact analysis are mostly performed manually at the moment, (3) text-based techniques are used more frequently than graphical notations for evidence structuring, (4) checklists and expert judgement are frequently used for evidence assessment, and (5) significant research effort has been spent on techniques that have seen little adoption in the industry. The main contributions of the survey are to provide an overall and up-to-date understanding of how the industry addresses safety evidence management, and to identify gaps in the state of the art. Conclusion: We conclude that (1) V&V plays a major role in safety assurance, (2) the industry will clearly benefit from more tool support for collecting and manipulating safety evidence, and (3) future research on safety evidence management needs to place more emphasis on industrial applications.

Search-Based Automated Testing of Continuous Controllers: Framework, Tool Support, and Case Studies
Matinnejad, Reza UL; Nejati, Shiva UL; Briand, Lionel UL et al

in Information and Software Technology (2015), 57

Context: Testing and verification of automotive embedded software is a major challenge. Software production in automotive domain comprises three stages: Developing automotive functions as Simulink models, generating code from the models, and deploying the resulting code on hardware devices. Automotive software artifacts are subject to three rounds of testing corresponding to the three production stages: Model-in-the-Loop (MiL), Software-in-the-Loop (SiL) and Hardware-in-the-Loop (HiL) testing. Objective: We study testing of continuous controllers at the Model-in-Loop (MiL) level where both the controller and the environment are represented by models and connected in a closed loop system. These controllers make up a large part of automotive functions, and monitor and control the operating conditions of physical devices. Method: We identify a set of requirements characterizing the behavior of continuous controllers, and develop a search-based technique based on random search, adaptive random search, hill climbing and simulated annealing algorithms to automatically identify worst-case test scenarios which are utilized to generate test cases for these requirements. Results: We evaluated our approach by applying it to an industrial automotive controller (with 443 Simulink blocks) and to a publicly available controller (with 21 Simulink blocks). Our experience shows that automatically generated test cases lead to MiL level simulations indicating potential violations of the system requirements. Further, not only does our approach generate significantly better test cases faster than random test case generation, but it also achieves better results than test scenarios devised by domain experts. Finally, our generated test cases uncover discrepancies between environment models and the real world when they are applied at the Hardware-in-the-Loop (HiL) level. Conclusion: We propose an automated approach to MiL testing of continuous controllers using search. The approach is implemented in a tool and has been successfully applied to a real case study from the automotive domain.
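
As an illustration of the search side described in this abstract, here is a minimal simulated-annealing loop that hunts for a worst-case scenario by maximising an objective returned from a simulation. The simulate() callback, the two-value scenario encoding and the parameter ranges are illustrative assumptions, not the paper's Simulink/MiL setup.

```python
import math
import random

def worst_case_search(simulate, low, high, iterations=1000, start_temp=1.0):
    """Maximise simulate(initial, desired), e.g. the worst deviation between
    desired and actual controller output observed during a MiL simulation."""
    def neighbour(s):
        step = 0.05 * (high - low)
        return tuple(min(high, max(low, v + random.gauss(0, step))) for v in s)

    current = (random.uniform(low, high), random.uniform(low, high))
    score = simulate(*current)
    best, best_score = current, score

    for i in range(iterations):
        temp = max(start_temp * (1 - i / iterations), 1e-9)
        cand = neighbour(current)
        cand_score = simulate(*cand)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools, to escape local optima.
        if cand_score > score or random.random() < math.exp((cand_score - score) / temp):
            current, score = cand, cand_score
            if score > best_score:
                best, best_score = current, score
    return best, best_score
```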

Similarity testing for access control
Bertolino, A.; Daoudagh, S.; El Kateb, Donia UL et al

in Information and Software Technology (2015), 58

Context: Access control is among the most important security mechanisms, and XACML is the de facto standard for specifying, storing and deploying access control policies. Since it is critical that enforced policies are correct, policy testing must be performed in an effective way to identify potential security flaws and bugs. In practice, exhaustive testing is impossible due to budget constraints. Therefore the tests need to be prioritized so that resources are focused on their most relevant subset. Objective: This paper tackles the issue of access control test prioritization. It proposes a new approach for access control test prioritization that relies on similarity. Method: The approach has been applied to several policies and the results have been compared to random prioritization (as a baseline). To assess the different prioritization criteria, we use mutation analysis and compute the mutation scores reached by each criterion. This helps assess the rate of fault detection. Results: The empirical results indicate that our proposed approach is effective and its rate of fault detection is higher than that of random prioritization. Conclusion: We conclude that prioritization of access control test cases can be usefully based on similarity criteria.
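
A common way to realise the similarity-based prioritization this abstract describes is a greedy "farthest-first" ordering. The sketch below assumes each access control test (e.g. an XACML request) is abstracted as a set of attribute values and uses Jaccard distance; both choices are illustrative assumptions, not necessarily the criteria used in the paper.

```python
def jaccard_distance(a, b):
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def prioritize(tests):
    """tests: dict mapping test id -> set of attribute values.
    Greedily pick, at each step, the test farthest from those already chosen."""
    remaining = dict(tests)
    first = next(iter(remaining))            # arbitrary starting test
    order = [first]
    del remaining[first]
    while remaining:
        best = max(remaining,
                   key=lambda t: min(jaccard_distance(tests[t], tests[s])
                                     for s in order))
        order.append(best)
        del remaining[best]
    return order
```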

Empirical Evaluations on the Cost-Effectiveness of State-Based Testing: An Industrial Case Study
Holt, Nina; Briand, Lionel UL; Torkar, Richard

in Information and Software Technology (2014), 56(8), 890-910

Change Impact Analysis for Requirements: a Metamodeling Approach
Göknil, Arda UL; Kurtev, Ivan; van den Berg, Klaas et al

in Information and Software Technology (2014), 56(8), 950-972

Similarity testing for access control
Bertolino, Antonia; Daoudagh, Said; El Kateb, Donia UL et al

in Information and Software Technology (2014)

An Extended Systematic Literature Review on Provision of Evidence for Safety Certification
Nair, Sunil; de la Vara, Jose Luis; Sabetzadeh, Mehrdad UL et al

in Information and Software Technology (2014), 56(7), 689-717

Model-based testing of global properties on large-scale distributed systems
Sunyé, G.; De Almeida, E. C.; Le Traon, Yves UL et al

in Information and Software Technology (2014), 56(7), 749-762

Context: Large-scale distributed systems are becoming commonplace with the large popularity of peer-to-peer and cloud computing. The increasing importance of these systems contrasts with the lack of integrated solutions to build trustworthy software. A key concern of any large-scale distributed system is the validation of global properties, which cannot be evaluated on a single node. Thus, it is necessary to gather data from distributed nodes and to aggregate these data into a global view. This turns out to be very challenging because of the system's dynamism, which imposes very frequent changes in local values that affect global properties. This implies that the global view has to be frequently updated to ensure an accurate validation of global properties. Objective: In this paper, we present a model-based approach to define a dynamic oracle for checking global properties. Our objective is to abstract relevant aspects of such systems into models. These models are updated at runtime, by monitoring the corresponding distributed system. Method: We conduct a real-scale experimental validation to evaluate the ability of our approach to check global properties. In this validation, we apply our approach to test two open-source implementations of distributed hash tables. The experiments are deployed on two clusters of 32 nodes. Results: The experiments reveal an important defect in one implementation and show clear performance differences between the two implementations. The defect would not be detected without a global view of the system. Conclusion: Testing global properties on distributed software consists of gathering data from different nodes and building a global view of the system, where properties are validated. This process requires a distributed test architecture and tools for representing and validating global properties. Model-based techniques are an expressive means for building oracles that validate global properties on distributed systems.
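
In the spirit of the dynamic oracle described above, the toy sketch below merges local observations from nodes into a global model and rechecks a global property on every update. The node state representation and the DHT-style property (every key held by exactly one live node) are illustrative assumptions, not the paper's test architecture.

```python
class GlobalOracle:
    """Maintain a global model from per-node observations and validate a
    global property whenever the model changes."""

    def __init__(self):
        self.model = {}                      # node id -> latest local state

    def update(self, node_id, local_state):
        # local_state is assumed to look like {"alive": True, "keys": {...}}.
        self.model[node_id] = local_state
        return self.check()

    def check(self):
        # Global property: every stored key is held by exactly one live node.
        owners = {}
        for node, state in self.model.items():
            if not state.get("alive", False):
                continue
            for key in state.get("keys", ()):
                owners.setdefault(key, []).append(node)
        violations = {k: nodes for k, nodes in owners.items() if len(nodes) != 1}
        return violations                    # empty dict: the property holds
```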

SimPL: A Product-Line Modeling Methodology for Families of Integrated Control Systems
Behjati, Razieh; Yue, Tao; Briand, Lionel UL et al

in Information and Software Technology (2013), 55(3)

Supporting the Verification of Compliance to Safety Standards via Model-Driven Engineering: Approach, Tool-Support and Empirical Validation
Panesar-Walawege, Rajwinder; Sabetzadeh, Mehrdad UL; Briand, Lionel UL

in Information and Software Technology (2013), 55(1), 836-864

Usage and testability of AOP: An empirical study of AspectJ
Munoz, F.; Baudry, B.; Delamare, R. et al

in Information and Software Technology (2013), 55(2), 252-266

Context: Back in 2001, MIT announced aspect-oriented programming as a key technology for the next 10 years. Nowadays, 10 years later, AOP is still not widely adopted. Objective: The objective of this work is to understand the current status of AOP practice through the analysis of open-source projects which use AspectJ. Method: First, we analyze different dimensions of AOP usage in 38 AspectJ projects. We investigate the degree of coupling between aspects and base programs, and the usage of the pointcut description language. A second part of our study focuses on testability as an indicator of maintainability. We also compare testability metrics on Java and AspectJ implementations of the HealthWatcher aspect-oriented benchmark. Results: The first part of the analysis reveals that the number of aspects does not increase with the size of the base program, that most aspects are woven in every place in the base program, and that only a small portion of the pointcut language is used. The second part, about testability, reveals that AspectJ reduces the size of modules and increases their cohesion, but also increases global coupling, thus introducing a negative impact on testability. Conclusion: These observations and measures reveal a major trend: AOP is currently used in a very cautious way. This cautious usage could come from a partial failure of AspectJ to deliver all the promises of AOP, in particular increased software maintainability.

Predicting SQL injection and cross site scripting vulnerabilities through mining input sanitization patterns
Shar, Lwin Khin UL; Tan, Hee Beng Kuan

in Information and Software Technology (2013)
