References of "Briand, Lionel 50001049"
Modeling Data Protection and Privacy: Application and Experience with GDPR
Torre, Damiano UL; Alferez, Mauricio UL; Soltana, Ghanem UL et al

in Software and Systems Modeling (in press)

In Europe and indeed worldwide, the General Data Protection Regulation (GDPR) provides protection to individuals regarding their personal data in the face of new technological developments. GDPR is widely viewed as the benchmark for data protection and privacy regulations that harmonizes data privacy laws across Europe. Although the GDPR is highly beneficial to individuals, it presents significant challenges for organizations monitoring or storing personal information. Since there is currently no automated solution with broad industrial applicability, organizations have no choice but to carry out expensive manual audits to ensure GDPR compliance. In this paper, we present a complete GDPR UML model as a first step towards designing automated methods for checking GDPR compliance. Given that the practical application of the GDPR is influenced by national laws of the EU Member States, we suggest a two-tiered description of the GDPR: generic and specialized. In this paper, we provide (1) the GDPR conceptual model we developed with complete traceability from its classes to the GDPR, (2) a glossary to help understand the model, (3) the plain-English description of 35 compliance rules derived from GDPR along with their encoding in OCL, and (4) the set of 20 variation points derived from GDPR to specialize the generic model. We further present the challenges we faced in our modeling endeavor, the lessons we learned from it, and future directions for research.
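
To make the rule encoding concrete, here is a minimal Python sketch of what a compliance rule checks over a conceptual model; the classes, attributes, and the rule itself are hypothetical simplifications for illustration, not the paper's actual model or OCL rules.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, heavily simplified fragment of a GDPR-style conceptual model.
@dataclass
class Consent:
    is_freely_given: bool
    is_withdrawn: bool

@dataclass
class PersonalDataProcessing:
    purpose: str
    consents: List[Consent] = field(default_factory=list)

def rule_valid_consent(p: PersonalDataProcessing) -> bool:
    """Toy analogue of an OCL invariant: every processing activity must
    hold at least one freely given, non-withdrawn consent."""
    return any(c.is_freely_given and not c.is_withdrawn for c in p.consents)

processing = PersonalDataProcessing("marketing", [Consent(True, False)])
print(rule_valid_consent(processing))  # True -> this rule is satisfied
```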

A Theoretical Framework for Understanding the Relationship between Log Parsing and Anomaly Detection
Shin, Donghwan UL; Khan, Zanis Ali UL; Bianculli, Domenico UL et al

in Proceedings of the 21st International Conference on Runtime Verification (in press)

Log-based anomaly detection identifies systems' anomalous behaviors by analyzing system runtime information recorded in logs. While many approaches have been proposed, all of them have in common an essential pre-processing step called log parsing. This step is needed because automated log analysis requires structured input logs, whereas original logs contain semi-structured text printed by logging statements. Log parsing bridges this gap by converting the original logs into structured input logs fit for anomaly detection. Despite the intrinsic dependency between log parsing and anomaly detection, no existing work has investigated the impact of the "quality" of log parsing results on anomaly detection. In particular, the concept of "ideal" log parsing results with respect to anomaly detection has not been formalized yet. This makes it difficult to determine, upon obtaining inaccurate results from anomaly detection, if (and why) the root cause for such results lies in the log parsing step. In this short paper, we lay the theoretical foundations for defining the concept of "ideal" log parsing results for anomaly detection. Based on these foundations, we discuss practical implications regarding the identification and localization of root causes, when dealing with inaccurate anomaly detection, and the identification of irrelevant log messages.
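
As a flavour of the log parsing step the paper builds on, here is a toy template extractor that masks variable parts of messages; this only illustrates the idea, not the paper's formalization of "ideal" parsing results.

```python
import re

# Abstract each raw message into a template by masking numeric fields;
# real log parsers (frequency- or tree-based) are far more robust.
def parse(message: str) -> str:
    return re.sub(r"\d+", "<*>", message)

logs = [
    "Connection from 10.0.0.5 port 2231",
    "Connection from 10.0.0.9 port 4410",
    "Disk error on block 77",
]
templates = [parse(m) for m in logs]
# A downstream anomaly detector consumes template IDs instead of raw text.
print(sorted(set(templates)))
```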

An Automated Framework for the Extraction of Semantic Legal Metadata from Legal Texts
Sleimi, Amin UL; Sannier, Nicolas UL; Sabetzadeh, Mehrdad UL et al

in Empirical Software Engineering (in press)

Semantic legal metadata provides information that helps with understanding and interpreting legal provisions. Such metadata is therefore important for the systematic analysis of legal requirements. However, manually enhancing a large legal corpus with semantic metadata is prohibitively expensive. Our work is motivated by two observations: (1) the existing requirements engineering (RE) literature does not provide a harmonized view on the semantic metadata types that are useful for legal requirements analysis; (2) automated support for the extraction of semantic legal metadata is scarce, and it does not exploit the full potential of artificial intelligence technologies, notably natural language processing (NLP) and machine learning (ML). Our objective is to take steps toward overcoming these limitations. To do so, we review and reconcile the semantic legal metadata types proposed in the RE literature. Subsequently, we devise an automated extraction approach for the identified metadata types using NLP and ML. We evaluate our approach through two case studies over the Luxembourgish legislation. Our results indicate a high accuracy in the generation of metadata annotations. In particular, in the two case studies, we were able to obtain precision scores of 97.2% and 82.4%, and recall scores of 94.9% and 92.4%.
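
The NLP-plus-ML ingredient can be pictured with a small sketch; the phrases, metadata labels, and pipeline below are invented stand-ins, far simpler than the paper's actual feature extraction and classification setup.

```python
# Toy phrase-level metadata classification with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = ["the data controller", "within thirty days",
           "the supervisory authority", "after ten days"]
labels = ["actor", "time", "actor", "time"]  # hypothetical metadata types

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(phrases, labels)
print(clf.predict(["the national authority"]))  # likely ['actor']
```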

Supporting DNN Safety Analysis and Retraining through Heatmap-based Unsupervised Learning
Fahmy, Hazem UL; Pastore, Fabrizio UL; Bagherzadeh, Mojtaba et al

in IEEE Transactions on Reliability (in press)

Deep neural networks (DNNs) are increasingly important in safety-critical systems, for example in their perception layer to analyze images. Unfortunately, there is a lack of methods to ensure the functional safety of DNN-based components. We observe three major challenges with existing practices regarding DNNs in safety-critical systems: (1) scenarios that are underrepresented in the test set may lead to serious safety violation risks, but may, however, remain unnoticed; (2) characterizing such high-risk scenarios is critical for safety analysis; (3) retraining DNNs to address these risks is poorly supported when causes of violations are difficult to determine. To address these problems in the context of DNNs analyzing images, we propose HUDD, an approach that automatically supports the identification of root causes for DNN errors. HUDD identifies root causes by applying a clustering algorithm to heatmaps capturing the relevance of every DNN neuron on the DNN outcome. Also, HUDD retrains DNNs with images that are automatically selected based on their relatedness to the identified image clusters. We evaluated HUDD with DNNs from the automotive domain. HUDD was able to identify all the distinct root causes of DNN errors, thus supporting safety analysis. Also, our retraining approach has been shown to be more effective at improving DNN accuracy than existing approaches.
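
The clustering step at the core of the approach can be pictured as follows, over synthetic heatmap vectors; the heatmap computation itself (per-neuron relevance) is omitted, and none of this is the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# 40 error-inducing images, each summarized by 64 synthetic relevance scores.
heatmaps = rng.random((40, 64))
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(heatmaps)
# Each cluster groups errors that plausibly share a root cause; images most
# similar to a cluster would then be selected for retraining.
print(np.bincount(clusters))
```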

Automated Reverse Engineering of Role-based Access Control Policies of Web Applications
Le, Ha Thanh UL; Shar, Lwin Khin UL; Bianculli, Domenico UL et al

in Journal of Systems and Software (in press)

Access control (AC) is an important security mechanism used in software systems to restrict access to sensitive resources. Therefore, it is essential to validate the correctness of AC implementations with respect to policy specifications or intended access rights. However, in practice, AC policy specifications are often missing or poorly documented; in some cases, AC policies are hard-coded in business logic implementations. This leads to difficulties in validating the correctness of policy implementations and detecting AC defects. In this paper, we present a semi-automated framework for reverse-engineering of AC policies from Web applications. Our goal is to learn and recover role-based access control (RBAC) policies from implementations, which are then used to validate implemented policies and detect AC issues. Our framework, built on top of a suite of security tools, automatically explores a given Web application, mines domain input specifications from access logs, and systematically generates and executes more access requests using combinatorial test generation. To learn policies, we apply machine learning on the obtained data to characterize relevant attributes that influence AC. Finally, the inferred policies are presented to the security engineer, for validation with respect to intended access rights and for detecting AC issues. Inconsistent and insufficient policies are highlighted as potential AC issues, being either vulnerabilities or implementation errors. We evaluated our approach on four Web applications (three open-source and a proprietary one built by our industry partner) in terms of the correctness of inferred policies. We also evaluated the usefulness of our approach by investigating whether it facilitates the detection of AC issues. The results show that 97.8% of the inferred policies are correct with respect to the actual AC implementation; the analysis of these policies led to the discovery of 64 AC issues that were reported to the developers.
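
A toy version of the policy-learning step: fit a decision tree on invented (role, resource) access observations, whose paths can then be read back as candidate RBAC rules. The encoding and data are illustrative assumptions only.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

requests = [
    {"role": "admin", "resource": "/users"},
    {"role": "guest", "resource": "/users"},
    {"role": "admin", "resource": "/report"},
    {"role": "guest", "resource": "/report"},
]
granted = [1, 0, 1, 1]  # observed allow/deny outcomes

model = make_pipeline(DictVectorizer(), DecisionTreeClassifier())
model.fit(requests, granted)
print(model.predict([{"role": "guest", "resource": "/users"}]))  # [0] -> deny
```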

Combining Genetic Programming and Model Checking to Generate Environment Assumptions
Gaaloul, Khouloud UL; Menghi, Claudio UL; Nejati, Shiva UL et al

in IEEE Transactions on Software Engineering (in press)

Software verification may yield spurious failures when environment assumptions are not accounted for. Environment assumptions are the expectations that a system or a component makes about its operational environment and are often specified in terms of conditions over the inputs of that system or component. In this article, we propose an approach to automatically infer environment assumptions for Cyber-Physical Systems (CPS). Our approach improves the state-of-the-art in three different ways: First, we learn assumptions for complex CPS models involving signal and numeric variables; second, the learned assumptions include arithmetic expressions defined over multiple variables; third, we identify the trade-off between soundness and coverage of environment assumptions and demonstrate the flexibility of our approach in prioritizing either of these criteria. We evaluate our approach using a public domain benchmark of CPS models from Lockheed Martin and a component of a satellite control system from LuxSpace, a satellite system provider. The results show that our approach outperforms state-of-the-art techniques on learning assumptions for CPS models, and further, when applied to our industrial CPS model, our approach is able to learn assumptions that are sufficiently close to the assumptions manually developed by engineers to be of practical value.
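
A deliberately crude caricature of the assumption-generation idea, with a toy component and requirement: keep only the candidate input conditions under which no behavior violates the requirement. The actual approach evolves expression trees with genetic programming and uses model checking rather than random simulation.

```python
import random

def component(u):              # toy system under verification: y = 2u
    return 2 * u

def requirement(y):            # toy requirement: output stays below 10
    return y < 10

# Candidate assumptions of the form "u < b", checked by sampling
# (a stand-in for model checking; sampling can miss violations).
def sound(b, trials=1000):
    inputs = (random.uniform(-20, 20) for _ in range(trials))
    return all(requirement(component(u)) for u in inputs if u < b)

# Prefer coverage: the weakest (largest) bound that is still sound.
best = max(b for b in range(1, 10) if sound(b))
print(f"learned assumption: u < {best}")   # u < 5, since 2u < 10 iff u < 5
```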

Can Offline Testing of Deep Neural Networks Replace Their Online Testing?
Ul Haq, Fitash UL; Shin, Donghwan UL; Nejati, Shiva UL et al

in Empirical Software Engineering (2021), 26(5),

We distinguish two general modes of testing for Deep Neural Networks (DNNs): Offline testing, where DNNs are tested as individual units based on test datasets obtained without involving the DNNs under test, and online testing, where DNNs are embedded into a specific application environment and tested in a closed-loop mode in interaction with the application environment. Typically, DNNs are subjected to both types of testing during their development life cycle, where offline testing is applied immediately after DNN training and online testing follows after offline testing and once a DNN is deployed within a specific application environment. In this paper, we study the relationship between offline and online testing. Our goal is to determine how offline testing and online testing differ or complement one another and whether offline testing results can be used to help reduce the cost of online testing. Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end controls of steering functions of self-driving vehicles. Our results show that offline testing is less effective than online testing, as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing. Further, we cannot exploit offline testing results to reduce the cost of online testing in practice, since we are not able to identify specific situations where offline testing could be as accurate as online testing in identifying safety requirement violations.

Log-based Slicing for System-level Test Cases
Messaoudi, Salma UL; Shin, Donghwan UL; Panichella, Annibale et al

in Proceedings of ISSTA '21: 30th ACM SIGSOFT International Symposium on Software Testing and Analysis (2021, July)

Regression testing is arguably one of the most important activities in software testing. However, its cost-effectiveness and usefulness can be largely impaired by complex system test cases that are poorly designed (e.g., test cases containing multiple test scenarios combined into a single test case) and that require a large amount of time and resources to run. One way to mitigate this issue is decomposing such system test cases into smaller, separate test cases---each of them with only one test scenario and with its corresponding assertions---so that the execution time of the decomposed test cases is lower than the original test cases, while the test effectiveness of the original test cases is preserved. This decomposition can be achieved with program slicing techniques, since test cases are software programs too. However, existing static and dynamic slicing techniques exhibit limitations when (1) the test cases use external resources, (2) code instrumentation is not a viable option, and (3) test execution is expensive. In this paper, we propose a novel approach, called DS3 (Decomposing System teSt caSe), which automatically decomposes a complex system test case into separate test case slices. The idea is to use test case execution logs, obtained from past regression testing sessions, to identify "hidden" dependencies in the slices generated by static slicing. Since logs include run-time information about the system under test, we can use them to extract access and usage of global resources and refine the slices generated by static slicing. We evaluated DS3 in terms of slicing effectiveness and compared it with a vanilla static slicing tool. We also compared the slices obtained by DS3 with the corresponding original system test cases, in terms of test efficiency and effectiveness. The evaluation results on one proprietary system and one open-source system show that DS3 is able to accurately identify the dependencies related to the usage of global resources, which vanilla static slicing misses. Moreover, the generated test case slices are, on average, 3.56 times faster than original system test cases and they exhibit no significant loss in terms of fault detection effectiveness.
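
The log-driven refinement can be sketched as merging statically computed slices that the logs reveal to touch the same global resource; the slice contents and mined resource map below are invented for illustration.

```python
# Two slices from static slicing, plus resource usage mined from past logs.
static_slices = {"slice1": ["step_a", "step_b"], "slice2": ["step_c"]}
log_resources = {"step_b": {"db_conn"}, "step_c": {"db_conn"}}

def used(steps):
    return set().union(*(log_resources.get(s, set()) for s in steps))

# "Hidden" dependency: both slices access db_conn, so keep them together.
if used(static_slices["slice1"]) & used(static_slices["slice2"]):
    static_slices["slice1"].extend(static_slices.pop("slice2"))

print(static_slices)  # {'slice1': ['step_a', 'step_b', 'step_c']}
```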

Automatic Test Suite Generation for Key-Points Detection DNNs using Many-Objective Search (Experience Paper)
Ul Haq, Fitash UL; Shin, Donghwan UL; Briand, Lionel UL et al

in 2021 ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) (2021, July)

Automatically detecting the positions of key-points (e.g., facial key-points or finger key-points) in an image is an essential problem in many applications, such as driver's gaze detection and drowsiness detection in automated driving systems. With the recent advances of Deep Neural Networks (DNNs), Key-Points detection DNNs (KP-DNNs) have been increasingly employed for that purpose. Nevertheless, KP-DNN testing and validation have remained a challenging problem because KP-DNNs predict many independent key-points at the same time---where each individual key-point may be critical in the targeted application---and images can vary a great deal according to many factors. In this paper, we present an approach to automatically generate test data for KP-DNNs using many-objective search. In our experiments, focused on facial key-points detection DNNs developed for an industrial automotive application, we show that our approach can generate test suites to severely mispredict, on average, more than 93% of all key-points. In comparison, random search-based test data generation can only severely mispredict 41% of them. Many of these mispredictions, however, are not avoidable and should not therefore be considered failures. We also empirically compare state-of-the-art, many-objective search algorithms and their variants, tailored for test suite generation. Furthermore, we investigate and demonstrate how to learn specific conditions, based on image characteristics (e.g., head posture and skin color), that lead to severe mispredictions. Such conditions serve as a basis for risk analysis or DNN retraining.
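
The many-objective formulation can be shown in miniature: each key-point's prediction error is its own objective to maximize. The error function and plain random sampling below are stand-ins for the simulator, the DNN, and the NSGA-II-style algorithms the paper actually compares.

```python
import random

NUM_KEYPOINTS = 5

def per_keypoint_error(candidate):
    # Stand-in for: render an image from the candidate parameters, run the
    # KP-DNN, and measure each key-point's prediction error.
    return [abs(p - k) for k, p in enumerate(candidate)]

population = [[random.uniform(0, 10) for _ in range(NUM_KEYPOINTS)]
              for _ in range(100)]
# One objective per key-point: keep the candidate maximizing that error.
best = [max(population, key=lambda c, k=k: per_keypoint_error(c)[k])
        for k in range(NUM_KEYPOINTS)]
print([round(per_keypoint_error(c)[k], 2) for k, c in enumerate(best)])
```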

On Systematically Building a Controlled Natural Language for Functional Requirements
Veizaga Campero, Alvaro Mario UL; Alferez, Mauricio UL; Torre, Damiano UL et al

in Empirical Software Engineering (2021), 26(4), 79

[Context] Natural language (NL) is pervasive in software requirements specifications (SRSs). However, despite its popularity and widespread use, NL is highly prone to quality issues such as vagueness, ambiguity, and incompleteness. Controlled natural languages (CNLs) have been proposed as a way to prevent quality problems in requirements documents, while maintaining the flexibility to write and communicate requirements in an intuitive and universally understood manner. [Objective] In collaboration with an industrial partner from the financial domain, we systematically develop and evaluate a CNL, named Rimay, intended to help analysts write functional requirements. [Method] We rely on Grounded Theory for building Rimay and follow well-known guidelines for conducting and reporting industrial case study research. [Results] Our main contributions are: (1) a qualitative methodology to systematically define a CNL for functional requirements; this methodology is intended to be general for use across information-system domains, (2) a CNL grammar to represent functional requirements; this grammar is derived from our experience in the financial domain, but should be applicable, possibly with adaptations, to other information-system domains, and (3) an empirical evaluation of our CNL (Rimay) through an industrial case study. Our contributions draw on 15 representative SRSs, collectively containing 3215 NL requirements statements from the financial domain. [Conclusion] Our evaluation shows that Rimay is expressive enough to capture, on average, 88% (405 out of 460) of the NL requirements statements in four previously unseen SRSs from the financial domain.
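
As a toy flavour of what CNL conformance checking involves (this is not Rimay's actual grammar), consider a checker for a single fixed sentence shape:

```python
import re

# Hypothetical single-production CNL: "The <actor> shall <verb> <object>."
PATTERN = re.compile(r"^The \w+ shall \w+ .+\.$")

for req in ["The system shall display the account balance.",
            "Maybe show balance sometimes"]:
    verdict = "conforms" if PATTERN.match(req) else "violates the CNL"
    print(f"{req!r}: {verdict}")
```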

Using Domain-specific Corpora for Improved Handling of Ambiguity in Requirements
Ezzini, Saad UL; Abualhaija, Sallam UL; Arora, Chetan et al

in Proceedings of the 43rd International Conference on Software Engineering (ICSE 2021) (2021, May)

Ambiguity in natural-language requirements is a pervasive issue that has been studied by the requirements engineering community for more than two decades. A fully manual approach for addressing ambiguity in requirements is tedious and time-consuming, and may further overlook unacknowledged ambiguity – the situation where different stakeholders perceive a requirement as unambiguous but, in reality, interpret the requirement differently. In this paper, we propose an automated approach that uses natural language processing for handling ambiguity in requirements. Our approach is based on the automatic generation of a domain-specific corpus from Wikipedia. Integrating domain knowledge, as we show in our evaluation, leads to a significant positive improvement in the accuracy of ambiguity detection and interpretation. We scope our work to coordination ambiguity (CA) and prepositional-phrase attachment ambiguity (PAA) because of the prevalence of these types of ambiguity in natural-language requirements [1]. We evaluate our approach on 20 industrial requirements documents. These documents collectively contain more than 5000 requirements from seven distinct application domains. Over this dataset, our approach detects CA and PAA with an average precision of 80% and an average recall of 89% (90% for cases of unacknowledged ambiguity). The automatic interpretations that our approach yields have an average accuracy of 85%. Compared to baselines that use generic corpora, our approach, which uses domain-specific corpora, has 33% better accuracy in ambiguity detection and 16% better accuracy in interpretation.
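
The domain-corpus intuition in miniature, with invented co-occurrence counts: a coordination ambiguity such as "safety and security requirements" is interpreted by checking whether both conjuncts co-occur with the candidate head noun in the domain-specific corpus.

```python
# Invented corpus statistics; the real corpus is generated from Wikipedia.
corpus_counts = {
    ("safety", "requirements"): 57,
    ("security", "requirements"): 42,
}

def head_distributes(left, right, head):
    """Does 'left and right head' read as '(left head) and (right head)'?"""
    return (corpus_counts.get((left, head), 0) > 0
            and corpus_counts.get((right, head), 0) > 0)

print(head_distributes("safety", "security", "requirements"))  # True
```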

Trace-Checking CPS Properties: Bridging the Cyber-Physical Gap
Menghi, Claudio UL; Vigano, Enrico UL; Bianculli, Domenico UL et al

in Proceedings of the 43rd International Conference on Software Engineering (ICSE 2021) (2021, May)

Cyber-physical systems combine software and physical components. Specification-driven trace-checking tools for CPS usually provide users with a specification language to express the requirements of interest, and an automatic procedure to check whether these requirements hold on the execution traces of a CPS. Although there exist several specification languages for CPS, they are often not sufficiently expressive to allow the specification of complex CPS properties related to the software and the physical components and their interactions. In this paper, we propose (i) the Hybrid Logic of Signals (HLS), a logic-based language that allows the specification of complex CPS requirements, and (ii) ThEodorE, an efficient SMT-based trace-checking procedure. This procedure reduces the problem of checking a CPS requirement over an execution trace, to checking the satisfiability of an SMT formula. We evaluated our contributions by using a representative industrial case study in the satellite domain. We assessed the expressiveness of HLS by considering 212 requirements of our case study. HLS could express all the 212 requirements. We also assessed the applicability of ThEodorE by running the trace-checking procedure for 747 trace-requirement combinations. ThEodorE was able to produce a verdict in 74.5% of the cases. Finally, we compared HLS and ThEodorE with other specification languages and trace-checking tools from the literature. Our results show that, from a practical standpoint, our approach offers a better trade-off between expressiveness and performance.
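
The reduction to satisfiability can be sketched with the z3 SMT solver (the z3-solver Python package): encode the recorded samples as constraints and ask whether the negation of the requirement is satisfiable. The trace, property, and encoding below are toy stand-ins for HLS and for ThEodorE's actual encoding.

```python
from z3 import Real, Solver, Or, sat

trace = [(0.0, 1.2), (1.0, 3.4), (2.0, 9.9)]  # (time, temperature) samples

s = Solver()
violated_somewhere = []
for t, v in trace:
    x = Real(f"temp_at_{t}")
    s.add(x == v)                     # the trace pins each sample's value
    violated_somewhere.append(x > 8.0)
s.add(Or(violated_somewhere))         # negated requirement: some sample > 8.0

print("violated" if s.check() == sat else "holds")  # violated (9.9 > 8.0)
```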

ThEodorE: a Trace Checker for CPS Properties
Menghi, Claudio UL; Vigano, Enrico UL; Bianculli, Domenico UL et al

in Companion Proceedings of the 43rd International Conference on Software Engineering (2021, May)

ThEodorE is a trace checker for Cyber-Physical systems (CPS). It provides users with (i) a GUI editor for writing CPS requirements; (ii) an automatic procedure to check whether the requirements hold on execution traces of a CPS. ThEodorE enables writing requirements using the Hybrid Logic of Signals (HLS), a novel, logic-based specification language to express CPS requirements. The trace checking procedure of ThEodorE reduces the problem of checking if a requirement holds on an execution trace to a satisfiability problem, which can be solved using off-the-shelf Satisfiability Modulo Theories (SMT) solvers. This artifact paper presents the tool support provided by ThEodorE.

Signal-Based Properties of Cyber-Physical Systems: Taxonomy and Logic-based Characterization
Boufaied, Chaima UL; Jukss, Maris; Bianculli, Domenico UL et al

in Journal of Systems and Software (2021), 174

The behavior of a cyber-physical system (CPS) is usually defined in terms of the input and output signals processed by sensors and actuators. Requirements specifications of CPSs are typically expressed using signal-based temporal properties. Expressing such requirements is challenging, because of (1) the many features that can be used to characterize a signal behavior; (2) the broad variation in expressiveness of the specification languages (i.e., temporal logics) used for defining signal-based temporal properties. Thus, system and software engineers need effective guidance on selecting appropriate signal behavior types and an adequate specification language, based on the type of requirements they have to define. In this paper, we present a taxonomy of the various types of signal-based properties and provide, for each type, a comprehensive and detailed description as well as a formalization in a temporal logic. Furthermore, we review the expressiveness of state-of-the-art signal-based temporal logics in terms of the property types identified in the taxonomy. Moreover, we report on the application of our taxonomy to classify the requirements specifications of an industrial case study in the aerospace domain, in order to assess the feasibility of using the property types included in our taxonomy and the completeness of the latter.
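
For instance, one behavior type such a taxonomy covers is a spike; below is a procedural check on a toy signal (the paper instead characterizes each type declaratively in a temporal logic).

```python
def has_spike(signal, delta=5.0):
    """A value that rises above its left neighbor by delta and falls back
    below its right neighbor by delta: a crude, illustrative spike test."""
    return any(signal[i + 1] - signal[i] > delta
               and signal[i + 1] - signal[i + 2] > delta
               for i in range(len(signal) - 2))

print(has_spike([1.0, 1.2, 9.4, 1.1, 1.0]))  # True
print(has_spike([1.0, 2.0, 3.0, 4.0, 5.0]))  # False
```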

Mutation Analysis for Cyber-Physical Systems: Scalable Solutions and Results in the Space Domain
Cornejo Olivares, Oscar Eduardo UL; Pastore, Fabrizio UL; Briand, Lionel UL

in IEEE Transactions on Software Engineering (2021)

On-board embedded software developed for spaceflight systems (space software) must adhere to stringent software quality assurance procedures. For example, verification and validation activities are typically performed and assessed by third party organizations. To further minimize the risk of human mistakes, space agencies, such as the European Space Agency (ESA), are looking for automated solutions for the assessment of software testing activities, which play a crucial role in this context. Though space software is our focus here, it should be noted that such software shares the above considerations, to a large extent, with embedded software in many other types of cyber-physical systems. Over the years, mutation analysis has shown to be a promising solution for the automated assessment of test suites; it consists of measuring the quality of a test suite in terms of the percentage of injected faults leading to a test failure. A number of optimization techniques, addressing scalability and accuracy problems, have been proposed to facilitate the industrial adoption of mutation analysis. However, to date, two major problems prevent space agencies from enforcing mutation analysis in space software development. First, there is uncertainty regarding the feasibility of applying mutation analysis optimization techniques in their context. Second, most of the existing techniques either can break the real-time requirements common in embedded software or cannot be applied when the software is tested in Software Validation Facilities, including CPU emulators and sensor simulators. In this paper, we enhance mutation analysis optimization techniques to enable their applicability to embedded software and propose a pipeline that successfully integrates them to address scalability and accuracy issues in this context, as described above. Further, we report on the largest study involving embedded software systems in the mutation analysis literature. Our research is part of a research project funded by ESA ESTEC involving private companies (GomSpace Luxembourg and LuxSpace) in the space sector. These industry partners provided the case studies reported in this paper; they include an on-board software system managing a microsatellite currently on-orbit, a set of libraries used in deployed cubesats, and a mathematical library certified by ESA.
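
Mutation analysis in miniature, with handwritten mutants of a trivial function: the mutation score is the fraction of injected faults that at least one test detects. Real pipelines generate mutants automatically and apply optimizations like those the paper integrates.

```python
def original(a, b):
    return a + b

# Handwritten "mutants" (injected faults); the last one is equivalent.
mutants = [lambda a, b: a - b,
           lambda a, b: a * b,
           lambda a, b: a + b + 0]

tests = [((2, 3), 5), ((0, 0), 0)]

def killed(mutant):
    return any(mutant(*args) != expected for args, expected in tests)

score = sum(map(killed, mutants)) / len(mutants)
print(f"mutation score: {score:.2f}")  # 0.67: the equivalent mutant survives
```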

Uncertainty-aware Specification and Analysis for Hardware-in-the-Loop Testing of Cyber Physical Systems
Shin, Seung Yeob UL; Chaouch, Karim UL; Nejati, Shiva UL et al

in Journal of Systems and Software (2021)

Hardware-in-the-loop (HiL) testing is important for developing cyber physical systems (CPS). HiL test cases manipulate hardware, are time-consuming and their behaviors are impacted by the uncertainties in the CPS environment. To mitigate the risks associated with HiL testing, engineers have to ensure that (1) test cases are well-behaved, e.g., they do not damage hardware, and (2) test cases can execute within a time budget. Leveraging the UML profile mechanism, we develop a domain-specific language, HITECS, for HiL test case specification. Using HITECS, we provide uncertainty-aware analysis methods to check the well-behavedness of HiL test cases. In addition, we provide a method to estimate the execution times of HiL test cases before the actual HiL testing. We apply HITECS to an industrial case study from the satellite domain. Our results show that: (1) HITECS helps engineers define more effective assertions to check HiL test cases, compared to the assertions defined without any systematic guidance; (2) HITECS verifies in practical time that HiL test cases are well-behaved; (3) HITECS is able to resolve uncertain parameters of HiL test cases by synthesizing conditions under which test cases are guaranteed to be well-behaved; and (4) HITECS accurately estimates HiL test case execution times.

Leveraging Natural-language Requirements for Deriving Better Acceptance Criteria from Models
Veizaga Campero, Alvaro Mario UL; Alferez, Mauricio UL; Torre, Damiano UL et al

in Proceedings of 23rd ACM / IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) (2020, October)

In many software and systems development projects, analysts specify requirements using a combination of modeling and natural language (NL). In such situations, systematic acceptance testing poses a challenge because defining the acceptance criteria (AC) to be met by the system under test has to account not only for the information in the (requirements) model but also that in the NL requirements. In other words, neither models nor NL requirements per se provide a complete picture of the information content relevant to AC. Our work in this paper is prompted by the observation that a reconciliation of the information content in NL requirements and models is necessary for obtaining precise AC. We perform such reconciliation by devising an approach that automatically extracts AC-related information from NL requirements and helps modelers enrich their model with the extracted information. An existing AC derivation technique is then applied to the model that has now been enriched by the information extracted from NL requirements. Using a real case study from the financial domain, we evaluate the usefulness of the AC-related model enrichments recommended by our approach. Our evaluation results are very promising: Over our case study system, a group of five domain experts found 89% of the recommended enrichments relevant to AC and yet absent from the original model (precision of 89%). Furthermore, the experts could not pinpoint any additional information in the NL requirements which was relevant to AC but which had not already been brought to their attention by our approach (recall of 100%).

An AI-assisted Approach for Checking the Completeness of Privacy Policies Against GDPR
Torre, Damiano UL; Abualhaija, Sallam UL; Sabetzadeh, Mehrdad UL et al

in Proceedings of the 28th IEEE International Requirements Engineering Conference (RE’20) (2020, September)

Trace-Checking Signal-based Temporal Properties: A Model-Driven Approach
Boufaied, Chaima UL; Menghi, Claudio UL; Bianculli, Domenico UL et al

in Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering (ASE ’20) (2020, September)

Signal-based temporal properties (SBTPs) characterize the behavior of a system when its inputs and outputs are signals over time; they are very common for the requirements specification of cyber-physical systems. Although there exist several specification languages for expressing SBTPs, such languages either do not easily allow the specification of important types of properties (such as spike or oscillatory behaviors), or are not supported by (efficient) trace-checking procedures. In this paper, we propose SB-TemPsy, a novel model-driven trace-checking approach for SBTPs. SB-TemPsy provides (i) SB-TemPsy-DSL, a domain-specific language that allows the specification of SBTPs covering the most frequent requirement types in cyber-physical systems, and (ii) SB-TemPsy-Check, an efficient, model-driven trace-checking procedure. This procedure reduces the problem of checking an SB-TemPsy-DSL property over an execution trace to the problem of evaluating an Object Constraint Language constraint on a model of the execution trace. We evaluated our contributions by assessing the expressiveness of SB-TemPsy-DSL and the applicability of SB-TemPsy-Check using a representative industrial case study in the satellite domain. SB-TemPsy-DSL could express 97% of the requirements of our case study and SB-TemPsy-Check yielded a trace-checking verdict in 87% of the cases, with an average checking time of 48.7 s. From a practical standpoint and compared to state-of-the-art alternatives, our approach strikes a better trade-off between expressiveness and performance as it supports a large set of property types that can be checked, in most cases, within practical time limits.

Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study
Ul Haq, Fitash UL; Shin, Donghwan UL; Nejati, Shiva UL et al

in 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST) (2020, August 05)

There is a growing body of research on developing testing techniques for Deep Neural Networks (DNN). We distinguish two general modes of testing for DNNs: Offline testing, where DNNs are tested as individual units based on test datasets obtained independently from the DNNs under test, and online testing, where DNNs are embedded into a specific application and tested in a closed-loop mode in interaction with the application environment. In addition, we identify two sources for generating test datasets for DNNs: datasets obtained from real life and datasets generated by simulators. While offline testing can be used with datasets obtained from either source, online testing is largely confined to using simulators since online testing within real-life applications can be time-consuming, expensive and dangerous. In this paper, we study the following two important questions aiming to compare test datasets and testing modes for DNNs: First, can we use simulator-generated data as a reliable substitute to real-world data for the purpose of DNN testing? Second, how do online and offline testing results differ and complement each other? Though these questions are generally relevant to all autonomous systems, we study them in the context of automated driving systems where, as study subjects, we use DNNs automating end-to-end control of cars' steering actuators. Our results show that simulator-generated datasets are able to yield DNN prediction errors that are similar to those obtained by testing DNNs with real-life datasets. Further, offline testing is more optimistic than online testing as many safety violations identified by online testing could not be identified by offline testing, while large prediction errors generated by offline testing always led to severe safety violations detectable by online testing.
