References of "Sabetzadeh, Mehrdad 50002966"
     in
Bookmark and Share    
Full Text
Peer Reviewed
See detailAn Empirical Study on the Potential Usefulness of Domain Models for Completeness Checking of Requirements
Arora, Chetan UL; Sabetzadeh, Mehrdad UL; Briand, Lionel UL

in Empirical Software Engineering (2019), 24(4), 2509-2539

[Context] Domain modeling is a common strategy for mitigating incompleteness in requirements. While the benefits of domain models for checking the completeness of requirements are anecdotally known, these benefits have never been evaluated systematically. [Objective] We empirically examine the potential usefulness of domain models for detecting incompleteness in natural-language requirements. We focus on requirements written as “shall”-style statements and domain models captured using UML class diagrams. [Methods] Through a randomized simulation process, we analyze the sensitivity of domain models to omissions in requirements. Sensitivity is a measure of whether a domain model contains information that can lead to the discovery of requirements omissions. Our empirical research method is case study research in an industrial setting. [Results and Conclusions] We have experts construct domain models in three distinct industry domains. We then report on how sensitive the resulting models are to simulated omissions in requirements. We observe that domain models exhibit near-linear sensitivity to both unspecified (i.e., missing) and under-specified requirements (i.e., requirements whose details are incomplete). The level of sensitivity is more than four times higher for unspecified requirements than for under-specified ones. These results provide empirical evidence that domain models provide useful cues for checking the completeness of natural-language requirements. Further studies remain necessary to ascertain whether analysts are able to effectively exploit these cues for incompleteness detection.
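
Illustrative sketch (Python, not from the paper): one way to picture the randomized omission simulation described above is to repeatedly withhold requirements at random and count how often the domain model still contains an element whose only traced requirements were withheld. The trace links, requirement IDs, and element names below are invented toy data.

import random

# Toy trace links: each domain-model element is traced to the requirements that
# mention it. Element names, requirement IDs, and links are invented.
trace_links = {
    "Sensor": {"R1", "R2"},
    "Alarm": {"R2"},
    "Operator": {"R3"},
    "MaintenanceLog": {"R4"},
}
requirements = {"R1", "R2", "R3", "R4", "R5"}

def gives_cue(omitted, links):
    # The model offers a cue when some element loses every requirement traced to it,
    # hinting that something is missing from the reduced specification.
    return any(reqs and reqs <= omitted for reqs in links.values())

def estimate_sensitivity(k, runs=10_000, seed=0):
    # Fraction of random omissions of k requirements that the model would flag.
    rng = random.Random(seed)
    hits = sum(gives_cue(set(rng.sample(sorted(requirements), k)), trace_links)
               for _ in range(runs))
    return hits / runs

for k in range(1, 4):
    print(f"omit {k} requirement(s): estimated sensitivity = {estimate_sensitivity(k):.2f}")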

Bridging the Gap between Requirements Modeling and Behavior-driven Development
Alferez, Mauricio UL; Pastore, Fabrizio UL; Sabetzadeh, Mehrdad UL et al

Scientific Conference (2019, June 20)

Acceptance criteria (AC) are implementation-agnostic conditions that a system must meet to be consistent with its requirements and be accepted by its stakeholders. Each acceptance criterion is typically expressed as a natural-language statement with a clear pass or fail outcome. Writing AC is a tedious and error-prone activity, especially when the requirements specifications evolve and there are different analysts and testing teams involved. Analysts and testers must iterate multiple times to ensure that AC are understandable and feasible, and accurately address the most important requirements and workflows of the system being developed. In many cases, analysts express requirements through models, along with natural language, typically in some variant of the UML. AC must then be derived by developers and testers from such models. In this paper, we bridge the gap between requirements models and AC by providing a UML-based modeling methodology and an automated solution to generate AC. We target AC in the form of Behavioral Specifications in the context of Behavior-Driven Development (BDD), a widely used agile practice in many application domains. More specifically, we target the well-known Gherkin language to express AC, which can then be used to generate executable test cases. We evaluate our modeling methodology and AC generation solution through an industrial case study in the financial domain. Our results suggest that (1) our methodology is feasible to apply in practice, and (2) the additional modeling effort required by our methodology is outweighed by the benefits the methodology brings in terms of automated and systematic AC generation and improved model precision.
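
Illustrative sketch (Python, hypothetical): the essence of emitting Gherkin acceptance criteria is a traversal that renders scenario steps as Given/When/Then lines. The scenario content below is invented; the actual methodology derives it from UML models rather than hand-written tuples.

def to_gherkin(feature, scenarios):
    # scenarios: list of (scenario name, [(keyword, step text), ...]) pairs.
    lines = [f"Feature: {feature}", ""]
    for name, steps in scenarios:
        lines.append(f"  Scenario: {name}")
        lines.extend(f"    {keyword} {text}" for keyword, text in steps)
        lines.append("")
    return "\n".join(lines)

print(to_gherkin("Cash withdrawal", [
    ("Successful withdrawal", [
        ("Given", "an account with a balance of 100 EUR"),
        ("When", "the customer withdraws 40 EUR"),
        ("Then", "40 EUR is dispensed"),
        ("And", "the remaining balance is 60 EUR"),
    ]),
]))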

An Active Learning Approach for Improving the Accuracy of Automated Domain Model Extraction
Arora, Chetan UL; Sabetzadeh, Mehrdad UL; Nejati, Shiva UL et al

in ACM Transactions on Software Engineering and Methodology (2019), 28(1)

Domain models are a useful vehicle for making the interpretation and elaboration of natural-language requirements more precise. Advances in natural language processing (NLP) have made it possible to automatically extract from requirements most of the information that is relevant to domain model construction. However, alongside the relevant information, NLP extracts from requirements a significant amount of information that is superfluous, i.e., not relevant to the domain model. Our objective in this article is to develop automated assistance for filtering the superfluous information extracted by NLP during domain model extraction. To this end, we devise an active-learning-based approach that iteratively learns from analysts’ feedback on the relevance and superfluousness of the extracted domain model elements, and uses this feedback to provide recommendations for filtering superfluous elements. We empirically evaluate our approach over three industrial case studies. Our results indicate that, once trained, our approach automatically detects an average of ≈ 45% of the superfluous elements with a precision of ≈ 96%. Since precision is very high, the automatic recommendations made by our approach are trustworthy. Consequently, analysts can dispose of a considerable fraction – nearly half – of the superfluous elements with minimal manual work. The results are particularly promising, as they should be considered in light of the non-negligible subjectivity that is inherently tied to the notion of relevance.
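
Illustrative sketch (Python, hypothetical): a minimal active-learning loop in the spirit described above, where a classifier is retrained on analyst-labelled elements and the next element queried is the one the classifier is least certain about. It uses scikit-learn with synthetic features and oracle labels; it is not the authors' implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # toy feature vectors for extracted model elements
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # 1 = relevant, 0 = superfluous (synthetic labels)

# Seed the loop with a few "analyst-labelled" elements from each class.
labelled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]
clf = LogisticRegression()

for _ in range(30):                        # simulate 30 rounds of analyst feedback
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]   # least certain element
    labelled.append(query)                 # the "analyst" confirms its label (here: oracle y)
    pool.remove(query)

print("accuracy on the remaining pool:", round(clf.score(X[pool], y[pool]), 2))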

A Machine Learning-Based Approach for Demarcating Requirements in Textual Specifications
Abualhaija, Sallam UL; Arora, Chetan UL; Sabetzadeh, Mehrdad UL et al

in 27th IEEE International Requirements Engineering Conference (RE'19) (2019)

A simple but important task during the analysis of a textual requirements specification is to determine which statements in the specification represent requirements. In principle, by following suitable writing and markup conventions, one can provide an immediate and unequivocal demarcation of requirements at the time a specification is being developed. However, neither the presence nor a fully accurate enforcement of such conventions is guaranteed. The result is that, in many practical situations, analysts end up resorting to after-the-fact reviews for sifting requirements from other material in a requirements specification. This is both tedious and time-consuming. We propose an automated approach for demarcating requirements in free-form requirements specifications. The approach, which is based on machine learning, can be applied to a wide variety of specifications in different domains and with different writing styles. We train and evaluate our approach over an independently labeled dataset comprising 30 industrial requirements specifications. Over this dataset, our approach yields an average precision of 81.2% and an average recall of 95.7%. Compared to simple baselines that demarcate requirements based on the presence of modal verbs and identifiers, our approach leads to an average gain of 16.4% in precision and 25.5% in recall.
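
Illustrative sketch (Python, hypothetical): a deliberately simplified sentence classifier for the demarcation task, using TF-IDF features and logistic regression from scikit-learn. The training sentences are invented, and the actual approach relies on a much richer feature set than plain word n-grams.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "The system shall log every failed login attempt.",        # requirement
    "The operator must acknowledge the alarm within 30 s.",    # requirement
    "This chapter describes the context of the project.",      # non-requirement
    "Figure 3 shows the deployment environment.",              # non-requirement
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_sentences, train_labels)

new = ["The gateway shall encrypt all outgoing traffic.",
       "The remainder of this section is organized as follows."]
print(model.predict(new))   # with only four toy training sentences, predictions are unreliable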

Using Models to Enable Compliance Checking against the GDPR: An Experience Report
Torre, Damiano UL; Soltana, Ghanem UL; Sabetzadeh, Mehrdad UL et al

in Proceedings of the IEEE/ACM 22nd International Conference on Model Driven Engineering Languages and Systems (MODELS '19) (2019, to appear)

The General Data Protection Regulation (GDPR) harmonizes data privacy laws and regulations across Europe. Through the GDPR, individuals are able to better control their personal data in the face of new technological developments. While the GDPR is highly advantageous to citizens, complying with it poses major challenges for organizations that control or process personal data. Since no automated solution with broad industrial applicability currently exists for GDPR compliance checking, organizations have no choice but to perform costly manual audits to ensure compliance. In this paper, we share our experience building a UML representation of the GDPR as a first step towards the development of future automated methods for assessing compliance with the GDPR. Given that a concrete implementation of the GDPR is affected by the national laws of the EU member states, the GDPR’s expanding body of case law, and other contextual information, we propose a two-tiered representation of the GDPR: a generic tier and a specialized tier. The generic tier captures the concepts and principles of the GDPR that apply to all contexts, whereas the specialized tier describes a specific tailoring of the generic tier to a given context, including the contextual variations that may impact the interpretation and application of the GDPR. We further present the challenges we faced in our modeling endeavor, the lessons we learned from it, and future directions for research.

Decision Support for Security-Control Identification Using Machine Learning
Bettaieb, Seifeddine UL; Shin, Seung Yeob UL; Sabetzadeh, Mehrdad UL et al

in International Working Conference on Requirements Engineering: Foundation for Software Quality, Essen, 18-21 March 2019 (2019)

[Context & Motivation] In many domains such as healthcare and banking, IT systems need to fulfill various requirements related to security. The elaboration of security requirements for a given system is in part guided by the controls envisaged by the applicable security standards and best practices. [Problem] An important difficulty that analysts have to contend with during security requirements elaboration is sifting through a large number of security controls and determining which ones have a bearing on the security requirements for a given system. This challenge is often exacerbated by the scarce security expertise available in most organizations. [Principal ideas/results] In this paper, we develop automated decision support for the identification of security controls that are relevant to a specific system in a particular context. Our approach, which is based on machine learning, leverages historical data from security assessments performed over past systems in order to recommend security controls for a new system. We operationalize and empirically evaluate our approach using real historical data from the banking domain. Our results show that, when one excludes security controls that are rare in the historical data, our approach has an average recall of ≈ 95% and average precision of ≈ 67%. [Contribution] The high recall – indicating only a few relevant security controls are missed – combined with the reasonable level of precision – indicating that the effort required to confirm recommendations is not excessive – suggests that our approach is a useful aid to analysts for more efficiently identifying the relevant security controls, and also for decreasing the likelihood that important controls would be overlooked.
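
Illustrative sketch (Python, hypothetical): a nearest-neighbour flavour of the idea, recommending the controls most frequently applied to the most similar past assessments. The system properties and control IDs are invented, and the paper's actual learning technique may differ.

from collections import Counter

# Each past assessment: (properties of the assessed system, controls applied to it).
past_assessments = [
    ({"internet-facing", "stores-pii"},           {"C-ENC", "C-MFA", "C-LOG"}),
    ({"internet-facing"},                         {"C-ENC", "C-LOG"}),
    ({"stores-pii", "third-party-hosting"},       {"C-ENC", "C-DPA"}),
    ({"internal-only"},                           {"C-LOG"}),
]

def recommend(system_properties, k=2, min_votes=2):
    # Rank past systems by overlap of properties, then vote over their controls.
    ranked = sorted(past_assessments,
                    key=lambda pa: len(pa[0] & system_properties),
                    reverse=True)[:k]
    votes = Counter(c for _, controls in ranked for c in controls)
    return [c for c, n in votes.most_common() if n >= min_votes]

print(recommend({"internet-facing", "stores-pii", "third-party-hosting"}))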

A Query System for Extracting Requirements-related Information from Legal Texts
Sleimi, Amin UL; Ceci, Marcello UL; Sannier, Nicolas UL et al

in Proceedings of the 27th IEEE International Requirements Engineering Conference (RE'19) (2019)

Searching legal texts for relevant information is a complex and expensive activity. The search solutions offered by present-day legal portals are targeted primarily at legal professionals. These solutions are not adequate for requirements analysts whose objective is to extract domain knowledge including stakeholders, rights and duties, and business processes that are relevant to legal requirements. Semantic Web technologies now enable smart search capabilities and can be exploited to help requirements analysts in elaborating legal requirements. In our previous work, we developed an automated framework for extracting semantic metadata from legal texts. In this paper, we investigate the use of our metadata extraction framework as an enabler for smart legal search with a focus on requirements engineering activities. We report on our industrial experience helping the Government of Luxembourg provide an advanced search facility over Luxembourg’s Income Tax Law. The experience shows that semantic legal metadata can be successfully exploited for answering requirements engineering-related legal queries. Our results also suggest that our conceptualization of semantic legal metadata can be further improved with new information elements and relations.

HITECS: A UML Profile and Analysis Framework for Hardware-in-the-Loop Testing of Cyber Physical Systems
Shin, Seung Yeob UL; Chaouch, Karim UL; Nejati, Shiva UL et al

in Proceedings of ACM/IEEE 21st International Conference on Model Driven Engineering Languages and Systems (MODELS’18) (2018, October)

Hardware-in-the-loop (HiL) testing is an important step in the development of cyber physical systems (CPS). CPS HiL test cases manipulate hardware components, are time-consuming and their behaviors are impacted by the uncertainties in the CPS environment. To mitigate the risks associated with HiL testing, engineers have to ensure that (1) HiL test cases are well-behaved, i.e., they implement valid test scenarios and do not accidentally damage hardware, and (2) HiL test cases can execute within the time budget allotted to HiL testing. This paper proposes an approach to help engineers systematically specify and analyze CPS HiL test cases. Leveraging the UML profile mechanism, we develop an executable domain-specific language, HITECS, for HiL test case specification. HITECS builds on the UML Testing Profile (UTP) and the UML action language (Alf). Using HITECS, we provide analysis methods to check whether HiL test cases are well-behaved, and to estimate the execution times of these test cases before the actual HiL testing stage. We apply HITECS to an industrial case study from the satellite domain. Our results show that: (1) HITECS is feasible to use in practice; (2) HITECS helps engineers define more complete and effective well-behavedness assertions for HiL test cases, compared to when these assertions are defined without systematic guidance; (3) HITECS verifies in practical time that HiL test cases are well-behaved; and (4) HITECS accurately estimates HiL test case execution times.

Software Engineering Research and Industry: A Symbiotic Relationship to Foster Impact
Basili, Victor; Briand, Lionel UL; Bianculli, Domenico UL et al

in IEEE Software (2018), 35(5), 44-49

Software engineering is not only an increasingly challenging endeavor that goes beyond the intellectual capabilities of any single individual engineer, but is also an intensely human one. Tools and methods to develop software are employed by engineers of varied backgrounds within a large variety of organizations and application domains. As a result, the variation in challenges and practices in system requirements, architecture, and quality assurance is staggering. Human, domain and organizational factors define the context within which software engineering methodologies and technologies are to be applied and therefore the context that research needs to account for, if it is to be impactful. This paper provides an assessment of the current challenges faced by software engineering research in achieving its potential, a description of the root causes of such challenges, and a proposal for the field to move forward and become more impactful through collaborative research and innovation between public research and industry.

Automated Extraction of Semantic Legal Metadata Using Natural Language Processing
Sleimi, Amin UL; Sannier, Nicolas UL; Sabetzadeh, Mehrdad UL et al

in the 26th IEEE International Requirements Engineering Conference, Banff, Alberta, 20-24 August 2018 (2018, August)

[Context] Semantic legal metadata provides information that helps with understanding and interpreting the meaning of legal provisions. Such metadata is important for the systematic analysis of legal requirements. [Objectives] Our work is motivated by two observations: (1) The existing requirements engineering (RE) literature does not provide a harmonized view on the semantic metadata types that are useful for legal requirements analysis. (2) Automated support for the extraction of semantic legal metadata is scarce, and further does not exploit the full potential of natural language processing (NLP). Our objective is to take steps toward addressing these limitations. [Methods] We review and reconcile the semantic legal metadata types proposed in RE. Subsequently, we conduct a qualitative study aimed at investigating how the identified metadata types can be extracted automatically. [Results and Conclusions] We propose (1) a harmonized conceptual model for the semantic metadata types pertinent to legal requirements analysis, and (2) automated extraction rules for these metadata types based on NLP. We evaluate the extraction rules through a case study. Our results indicate that the rules generate metadata annotations with high accuracy.

Test Case Prioritization for Acceptance Testing of Cyber Physical Systems: A Multi-Objective Search-Based Approach
Shin, Seung Yeob UL; Nejati, Shiva UL; Sabetzadeh, Mehrdad UL et al

in Proceedings of the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA'18) (2018, July)

Acceptance testing validates that a system meets its requirements and determines whether it can be sufficiently trusted and put into operation. For cyber physical systems (CPS), acceptance testing is a hardware-in-the-loop process conducted in a (near-)operational environment. Acceptance testing of a CPS often necessitates that the test cases be prioritized, as there are usually too many scenarios to consider given time constraints. CPS acceptance testing is further complicated by the uncertainty in the environment and the impact of testing on hardware. We propose an automated test case prioritization approach for CPS acceptance testing, accounting for time budget constraints, uncertainty, and hardware damage risks. Our approach is based on multi-objective search, combined with a test case minimization algorithm that eliminates redundant operations from an ordered sequence of test cases. We evaluate our approach on a representative case study from the satellite domain. The results indicate that, compared to test cases that are prioritized manually by satellite engineers, our automated approach more than doubles the number of test cases that fit into a given time frame, while reducing to less than one third the number of operations that entail the risk of damage to key hardware components.
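
Illustrative sketch (Python, hypothetical): only the minimization step mentioned above, dropping operations already exercised by an earlier test case in the prioritized sequence. The prioritization itself (multi-objective search) is not shown, and the test cases and operation names are invented.

def minimize(ordered_test_cases):
    # ordered_test_cases: list of (name, [operations]) in priority order.
    seen = set()
    minimized = []
    for name, ops in ordered_test_cases:
        kept = [op for op in ops if op not in seen]   # keep only not-yet-exercised operations
        seen.update(kept)
        if kept:
            minimized.append((name, kept))
    return minimized

suite = [
    ("TC-3", ["power_on", "deploy_antenna", "self_test"]),
    ("TC-1", ["power_on", "self_test", "uplink_check"]),
    ("TC-7", ["power_on", "deploy_antenna"]),   # fully redundant after TC-3
]
for name, ops in minimize(suite):
    print(name, ops)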

Model-Based Simulation of Legal Policies: Framework, Tool Support, and Validation
Soltana, Ghanem UL; Sannier, Nicolas UL; Sabetzadeh, Mehrdad UL et al

in Software & Systems Modeling (2018), 17(3), 851-883

Simulation of legal policies is an important decision-support tool in domains such as taxation. The primary goal of legal policy simulation is predicting how changes in the law affect measures of interest, e.g., revenue. Legal policy simulation is currently implemented using a combination of spreadsheets and software code. Such a direct implementation poses a validation challenge. In particular, legal experts often lack the necessary software background to review complex spreadsheets and code. Consequently, these experts currently have no reliable means to check the correctness of simulations against the requirements envisaged by the law. A further challenge is that representative data for simulation may be unavailable, thus necessitating a data generator. A hard-coded generator is difficult to build and validate. We develop a framework for legal policy simulation that is aimed at addressing the challenges above. The framework uses models for specifying both legal policies and the probabilistic characteristics of the underlying population. We devise an automated algorithm for simulation data generation. We evaluate our framework through a case study on Luxembourg’s Tax Law.

Automated Extraction and Clustering of Requirements Glossary Terms
Arora, Chetan UL; Sabetzadeh, Mehrdad UL; Briand, Lionel UL et al

in IEEE Transactions on Software Engineering (2017), 43(10), 918-945

A glossary is an important part of any software requirements document. By making explicit the technical terms in a domain and providing definitions for them, a glossary helps mitigate imprecision and ambiguity. A key step in building a glossary is to decide upon the terms to include in the glossary and to find any related terms. Doing so manually is laborious, particularly for large requirements documents. In this article, we develop an automated approach for extracting candidate glossary terms and their related terms from natural language requirements documents. Our approach differs from existing work on term extraction mainly in that it clusters the extracted terms by relevance, instead of providing a flat list of terms. We provide an automated, mathematically-based procedure for selecting the number of clusters. This procedure makes the underlying clustering algorithm transparent to users, thus alleviating the need for any user-specified parameters. To evaluate our approach, we report on three industrial case studies, as part of which we also examine the perceptions of the involved subject matter experts about the usefulness of our approach. Our evaluation notably suggests that: (1) Over requirements documents, our approach is more accurate than major generic term extraction tools. Specifically, in our case studies, our approach leads to gains of 20% or more in terms of recall when compared to existing tools, while at the same time either improving precision or leaving it virtually unchanged. And, (2) the experts involved in our case studies find the clusters generated by our approach useful as an aid for glossary construction.
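
Illustrative sketch (Python, hypothetical): clustering candidate terms and picking the number of clusters automatically. The silhouette heuristic below is a common stand-in, not the mathematically-based selection procedure of the paper; the terms and the character n-gram features are invented for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

terms = ["data controller", "data processor", "personal data",
         "income tax", "tax payer", "tax deduction",
         "test case", "test oracle"]
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4)).fit_transform(terms)

# Try several cluster counts and keep the one with the best silhouette score.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

for k in range(best_k):
    print(f"cluster {k}:", [t for t, label in zip(terms, best_labels) if label == k])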

From RELAW Research to Practice: Reflections on an Ongoing Technology Transfer Project
Sannier, Nicolas UL; Sabetzadeh, Mehrdad UL; Briand, Lionel UL

in the IEEE 25th International Requirements Engineering Conference, Lisbon, Portugal, 4-8 September 2017 (2017, September)

Legal Markup Generation in the Large: An Experience Report
Sannier, Nicolas UL; Adedjouma, Morayo UL; Sabetzadeh, Mehrdad UL et al

in the 25th International Requirements Engineering Conference (RE'17), Lisbon, 4-8 September 2017 (2017, September)

The Case for Context-Driven Software Engineering Research
Briand, Lionel UL; Bianculli, Domenico UL; Nejati, Shiva UL et al

in IEEE Software (2017), 34(5), 72-75

Synthetic Data Generation for Statistical Testing
Soltana, Ghanem UL; Sabetzadeh, Mehrdad UL; Briand, Lionel UL

in 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE'17) (2017)

Usage-based statistical testing employs knowledge about the actual or anticipated usage profile of the system under test for estimating system reliability. For many systems, usage-based statistical testing involves generating synthetic test data. Such data must possess the same statistical characteristics as the actual data that the system will process during operation. Synthetic test data must further satisfy any logical validity constraints that the actual data is subject to. Targeting data-intensive systems, we propose an approach for generating synthetic test data that is both statistically representative and logically valid. The approach works by first generating a data sample that meets the desired statistical characteristics, without taking into account the logical constraints. Subsequently, the approach tweaks the generated sample to fix any logical constraint violations. The tweaking process is iterative and continuously guided toward achieving the desired statistical characteristics. We report on a realistic evaluation of the approach, where we generate a synthetic population of citizens' records for testing a public administration IT system. Results suggest that our approach is scalable and capable of generating data that is both statistically representative and logically valid.
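
Illustrative sketch (Python, hypothetical): a toy version of the generate-then-repair idea, drawing a sample that matches a target statistic and then minimally tweaking records that violate a logical constraint. The attributes, constraint, and numbers are invented; the actual approach is iterative and model-driven.

import random

random.seed(1)
TARGET_MEAN_AGE = 42

def generate(n):
    # record: (age, years_employed); validity constraint: years_employed <= age - 16
    return [(max(18.0, random.gauss(TARGET_MEAN_AGE, 12)), random.uniform(0, 40))
            for _ in range(n)]

def violates(record):
    age, worked = record
    return worked > age - 16

def repair(records):
    # Minimal tweak: cap years_employed so the record becomes valid again.
    return [(age, min(worked, age - 16)) for age, worked in records]

sample = repair(generate(10_000))
mean_age = sum(age for age, _ in sample) / len(sample)
print("violations left:", sum(violates(r) for r in sample))
print("mean age:", round(mean_age, 1), "(target", TARGET_MEAN_AGE, ")")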

An Automated Framework for Detection and Resolution of Cross References in Legal Texts
Sannier, Nicolas UL; Adedjouma, Morayo; Sabetzadeh, Mehrdad UL et al

in Requirements Engineering (2017), 22(2), 215-237

When identifying and elaborating compliance requirements, analysts need to follow the cross references in legal texts and consider the additional information in the cited provisions. Enabling easier navigation and handling of cross references requires automated support for the detection of the natural language expressions used in cross references, the interpretation of cross references in their context, and the linkage of cross references to the targeted provisions. In this article, we propose an approach and tool support for automated detection and resolution of cross references. The approach leverages the structure of legal texts, formalized into a schema, and a set of natural language patterns for legal cross reference expressions. These patterns were developed based on an investigation of Luxembourg’s legislation, written in French. To build confidence about their applicability beyond the context where they were observed, these patterns were validated against the Personal Health Information Protection Act (PHIPA) by the Government of Ontario, Canada, written in both French and English. We report on an empirical evaluation where we assess the accuracy and scalability of our framework over several Luxembourgish legislative texts as well as PHIPA.
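
Illustrative sketch (Python, hypothetical): a tiny regular-expression pattern in the spirit of the cross-reference expressions discussed above. The actual patterns were developed for Luxembourgish legislation written in French and are far more extensive; the example text is invented.

import re

CROSS_REF = re.compile(
    r"\b(?:[Aa]rticles?|[Pp]aragraphs?|[Ss]ections?)\s+"
    r"\d+[a-z]*(?:\s*\(\d+\))?"                          # e.g. "Article 105(2)" or "105bis"
    r"(?:\s*(?:,|and|to)\s*\d+[a-z]*(?:\s*\(\d+\))?)*"   # simple enumerations and ranges
)

text = ("Deductions are allowed under Articles 105 and 105bis. "
        "See also paragraph 3(2) of the present law.")
for match in CROSS_REF.finditer(text):
    print(match.group())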

Extracting Domain Models from Natural-Language Requirements: Approach and Industrial Evaluation
Arora, Chetan UL; Sabetzadeh, Mehrdad UL; Briand, Lionel UL et al

in 19th International Conference on Model Driven Engineering Languages and Systems, Saint-Malo, 2-7 October 2016 (2016, October)

Domain modeling is an important step in the transition from natural-language requirements to precise specifications. For large systems, building a domain model manually is laborious. Several approaches exist to assist engineers with this task, where Natural Language Processing is employed for automated extraction of domain model elements. Despite the existing approaches, important facets remain under-explored. Notably, there is limited empirical evidence about the usefulness of existing extraction rules in industry. Furthermore, important opportunities for enhancing the extraction rules are yet to be exploited. We develop a domain model extractor by bringing together existing extraction rules and proposing important enhancements. We apply our model extractor to four industrial requirements documents, reporting on the frequency of different extraction rules being applied. We conduct an expert study over one of these documents, investigating the accuracy and overall effectiveness of our domain model extractor.

Testing the Untestable: Model Testing of Complex Software-Intensive Systems
Briand, Lionel UL; Nejati, Shiva UL; Sabetzadeh, Mehrdad UL et al

in Proceedings of the 38th International Conference on Software Engineering (ICSE 2016) Companion (2016, May)
