Reference : An Empirical Study on the Potential Usefulness of Domain Models for Completeness Checking of Requirements
Scientific journals : Article
Engineering, computing & technology : Computer science
Computational Sciences
http://hdl.handle.net/10993/38815
An Empirical Study on the Potential Usefulness of Domain Models for Completeness Checking of Requirements
English
Arora, Chetan [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Sabetzadeh, Mehrdad [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
Briand, Lionel [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT)]
In press
Empirical Software Engineering
Kluwer Academic Publishers
Yes (verified by ORBilu)
International
ISSN: 1382-3256
eISSN: 1573-7616
Netherlands
[en] Requirements Quality Assurance ; Requirements Completeness ; Natural-language Requirements ; Domain Modeling ; Case Study Research
[en] [Context] Domain modeling is a common strategy for mitigating incompleteness in requirements. While the benefits of domain models for checking the completeness of requirements are anecdotally known, these benefits have never been evaluated systematically. [Objective] We empirically examine the potential usefulness of domain models for detecting incompleteness in natural-language requirements. We focus on requirements written as “shall”-style statements and domain models captured using UML class diagrams. [Methods] Through a randomized simulation process, we analyze the sensitivity of domain models to omissions in requirements. Sensitivity is a measure of whether a domain model contains information that can lead to the discovery of requirements omissions. Our empirical research method is case study research in an industrial setting. [Results and Conclusions] We have experts construct domain models in three distinct industry domains. We then report on how sensitive the resulting models are to simulated omissions in requirements. We observe that domain models exhibit near-linear sensitivity to both unspecified (i.e., missing) and under-specified requirements (i.e., requirements whose details are incomplete). The level of sensitivity is more than four times higher for unspecified requirements than under-specified ones. These results provide empirical evidence that domain models provide useful cues for checking the completeness of natural-language requirements. Further studies remain necessary to ascertain whether analysts are able to effectively exploit these cues for incompleteness detection.
H2020 ; 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
FnR ; FNR11601446 - Reconciling Natural-Language Requirements and Model-Based Specification for Effective Development of Critical Infrastructure Systems (RECONCIS)

File(s) associated with this reference

Fulltext file(s):

File: paper.pdf
Version: Author preprint
Size: 2.88 MB
Access: Open access


All documents in ORBilu are protected by a user license.