Article (Scientific journals)
Automatic Generation of Acceptance Test Cases from Use Case Specifications: an NLP-based Approach
WANG, Chunhui; PASTORE, Fabrizio; Göknil, Arda et al.
2022, in IEEE Transactions on Software Engineering, 48 (2), pp. 585–616
Peer reviewed
 

Documents


Full text
UMTG_TSE_2020.pdf
Author postprint (2.46 MB)

0098-5589 (c) 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


All documents in ORBilu are protected by a user license.




Details



Keywords:
System Test Case Generation; Use Case Specifications; Semantic Role Labeling; Natural Language Processing; Acceptance Testing
Abstract:
[en] Acceptance testing is a validation activity performed to ensure the conformance of software systems with respect to their functional requirements. In safety critical systems, it plays a crucial role since it is enforced by software standards, which mandate that each requirement be validated by such testing in a clearly traceable manner. Test engineers need to identify all the representative test execution scenarios from requirements, determine the runtime conditions that trigger these scenarios, and finally provide the input data that satisfy these conditions. Given that requirements specifications are typically large and often provided in natural language (e.g., use case specifications), the generation of acceptance test cases tends to be expensive and error-prone. In this paper, we present Use Case Modeling for System-level, Acceptance Tests Generation (UMTG), an approach that supports the generation of executable, system-level, acceptance test cases from requirements specifications in natural language, with the goal of reducing the manual effort required to generate test cases and ensuring requirements coverage. More specifically, UMTG automates the generation of acceptance test cases based on use case specifications and a domain model for the system under test, which are commonly produced in many development environments. Unlike existing approaches, it does not impose strong restrictions on the expressiveness of use case specifications. We rely on recent advances in natural language processing to automatically identify test scenarios and to generate formal constraints that capture conditions triggering the execution of the scenarios, thus enabling the generation of test data. 
In two industrial case studies, UMTG automatically and correctly translated 95% of the use case specification steps into formal constraints required for test data generation; furthermore, it generated test cases that exercise not only all the test scenarios manually implemented by experts, but also some critical scenarios not previously considered.
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Software Verification and Validation Lab (SVV Lab)
Disciplines:
Computer science
Author, co-author:
WANG, Chunhui; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
PASTORE, Fabrizio; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
Göknil, Arda; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
BRIAND, Lionel; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT)
External co-authors:
No
Document language:
English
Title:
Automatic Generation of Acceptance Test Cases from Use Case Specifications: an NLP-based Approach
Publication date:
01 February 2022
Journal title:
IEEE Transactions on Software Engineering
ISSN:
0098-5589
Publisher:
Institute of Electrical and Electronics Engineers, New York, United States
Volume:
48
Issue:
2
Pages:
585–616
Peer reviewed:
Peer reviewed
Focus area:
Security, Reliability and Trust
European project:
H2020 - 694277 - TUNE - Testing the Untestable: Model Testing of Complex Software-Intensive Systems
Funding body:
EC - European Commission
European Union
Available on ORBilu:
since 02 June 2020

Statistics


Number of views
433 (including 60 from UniLu)
Number of downloads
600 (including 27 from UniLu)

Scopus® citations: 54
Scopus® citations (excluding self-citations): 54
OpenAlex citations: 71
WoS citations: 35
