Paper published in a collective work (colloquia, congresses, scientific conferences and proceedings)
AI-based Question Answering Assistance for Analyzing Natural-language Requirements
EZZINI, Saad; ABUALHAIJA, Sallam; ARORA, Chetan et al.
2023. In Proceedings of the 45th International Conference on Software Engineering (ICSE'23), Melbourne, 14-20 May 2023
Peer reviewed
 

Documents


Full text
ICSE23.pdf
Author preprint (1.21 MB)

All documents in ORBilu are protected by a user licence.


Details



Keywords:
Natural-language Requirements; Question Answering (QA); Language Models; Natural Language Processing (NLP); Natural Language Generation (NLG); BERT; T5
Abstract:
By virtue of being prevalently written in natural language (NL), requirements are prone to various defects, e.g., inconsistency and incompleteness. As such, requirements are frequently subject to quality assurance processes. These processes, when carried out entirely manually, are tedious and may further overlook important quality issues due to time and budget pressures. In this paper, we propose QAssist, a question-answering (QA) approach that provides automated assistance to stakeholders, including requirements engineers, during the analysis of NL requirements. Posing a question and getting an instant answer is beneficial in various quality-assurance scenarios, e.g., incompleteness detection. Answering requirements-related questions automatically is challenging since the scope of the search for answers can go beyond the given requirements specification. To that end, QAssist provides support for mining external domain-knowledge resources. Our work is one of the first initiatives to bring together QA and external domain knowledge for addressing requirements engineering challenges. We evaluate QAssist on a dataset covering three application domains and containing a total of 387 question-answer pairs. We experiment with state-of-the-art QA methods, based primarily on recent large-scale language models. In our empirical study, QAssist localizes the answer to a question to three passages within the requirements specification and within the external domain-knowledge resource with an average recall of 90.1% and 96.5%, respectively. QAssist extracts the actual answer to the posed question with an average accuracy of 84.2%.
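The abstract describes a two-step pipeline: first localize a question's answer to a few candidate passages of the requirements specification, then extract the actual answer. The retrieval step can be illustrated with a minimal sketch. Note the TF-IDF-style scoring below is a deliberate simplification for illustration; QAssist itself relies on large-scale language models such as BERT and T5, and the function and variable names here are hypothetical, not taken from the paper's implementation.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def top_k_passages(question, passages, k=3):
    """Rank requirement passages by TF-IDF similarity to the question and
    return the k best candidates (k=3 mirrors the paper's three-passage
    answer localization)."""
    docs = [tokenize(p) for p in passages]
    n = len(docs)
    # Document frequency of each term over the passage collection.
    df = Counter()
    for d in docs:
        df.update(set(d))
    # Smoothed inverse document frequency.
    idf = {t: math.log((n + 1) / (df[t] + 1)) + 1 for t in df}
    q_terms = tokenize(question)
    scored = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = sum(tf[t] * idf.get(t, 0.0) for t in q_terms)
        scored.append((score, i))
    scored.sort(reverse=True)
    return [passages[i] for _, i in scored[:k]]

# Hypothetical example: encryption-related requirements rank first.
reqs = [
    "The system shall encrypt all data at rest using AES-256.",
    "The user interface shall respond within two seconds.",
    "All stored records shall be encrypted and access-logged.",
    "The system shall support at least 500 concurrent users.",
]
print(top_k_passages("Which data shall be encrypted?", reqs, k=2))
```

In a full pipeline, the top-ranked passages would then be handed to an extractive QA model (e.g., a BERT-style reader) that pinpoints the answer span within each passage.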
Research centre:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SVV - Software Verification and Validation
Disciplines:
Computer science
Author, co-author:
EZZINI, Saad; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
ABUALHAIJA, Sallam; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SVV
ARORA, Chetan; Deakin University; Monash University
SABETZADEH, Mehrdad; University of Ottawa > School of Electrical Engineering and Computer Science
External co-authors:
yes
Document language:
English
Title:
AI-based Question Answering Assistance for Analyzing Natural-language Requirements
Publication date:
May 2023
Event name:
45th International Conference on Software Engineering
Event dates:
from 14-05-2023 to 20-05-2023
Event scope:
International
Title of the main work:
Proceedings of the 45th International Conference on Software Engineering (ICSE'23), Melbourne 14-20 May 2023
Publisher:
IEEE Press
Peer reviewed :
Peer reviewed
FNR project:
FNR12632261 - Early Quality Assurance Of Critical Systems, 2018 (01/01/2019-31/12/2021) - Mehrdad Sabetzadeh
Funding body:
FNR - Fonds National de la Recherche
Available on ORBilu:
since 13 January 2023

Statistics

Number of views
322 (including 30 from Unilu)
Number of downloads
1711 (including 20 from Unilu)

Scopus® citations
20
Scopus® citations without self-citations
15
OpenAlex citations
23
