Paper published on a website (Scientific congresses, symposiums and conference proceedings)
AI Assisted Domain Modeling Explainability and Traceability
SILVA MERCADO, Jonathan
2024 — MODELS Companion '24: International Conference on Model Driven Engineering Languages and Systems
Peer reviewed
 

Files


Full Text
AI Assisted Domain Modeling Explainability and Traceability.pdf
Publisher postprint (825.7 kB)
Details



Keywords :
• Computing methodologies → Natural language processing; Artificial intelligence; • Software and its engineering → Model-driven software engineering; Traceability.
Abstract :
[en] Domain models are abstract representations of selected elements in a domain, created in a collaborative process between domain experts and modelers. The participants share domain knowledge to conceptualize and reason about the elements that will form the domain model. Through this exchange, a comprehensive and accurate representation of the domain is achieved, ensuring that the model captures the relevant aspects and relationships in the domain. Research in Artificial Intelligence (AI) has explored various methods to assist in the creation of domain models from text using Natural Language Processing (NLP) and Machine Learning (ML). Recent advancements with Large Language Models (LLMs) have shown that it is possible to create domain models using prompting techniques; however, the generated domain models contain errors and remain constrained by the performance of the LLM used. Despite the impressive capabilities of LLMs in creating domain models, they do not address the needs of the domain experts and modelers who participate in the creation of domain models. Every AI technique has advantages and limitations and must be integrated with human feedback in a collaborative process. We therefore propose an approach to human-AI collaboration supported by AI assistants that follow a dialogue-based approach to understand the users' needs and purpose in order to suggest relevant models. Our proposal combines symbolic and subsymbolic AI techniques with explainability and traceability of the decisions made while assisting in the creation of domain models that are relevant to the users.
Disciplines :
Computer science
Author, co-author :
SILVA MERCADO, Jonathan   ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
External co-authors :
no
Language :
English
Title :
AI Assisted Domain Modeling Explainability and Traceability
Publication date :
31 October 2024
Event name :
MODELS Companion '24: International Conference on Model Driven Engineering Languages and Systems
Event date :
22–27 September 2024
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 29 November 2024
