Paper published in a book (Scientific congresses, symposiums and conference proceedings)
A DSL for Testing LLMs for Fairness and Bias
Morales, Sergio; Clarisó, Robert; CABOT, Jordi
2024In Proceedings - MODELS 2024: ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems
Peer reviewed
 

Files


Full Text
A_DSL_for_Testing_LLMs_for_Fairness_and_Bias___MODELS__24.pdf
Author postprint (960.78 kB)
Download

All documents in ORBilu are protected by a user license.

Send to



Details



Keywords :
Bias; Domain-Specific Language; Ethics; Large Language Models; Model-Driven Engineering; Red Teaming; Testing; Development teams; Domains specific languages; Enhanced software; Ethical concerns; Language model; Large language model; Model-driven Engineering; Red teaming; Software-systems; Modeling and Simulation
Abstract :
[en] Large language models (LLMs) are increasingly integrated into software systems to enhance them with generative AI capabilities. But LLMs may reflect a biased behavior, resulting in systems that could discriminate against gender, age or ethnicity, among other ethical concerns. Society and upcoming regulations will force companies and development teams to ensure their AI-enhanced software is ethically fair. To facilitate such ethical assessment, we propose LangBiTe, a model-driven solution to specify ethical requirements, and customize and automate the testing of ethical biases in LLMs. The evaluation can raise awareness on the biases of the LLM-based components of the system and/or trigger a change in the LLM of choice based on the requirements of that particular application. The model-driven approach makes both the requirements specification and the test generation platform-independent, and provides end-to-end traceability between the requirements and their assessment. We have implemented an open-source tool set, available on GitHub, to support the application of our approach.
Disciplines :
Computer science
Author, co-author :
Morales, Sergio ;  Universitat Oberta de Catalunya, Barcelona, Spain
Clarisó, Robert ;  Universitat Oberta de Catalunya, Barcelona, Spain
CABOT, Jordi  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PI Cabot
External co-authors :
yes
Language :
English
Title :
A DSL for Testing LLMs for Fairness and Bias
Publication date :
22 September 2024
Event name :
Proceedings of the ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems
Event place :
Linz, Aut
Event date :
22-09-2024 => 27-09-2024
Audience :
International
Main work title :
Proceedings - MODELS 2024: ACM/IEEE 27th International Conference on Model Driven Engineering Languages and Systems
Publisher :
Association for Computing Machinery, Inc
ISBN/EAN :
9798400705045
Peer reviewed :
Peer reviewed
FnR Project :
FNR16544475 - Better Smart Software Faster (Besser) - An Intelligent Low-code Infrastructure For Smart Software, 2020 (01/01/2022-...) - Jordi Cabot
Funding text :
This work has been partially funded by the AIDOaRt project (ECSEL Joint Undertaking, grant agreement 101007350); the research network RED2022-134647-T (MCIN/AEI/10.13039/501100011033); the Luxembourg National Research Fund (FNR) PEARL program (grant agreement 16544475); and the Spanish government (PID2020-114615RB-I00/AEI/10.13039/501100011033, project LOCOSS).
Available on ORBilu :
since 25 November 2024

Statistics


Number of views
85 (2 by Unilu)
Number of downloads
48 (1 by Unilu)

Scopus citations®
 
9
Scopus citations®
without self-citations
5
OpenCitations
 
0
OpenAlex citations
 
8
WoS citations
 
5

Bibliography


Similar publications



Contact ORBilu