Computer Science - Artificial Intelligence; Computer Science - Computers and Society
Abstract:
[en] When making strategic decisions, we are often confronted with overwhelming
information to process. The situation is further complicated when some pieces
of evidence contradict each other or are paradoxical. The challenge then
becomes determining which information is useful and which should be discarded.
This process is known as meta-decision. Likewise, when Artificial Intelligence
(AI) systems are used for strategic decision-making, placing trust in the AI
itself becomes a meta-decision, since many AI systems are viewed as opaque
"black boxes" that process large amounts of data. Trusting an opaque system
involves deciding on the required level of Trustworthy AI (TAI). We propose a
new approach to this issue by introducing a novel taxonomy, or framework, of
TAI that encompasses three crucial domains corresponding to different levels of
trust: articulate, authentic, and basic. To underpin these domains, we define
ten dimensions along which trust is measured: explainability/transparency,
fairness/diversity, generalizability, privacy, data governance,
safety/robustness, accountability, reproducibility, reliability, and
sustainability. Using this taxonomy, we conduct a comprehensive survey of TAI
approaches from a strategic decision-making perspective.
Disciplines:
Computer science
Author, co-author:
WU, Caesar (ming-wei) ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PCOG
LI, Yuan-Fang
BOUVRY, Pascal ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)