[en] This paper presents findings on predicting the behavior of a machine learning agent in a board game application developed by a group of students. The goal of this research is to create a model that enables a user and an Artificial Intelligence (AI) agent to play the board game together within a Human-in-the-Loop architecture. By injecting explainability into this setting, the aim is to enhance communication and understanding between the user and the AI agent. Building on a competitive AI based on the Proximal Policy Optimization (PPO) model, this research explores methods for making the AI's decisions transparent so that players can better understand them. Two predictive models, a Decision Tree (DT) and a Deep Learning (DL) classifier, were developed and compared. The results show that the DT model is effective for short-term predictions but limited in broader applications, whereas the DL classifier shows potential for long-term prediction without requiring direct access to the game's AI. This study contributes to the understanding of human-AI interaction in gaming and offers insights into AI decision-making processes.
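For illustration, a minimal sketch of how such a next-move predictor could be set up in Python with scikit-learn: a shallow Decision Tree trained as an interpretable surrogate on logged (board state, chosen move) pairs from the PPO agent. The board encoding, move space, and synthetic data below are assumptions made for the example; the record does not describe the paper's actual game, features, or training pipeline.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: each row encodes one observed board state and the
# label is the move the PPO agent actually chose in that state.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(1000, 64))  # assumed encoding: 64 cells, 3 values each
y = rng.integers(0, 7, size=1000)        # assumed move space: 7 legal moves

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A shallow tree keeps the surrogate interpretable: each root-to-leaf path
# is a human-readable rule that can be shown to the player as an explanation.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X_train, y_train)

print("Next-move prediction accuracy:", accuracy_score(y_test, surrogate.predict(X_test)))

In the paper's comparison, the DL classifier plays the analogous role over longer horizons; this sketch covers only the short-term DT surrogate described in the abstract.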
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > FINATRAX - Digital Financial Services and Cross-organizational Digital Transformations
Disciplines :
Computer science; Management information systems
Author, co-author :
Damette, Nathan; UTBM, CIAD UMR 7533, Belfort, France; FINATRAX, SnT, University of Luxembourg, Kirchberg, Luxembourg
Szymanski, Maxime; UTBM, CIAD UMR 7533, Belfort, France
Mualla, Yazan; UTBM, CIAD UMR 7533, Belfort, France
Tchappi Haman, Igor; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > FINATRAX
Najjar, Amro; University of Luxembourg > Faculty of Science, Technology and Medicine > Department of Computer Science > Team Leon van der Torre; Luxembourg Institute of Science and Technology (LIST), Esch-sur-Alzette, Luxembourg
Adda, Mehdi; Université du Québec à Rimouski, Rimouski, Canada
External co-authors :
yes
Language :
English
Title :
Forecasting Future Behavior: Agents in Board Game Strategy
Publication date :
2024
Event name :
19th International Conference on Future Networks and Communications / 21st International Conference on Mobile Systems and Pervasive Computing / 14th International Conference on Sustainable Energy Information Technology