[en] The increasing adoption of Generative AI in knowledge-intensive workflows raises questions about its impact on knowledge workers' critical thinking (CT) skills and decision-making. One proposed remedy is to mitigate AI's effects on human decision-making through AI-based interventions that encourage more deliberate reasoning. Designing such interventions, however, requires a precise definition of what critical thinking is and how to measure it. Moreover, eliciting particular modes of thinking in a controlled environment does not guarantee that people will apply them in a real decision-making process. This position paper clarifies the problems stemming from both the definitional and the practical issues of CT measurement, discusses what characteristics a good proxy measure of CT should have, and examines human responses to 'easy automation'.
Disciplines :
Computer science
Author, co-author :
Sergeeva, Elena ✱; MIT - Massachusetts Institute of Technology
SERGEEVA, Anastasia ✱; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
✱ These authors have contributed equally to this work.
External co-authors :
yes
Language :
English
Title :
THE GOAL: Misalignment between the measured metrics and the nebulous concept of critical thinking in joint decision-making
Publication date :
2025
Number of pages :
6
Event name :
2025 ACM Workshop on Human-AI Interaction for Augmented Reasoning (AIREASONING-2025-01)
Funding :
The research was supported by the Luxembourg National Research Fund (REMEDIS, REgulatory and other solutions to MitigatE online DISinformation (INTER/FNRS/21/16554939)).