More than a decade of research on (model-based) automatic item generation
(Gierl et al., 2012; Gierl et al., 2023) has passed, and although the field has come far,
the underlying technology and its implications are still not fully understood,
leaving many aspects under-researched. Meanwhile, items developed by
(agnostic) generative AI (Laverghetta Jr. & Licato, 2023) seem to be the new
solution to the time-intensive and expensive development of test items (Kosh et al.,
2018). However, such generated items, despite being cost-effective, lack traceability
of their components (e.g., the stem, the question, the distractors), endangering
principles of construct validity. In this presentation, we make the case for not
dropping model-based automatic item generation too early by demonstrating and
discussing the automatic item generator Auto.Math, which is built on
psychometrically tested cognitive models.
These models were provided by the large, multilingual item pools of
Luxembourg's national school monitoring program (Épreuves Standardisées), which
uses the national education curriculum as guidance for its item development.
Auto.Math was built to meet the needs of this program and its ever-growing
demand for new items.
A major feature that distinguishes Auto.Math from other generators, and especially
from AI-based models, is its theoretical framework grounded in empirically and
psychometrically validated data, which allows for the differentiation of difficulty
levels. This means that everything from the information entered, through the creation
process, to the finished item and its attributes is theory-based, transparent, and
can thus be traced.
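To make this traceability concrete, the following minimal Python sketch illustrates the general idea of generating an item from a cognitive model while recording its provenance. It is purely illustrative; all class names, attributes, and the toy addition model are hypothetical and do not reflect the actual Auto.Math implementation.

```python
# Illustrative sketch only: model-based item generation with a full trace of
# item components. Names and structures are hypothetical, not Auto.Math's.
from dataclasses import dataclass, field
import random

@dataclass
class CognitiveModel:
    """A hypothetical cognitive model: a curriculum competency, an operation,
    and number ranges that define the empirically anchored difficulty levels."""
    competency: str
    operation: str                # e.g. "addition"
    ranges: dict                  # difficulty level -> (low, high)

@dataclass
class GeneratedItem:
    stem: str
    question: str
    key: int
    distractors: list
    trace: dict = field(default_factory=dict)  # provenance of every attribute

def generate_item(model: CognitiveModel, difficulty: int,
                  rng: random.Random) -> GeneratedItem:
    low, high = model.ranges[difficulty]
    a, b = rng.randint(low, high), rng.randint(low, high)
    key = a + b                                       # toy "addition" model
    # Distractors derived from documented, typical error patterns
    distractors = sorted({key + 1, key - 1, key + 10} - {key})
    return GeneratedItem(
        stem=f"A pupil has {a} marbles and receives {b} more.",
        question="How many marbles does the pupil have now?",
        key=key,
        distractors=distractors,
        trace={
            "competency": model.competency,
            "operation": model.operation,
            "difficulty_level": difficulty,
            "number_range": (low, high),
            "operands": (a, b),
        },
    )

model = CognitiveModel(
    competency="Number & operations: additive reasoning",
    operation="addition",
    ranges={1: (1, 10), 2: (10, 100), 3: (100, 1000)},
)
item = generate_item(model, difficulty=2, rng=random.Random(42))
print(item.question, item.key, item.trace)
```

Because every generated attribute is written to the trace, each item can be linked back to the cognitive model, difficulty level, and parameters that produced it; this is the kind of transparency that agnostic generative AI does not provide.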
We will discuss fields of application, among them addressing the training needs
of pupils, particularly in areas where national school monitoring programs
have identified shortcomings. Further testing and validation of the system will be
necessary before it can be considered for individual assessment purposes.
Disciplines:
Education & teaching
Author, co-author:
BERNARD, Steve ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > LUCET
RATHMACHER, Yannick ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences > LUCET > Team Sonja UGEN
KINIF, Pierrick Sophian G ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences > LUCET > Team Philipp SONNLEITNER
KELLER, Ulrich ; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > LUCET
SONNLEITNER, Philipp ✱; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > LUCET
✱ These authors contributed equally to the publication.
External co-authors:
no
Document language:
English
Title:
Automatic math item generator “autoMATH”: Bridging the gap between tradition and AI?
Publication/release date:
03 July 2024
Event name:
International Test Commission (ITC) Conference 2024
Event location:
Granada, Spain
Event date:
from 02 to 05 July 2024
Event scope:
International
Peer reviewed:
Peer reviewed
Focus Area:
Educational Sciences
FnR project:
FNR13650128 - Fairness Of Latest Innovations In Item And Test Development In Mathematics, 2019 (01/09/2020-31/08/2023) - Philipp Sonnleitner