How does usability improve computer-based knowledge assessment?
Doctoral thesis
Social & behavioral sciences, psychology: Multidisciplinary, general & others
Weinerth, Katja (University of Luxembourg > Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE) > Educational Measurement and Applied Cognitive Science (EMACS))
University of Luxembourg, Luxembourg
Docteur en Psychologie (Doctor of Psychology)
Martin, Romain
Koenig, Vincent
Brunner, Martin
[en] human-computer interaction ; usability ; computer-based assessment ; educational assessment ; concept map ; reliability ; media in education ; pedagogical issues
[en] There has been a major shift from paper-and-pencil towards computer-based assessments (CBAs). CBA has the potential to overcome various limitations of traditional assessment approaches, mostly because it allows complex knowledge concepts to be measured more easily and effectively via dynamic items. Innovative item formats in CBA (dynamic or interactive multimedia items) allow the assessment of complex skills (e.g., complex problem solving), but their augmented interactivity also tends to increase the complexity of CBA instruments. If the CBA is not user-friendly, the test-taker might spend time and effort trying to understand how to interact with the system instead of focusing on the assessment task itself. Research in human-computer interaction (HCI) shows that the usability, or user-friendliness, of a system affects the interaction. Usability addresses how appropriate (for a particular use) and how user-friendly a CBA instrument is. Thus, any detrimental technical bias that reduces the user-friendliness of a CBA instrument will affect the user’s interaction with the instrument and, consequently, the instrument’s psychometric properties.

There are certain guidelines (e.g., International Test Commission, 2005) that require usability testing to safeguard against a CBA instrument measuring skills or competencies other than those it is intended to measure. However, surprisingly little research has investigated whether usability has been acknowledged in CBAs and whether researchers are aware of the impact usability might have on assessment results. To answer these and further research questions, three studies were conducted within the present Ph.D. project. These studies focused on a specific CBA instrument: concept maps.

A concept map is a graphical representation of a knowledge concept and can be described as a “knowledge net.” Research has shown that the concept map is a valid and reliable instrument for measuring conceptual knowledge. Concept map instruments also exemplify the increasing popularity of CBA, as technology allows test-takers to correct and improve their concept maps with ease, and both test-takers and their examiners receive immediate feedback on the correctness of a map. Furthermore, concept maps reflect new approaches to knowledge assessment, which is currently a major trend. Taken together, this makes concept maps a good proxy for new ways of assessing knowledge. Our research builds on three studies (Study I: literature-based study; Study II: laboratory study; Study III: school study) that investigated the impact of usability on computer-based knowledge assessment using concept maps.
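To make the “knowledge net” idea concrete, a concept map can be modelled as a set of labelled propositions (concept, linking phrase, concept) and compared against an expert reference map. The following sketch is purely illustrative: the water-cycle example and the overlap-based scoring rule are assumptions for demonstration, not the instrument evaluated in the thesis.

```python
# Illustrative sketch: a concept map as a set of propositions
# (concept, linking phrase, concept), scored by overlap with an
# expert reference map. Example data and scoring rule are assumed.

def proposition_score(student_map, expert_map):
    """Fraction of expert propositions reproduced by the student."""
    return len(student_map & expert_map) / len(expert_map)

expert_map = {
    ("water", "evaporates into", "vapour"),
    ("vapour", "condenses into", "clouds"),
    ("clouds", "produce", "rain"),
}

student_map = {
    ("water", "evaporates into", "vapour"),
    ("clouds", "produce", "rain"),
    ("rain", "falls on", "land"),  # extra proposition, not in the reference
}

# The student matches 2 of the 3 expert propositions.
print(proposition_score(student_map, expert_map))
```

Set intersection makes the scoring immediate, which is one reason computer-based concept maps can give the instant correctness feedback described above.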

Study I, a literature-based study, scrutinized the existing literature to answer the fundamental research question of whether and how usability has been acknowledged in CBA concept map studies. As no such review existed prior to the current project, a systematic literature review was conducted to shed light on the representation and relevance of usability in CBA concept map studies. The review indicated that only 24 of 119 journal articles on computer-based concept map assessment discussed the usability of the applied instrument in some way, and only three of those 24 articles explicitly mentioned that they had evaluated it. The literature review thus illustrated that usability is rarely acknowledged and reported in CBA concept map studies: its impact, although well established in the field of HCI, has received insufficient attention in the field of educational assessment.

Study II, a laboratory study, addressed the main research question of how HCI methods can be introduced to the field of educational assessment to improve the usability of a CBA concept map instrument. Following a user-centered design and development approach, which makes the user the key reference, the established HCI methods of usability testing and heuristic analysis were combined in the usability laboratory to evaluate and further improve a CBA concept map instrument. We applied three iterative design and re-engineering cycles based on the results of the usability testing and heuristic analyses. To verify the improvements in usability, three independent, randomly assigned groups of 30 students each underwent concept map assessments using the three iteratively developed instruments (baseline V1 and the further developed V2 and V3). The results of this between-subjects study showed that the HCI methods allowed us to design and develop demonstrably usability-improved concept map instruments; they furthermore revealed that the usability improvements significantly improved the assessment outcomes.

Study III, a school study, empirically verified the impact of usability on the psychometrics of the applied CBA instruments and on the satisfaction and performance of the test-takers. The experimental study was conducted at school, where 542 students were randomly assigned to one of the three CBA concept map instruments with successively improved usability. The performance of test-takers who worked with the usability-improved instruments increased significantly in comparison with the baseline version, and test-takers reported greater satisfaction with the usability-improved instruments. Moreover, the internal consistency of the items increased from a Cronbach’s alpha of .62 (baseline) to .84 (usability-improved instruments).
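The reliability measure reported above, Cronbach’s alpha, is defined for k items as α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where σ²ᵢ are the item variances and σ²ₜ is the variance of the total scores. The following sketch computes it from a respondents-by-items score matrix using only the Python standard library; the demo scores are made up for illustration, not the study’s data.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])                         # number of items
    items = list(zip(*scores))                 # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(row) for row in scores]      # each respondent's total score
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Made-up example: four test-takers, three items.
demo = [
    [1, 1, 1],
    [2, 2, 1],
    [3, 3, 3],
    [4, 4, 3],
]
print(round(cronbach_alpha(demo), 3))  # 0.975
```

Higher inter-item consistency drives the ratio of summed item variances to total-score variance down, which is what pushed alpha from .62 to .84 across the instrument versions.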

To summarize, Study I showed that the impact of usability is rarely evaluated and discussed in CBA concept map studies. Studies II and III clearly showed that usability has a positive impact on test-takers’ interaction with the CBA concept map instrument. Specifically, Study II demonstrated how HCI methods can be used to obtain usability-improved CBA instruments, which in turn allow for better assessment outcomes. In addition, Study III showed that the psychometrics are also affected by the usability of the instrument; specifically, the reliability (measured as the internal consistency of the applied items) increased when the usability-improved instruments were used. Thus, the continuing trend towards CBAs calls for more systematic usability research to help ensure satisfactory psychometric test properties and test-takers’ satisfaction with the instrument. The studies confirmed the hypothesis that if usability is not taken into account, compromised assessment results may severely undermine the quality of individual diagnostics as well as educational policies and decisions.


