Abstract:
Remote assessment is attractive because it lowers delivery costs, scales to large cohorts, and broadens access for learners facing geographic or financial constraints. However, shifting assessment out of supervised classrooms changes the integrity landscape: without physical oversight, personal technology and private spaces make unauthorized assistance easier and detection harder. Institutions respond with security measures, yet these can also hinder honest test-takers by increasing anxiety, eroding trust, or creating inequitable conditions, thereby undermining validity. This dissertation argues that integrity, fairness, and experience need not trade off against one another; with appropriate design, they can be mutually reinforcing.
The thesis proposes a three-pillar framework that integrates assessment validity research with user-centered design. It evaluates interventions across (1) assessment integrity (discouraging and preventing unauthorized assistance while supporting honest behavior), (2) fair performance conditions (enabling test-takers to demonstrate true competence under reasonable and comparable circumstances), and (3) positive assessment experiences (supportive, non-punitive interactions). Rather than prioritizing security alone, the framework treats all three pillars as complementary criteria that jointly constitute comprehensive validity.
The central question is: which design principles for remote assessment interventions can simultaneously uphold integrity, ensure fair performance conditions, and foster positive assessment experiences? Three mixed-methods studies address this question across proctored and unproctored contexts. Stakeholders were engaged through interviews and expert/co-design activities to elicit constraints and requirements. Prototype interventions were then tested in large-scale randomized controlled trials in simulated online assessment environments using behavioral indicators, system-level integrity measures, and self-report measures of fairness and experience.
Study 1 examines webcam-based remote proctoring and tackles the privacy-security dilemma. It evaluates visual obfuscation techniques that reduce exposure of privacy-sensitive information while preserving cues needed for cheating detection. Experiments (N=259) comparing obfuscation of key regions (face, body, background) show that carefully designed approaches, such as replacing faces with 3D avatars, can mitigate privacy concerns while retaining security-relevant information, though cost and user acceptance remain practical barriers.
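For readers who want a concrete picture of region-based obfuscation, the sketch below shows a minimal baseline from this family of techniques: Gaussian-blurring the detected face region of a webcam frame so that identity cues are suppressed while presence and coarse motion remain visible. This is an illustrative assumption on my part, using OpenCV and its stock Haar cascade, not the avatar pipeline evaluated in the study.

# Minimal sketch (assumed, not the thesis pipeline): region-based
# obfuscation of a webcam frame with OpenCV. Faces are heavily blurred
# so identity is hidden while presence and motion stay visible.
import cv2

def obfuscate_faces(frame):
    """Return a copy of `frame` with each detected face Gaussian-blurred."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = frame.copy()
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # A large odd kernel removes identity cues inside the face region.
        out[y:y+h, x:x+w] = cv2.GaussianBlur(out[y:y+h, x:x+w], (51, 51), 0)
    return out

A blur baseline like this trades away nearly all facial information for simplicity; the avatar replacement studied in the thesis instead aims to retain security-relevant cues, such as head pose, while still concealing identity.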
Study 2 evaluates common messaging interventions in unproctored assessments, comparing honor code reminders, warning messages, and fake surveillance prompts. Using cognitive aptitude tasks, with cheating detected via behavioral monitoring and "trap" websites (N=997), results show that brief messages reduce the odds of cheating by about 40% (prevalence falling from 21.2% to 13.5%) without harming performance or overall user experience; honor code reminders are the most effective. Qualitative feedback also highlights limitations: ad hoc messages often lack theoretical grounding and may elicit distrust or stress in some participants.
Study 3 addresses these limitations by developing theory-informed, motivation-centered messages. Fifteen psychological concepts were operationalized via expert workshops to create 45 supportive messages that promote engagement rather than explicit rule enforcement. In an RCT (N=1232) using an anagram task, concept-based messages reduced full cheating by 42% (from 33% to 19%) and increased non-cheating (from 53% to 63%) without negative effects on performance or experience across integrity groups. Surprisingly, messages grounded in different theories produced similar outcomes, and mechanism analyses indicate that messages influence multiple psychological states simultaneously, suggesting more complex causal pathways than standard models predict.
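Because relative-rate reductions and odds reductions are easy to conflate, the short calculation below reconstructs both headline figures, assuming the quoted percentages are the observed cheating prevalences per condition:

# Sanity check of the reported effect sizes (assumption: the quoted
# percentages are observed cheating prevalences in each condition).

def odds(p):
    return p / (1 - p)

# Study 2: prevalence 21.2% -> 13.5%
print(1 - 0.135 / 0.212)              # ~0.36 relative-rate reduction
print(1 - odds(0.135) / odds(0.212))  # ~0.42 odds reduction ("about 40%")

# Study 3: full cheating 33% -> 19%
print(1 - 0.19 / 0.33)                # ~0.42 relative-rate reduction

Under this reading, Study 2's "about 40%" figure is a reduction in odds, while Study 3's 42% is a relative reduction in the cheating rate itself.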
Across studies, results consistently show that integrity, fair performance conditions, and positive experiences do not necessarily conflict. The dissertation contributes a systematic methodology for designing and evaluating user-centered integrity interventions, validated privacy-preserving proctoring techniques, experimental infrastructures for realistic online testing and cheating detection, and an LLM-assisted workflow for generating theory-grounded messaging. These contributions are timely amid AI-driven disruption of conventional integrity measures, offering a path to remote assessments that maintain academic rigor while protecting learner welfare. The dissertation also clarifies when supportive design complements, rather than replaces, necessary technical safeguards in practice.
Institution:
Unilu - University of Luxembourg [Faculty of Humanities, Education and Social Sciences], Belval, Luxembourg