Unpublished conference/Abstract (Scientific congresses, symposiums and conference proceedings)
Stability of Value-Added Models: Comparing Classical and Machine Learning Approaches
Emslander, Valentin; Levy, Jessica; Scherer, Ronny et al.
2021, PAEPSY 2021 Tagung der Fachgruppe Pädagogische Psychologie


Full Text
Emslander, Levy, Scherer, Brunner, & Fischbach (Sep 2021).pdf
Publisher postprint (107.75 kB)

All documents in ORBilu are protected by a user license.

Keywords :
Value-Added; School Effectiveness; Machine Learning
Abstract :
[en] Background: What is the value that teachers or schools add to the evolution of students’ performance? Value-added (VA) modeling aims to answer this question by quantifying the effect of pedagogical actions on students’ achievement, independent of students’ backgrounds (e.g., Braun, 2005). A plethora of VA models exist, and several outcome measures are in use to estimate VA scores, yet without consensus on the model specification (Everson, 2017; Levy et al., 2019). Furthermore, it is unclear whether the most frequently used VA models (i.e., multilevel, linear regression, and random forest models) and outcome measures (i.e., language and mathematics achievement) indicate a similar stability of VA scores over time.

Objectives: Drawing from the data of a highly diverse and multilingual school setting, where leveling out the influence of students’ backgrounds is of special interest, we aim to (a) clarify the stability of school VA scores over time; (b) shed light on the sensitivity toward different statistical models and outcome variables; and (c) evaluate the practical implications of (in)stable VA scores for individual schools.

Method: Utilizing the representative, longitudinal data from the Luxembourg School Monitoring Programme (LUCET, 2021), we examined the stability of school VA scores. We drew on two longitudinal data sets of students who participated in the standardized achievement tests in Grade 1 in 2014 or 2016 and then again in Grade 3 two years later (i.e., in 2016 and 2018, respectively), with a total of 5,875 students in 146 schools. School VA scores were calculated using classical approaches (i.e., linear regression and multilevel models) and one of the most commonly used machine learning approaches in educational research (i.e., random forests).

Results and Discussion: The overall stability over time across the VA models was moderate, with multilevel models showing greater stability than linear regression models and random forests. Stability differed across outcome measures and was higher for VA models with language achievement as an outcome variable as compared to those with mathematics achievement. Practical implications for schools and teachers will be discussed.
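The regression-based part of the procedure described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: it covers the classical linear-regression VA approach (the study also uses multilevel models and random forests, and adjusts for student background characteristics, which are omitted here), and all function names and data are invented for demonstration. A school's VA score is taken as the mean residual of its students after regressing later achievement on prior achievement, and stability as the correlation of school VA scores across two cohorts.

```python
import numpy as np

def school_va_scores(prior, outcome, school_ids):
    """Regression-based VA sketch: regress Grade-3 achievement on
    Grade-1 achievement, then average the residuals per school."""
    X = np.column_stack([np.ones_like(prior), prior])     # intercept + prior score
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)    # OLS fit
    residuals = outcome - X @ beta                        # achievement beyond expectation
    return {s: residuals[school_ids == s].mean()          # school-level mean residual
            for s in np.unique(school_ids)}

def va_stability(va_t1, va_t2):
    """Stability sketch: Pearson correlation of school VA scores
    across two cohorts (schools present in both)."""
    common = sorted(set(va_t1) & set(va_t2))
    a = np.array([va_t1[s] for s in common])
    b = np.array([va_t2[s] for s in common])
    return np.corrcoef(a, b)[0, 1]
```

In this sketch, a high correlation would indicate that schools receiving high VA scores in one cohort tend to receive high VA scores in the next, which is the kind of cross-cohort stability the abstract reports as moderate.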
Research center :
- Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE) > Luxembourg Centre for Educational Testing (LUCET)
Disciplines :
Education & instruction
Author, co-author :
Emslander, Valentin; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > LUCET
Levy, Jessica; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Education and Social Work (DESW)
Scherer, Ronny; University of Oslo - UiO > Centre for Educational Measurement at the University of Oslo (CEMO), Faculty of Educational Sciences
Brunner, Martin; University of Potsdam > Department of Education
Fischbach, Antoine; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Education and Social Work (DESW)
External co-authors :
Language :
English
Title :
Stability of Value-Added Models: Comparing Classical and Machine Learning Approaches
Publication date :
September 2021
Event name :
PAEPSY 2021 Tagung der Fachgruppe Pädagogische Psychologie
Event organizer :
DGPs Fachgruppe Pädagogische Psychologie
Event date :
from 14-09-2021 to 16-09-2021
Audience :
References of the abstract :
Braun, H. (2005). Using student progress to evaluate teachers: A primer on value-added models. Educational Testing Service.
Everson, K. C. (2017). Value-added modeling and educational accountability: Are we answering the real questions? Review of Educational Research, 87(1), 35–70. https://doi.org/10.3102/0034654316637199
Levy, J., Brunner, M., Keller, U., & Fischbach, A. (2019). Methodological issues in value-added modeling: An international review from 26 countries. Educational Assessment, Evaluation and Accountability, 31(3), 257–287. https://doi.org/10.1007/s11092-019-09303-w
LUCET. (2021). Épreuves Standardisées (ÉpStan). https://epstan.lu
Focus Area :
Educational Sciences
Name of the research project :
Systematic Identification of High "Value-Added" in Educational Contexts - SIVA
Available on ORBilu :
since 25 September 2021

