Reference : Stability of Value-Added Models: Comparing Classical and Machine Learning Approaches
Scientific congresses, symposiums and conference proceedings : Unpublished conference
Social & behavioral sciences, psychology : Education & instruction
Educational Sciences
Emslander, Valentin [University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > LUCET]
Levy, Jessica [University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Education and Social Work (DESW)]
Scherer, Ronny [University of Oslo (UiO) > Centre for Educational Measurement at the University of Oslo (CEMO), Faculty of Educational Sciences]
Brunner, Martin [University of Potsdam > Department of Education]
Fischbach, Antoine [University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Education and Social Work (DESW)]
Braun, H. (2005). Using student progress to evaluate teachers: A primer on value-added models. Educational Testing Service.

Everson, K. C. (2017). Value-added modeling and educational accountability: Are we answering the real questions? Review of Educational Research, 87(1), 35–70.

Levy, J., Brunner, M., Keller, U., & Fischbach, A. (2019). Methodological issues in value-added modeling: An international review from 26 countries. Educational Assessment, Evaluation and Accountability, 31(3), 257–287.

LUCET. (2021). Épreuves Standardisées (ÉpStan).
PAEPSY 2021, Conference of the Educational Psychology Section (Tagung der Fachgruppe Pädagogische Psychologie)
14 September 2021 to 16 September 2021
German Psychological Society (DGPs), Educational Psychology Section
Keywords: Value-Added; School Effectiveness; Machine Learning
Background: What is the value that teachers or schools add to the evolution of students’ performance? Value-added (VA) modeling aims to answer this question by quantifying the effect of pedagogical actions on students’ achievement, independent of students’ backgrounds (e.g., Braun, 2005). A plethora of VA models exist, and several outcome measures are in use to estimate VA scores, yet without consensus on the model specification (Everson, 2017; Levy et al., 2019). Furthermore, it is unclear whether the most frequently used VA models (i.e., multilevel, linear regression, and random forest models) and outcome measures (i.e., language and mathematics achievement) indicate a similar stability of VA scores over time.
Objectives: Drawing from the data of a highly diverse and multilingual school setting, where leveling out the influence of students’ backgrounds is of special interest, we aim to (a) clarify the stability of school VA scores over time; (b) shed light on the sensitivity toward different statistical models and outcome variables; and (c) evaluate the practical implications of (in)stable VA scores for individual schools.
Method: Utilizing the representative, longitudinal data from the Luxembourg School Monitoring Programme (LUCET, 2021), we examined the stability of school VA scores. We drew on two longitudinal data sets of students who participated in the standardized achievement tests in Grade 1 in 2014 or 2016 and then again in Grade 3 two years later (i.e., 2016 and 2018, respectively), with a total of 5875 students in 146 schools. School VA scores were calculated using classical approaches (i.e., linear regression and multilevel models) and one of the most commonly used machine learning approaches in educational research (i.e., random forests).
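The classical VA approach described above can be sketched in a few lines: regress Grade 3 achievement on Grade 1 achievement (plus background covariates in a real analysis) and average the residuals within each school. The following is an illustrative sketch on synthetic data only; the variable names, effect sizes, and sample sizes are invented for demonstration and do not reproduce the authors' ÉpStan pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a longitudinal cohort: hypothetical numbers,
# not the ÉpStan data (which comprised 5875 students in 146 schools).
n_schools, per_school = 20, 30
school = np.repeat(np.arange(n_schools), per_school)
prior = rng.normal(500, 100, school.size)        # Grade 1 achievement
school_effect = rng.normal(0, 10, n_schools)     # simulated "true" value added
grade3 = 0.7 * prior + school_effect[school] + rng.normal(0, 30, school.size)

# Linear-regression VA: predict Grade 3 scores from prior achievement,
# then average the residuals (observed minus expected) per school.
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, grade3, rcond=None)
residuals = grade3 - X @ beta
va_scores = np.array([residuals[school == s].mean()
                      for s in range(n_schools)])

# Stability over time would then be assessed by correlating the VA score
# vectors of two cohorts, e.g. np.corrcoef(va_2016, va_2018)[0, 1].
print(va_scores.round(1))
```

Multilevel models would replace the per-school residual means with empirical-Bayes school intercepts, and a random forest would replace the linear predictor with a nonparametric one; the residual-averaging step stays the same.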
Results and Discussion: The overall stability over time across the VA models was moderate, with multilevel models showing greater stability than linear regression models and random forests. Stability differed across outcome measures and was higher for VA models with language achievement as an outcome variable as compared to those with mathematics achievement. Practical implications for schools and teachers will be discussed.
Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE) > Luxembourg Centre for Educational Testing (LUCET)
Systematic Identification of High "Value-Added" in Educational Contexts - SIVA

Fulltext file (open access): Emslander, Levy, Scherer, Brunner, & Fischbach (Sep 2021).pdf, publisher postprint, 105.23 kB


All documents in ORBilu are protected by a user license.