References of "Brunner, Martin"
Full Text
Peer Reviewed
How sensitive are the evaluations of a school's effectiveness to the selection of covariates in the applied value-added model?
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

in Educational Assessment, Evaluation and Accountability (2022)

There is no final consensus regarding which covariates should be used (in addition to prior achievement) when estimating value-added (VA) scores to evaluate a school's effectiveness. Therefore, we examined the sensitivity of evaluations of schools' effectiveness in math and language achievement to covariate selection in the applied VA model. Four covariate sets were systematically combined, including prior achievement from the same or a different domain, sociodemographic and sociocultural background characteristics, and domain-specific achievement motivation. School VA scores were estimated using longitudinal data from the Luxembourg School Monitoring Programme, with some 3600 students attending 153 primary schools in Grades 1 and 3. VA scores varied considerably, despite high correlations between VA scores based on the different sets of covariates (.66 < r < 1.00). The explained variance and consistency of school VA scores improved substantially when prior math and prior language achievement were included in VA models for math, and when prior language achievement together with sociodemographic and sociocultural background characteristics was included in VA models for language. These findings suggest that prior achievement in the same subject, the most commonly used covariate to date, may be insufficient to control for between-school differences in student intake when estimating school VA scores. We thus recommend using VA models with caution and applying VA scores for informative purposes rather than as a means to base accountability decisions upon.
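The covariate-adjustment logic behind school VA scores can be sketched in a few lines: regress later achievement on a covariate set, then average the residuals per school. The following is a minimal simulation, not the authors' actual models (which combine several covariate sets and account for the multilevel data structure); all data and parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_per_school = 20, 30
school = np.repeat(np.arange(n_schools), n_per_school)

prior = rng.normal(500, 100, size=school.size)      # Grade 1 achievement (covariate)
true_effect = rng.normal(0, 10, size=n_schools)     # simulated "true" school effect
outcome = (100 + 0.8 * prior                        # Grade 3 achievement
           + true_effect[school]
           + rng.normal(0, 30, size=school.size))

# OLS regression of the outcome on the covariate (with intercept)
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residuals = outcome - X @ beta

# School VA score = mean residual per school
va = np.array([residuals[school == s].mean() for s in range(n_schools)])
```

Adding or dropping covariates changes the residuals, and hence the VA scores; that sensitivity is exactly what the study above quantifies.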

Peer Reviewed
Academic Profile Development: An Investigation of Differentiation Processes Based on Students' Achievement and Grade Level
Breit, Moritz; Brunner, Martin; Fischbach, Antoine UL et al

Scientific Conference (2022, April 21)

Academic achievement profiles affect students' further development, e.g., by informing educational and professional choices. However, little is known about the mechanisms behind the development of academic profiles. For research on cognitive ability profiles, specifically differentiation processes, statistical tools have been developed. In the present study, we transfer these methods for differentiation research to academic achievement data. We examine differentiation depending on students' general level of achievement and grade level in a large Luxembourgish student sample. Students' achievement in German, French, and math was assessed within the Luxembourg School Monitoring Programme. We found more balanced academic profiles with increasing achievement level, more balanced profiles with increasing grade level, and a positive interaction effect.

Peer Reviewed
Are Value-Added Scores Stable Enough for High-Stakes Decisions?
Emslander, Valentin UL; Levy, Jessica UL; Scherer, Ronny et al

Scientific Conference (2022, March)

Theoretical Background: Can we quantify the effectiveness of a teacher or a school with a single number? Researchers in the field of value-added (VA) models may argue just that (e.g., Chetty et al., 2014; Kane et al., 2013). VA models are widely used for accountability purposes in education and quantify the value a teacher or a school adds to their students' achievement. For this purpose, these models predict achievement over time and attempt to control for factors that schools or teachers cannot influence (i.e., sociodemographic and sociocultural background). Following this logic, what is left must be due to teacher or school differences (see, e.g., Braun, 2005). To be used for high-stakes decision-making (e.g., teachers' tenure or the allocation of funding), VA models would need to be highly stable over time. School-level stability over time, however, has hardly been researched, and the existing findings are mixed, with some studies indicating high stability of school VA scores over time (Ferrão, 2012; Thomas et al., 2007) and others reporting a lack of stability (e.g., Gorard et al., 2013; Perry, 2016). Furthermore, as there is no consensus on which variables to use as independent or dependent variables in VA models (Everson, 2017; Levy et al., 2019), the stability of VA scores could vary between different outcome measures (e.g., language or mathematics). If VA models lack stability over time and across outcome measures, their use as the primary information for high-stakes decision-making is in question, and the inferences drawn from them could be compromised.

Questions: With these uncertainties in mind, we examine the stability of school VA scores over time and investigate the differences between language and mathematics achievement as outcome variables. Additionally, we demonstrate the real-life implications of (in)stable VA scores for single schools and point out an alternative, more constructive use of school VA models in educational research.

Method: To study the stability of school-level VA scores over time and across outcomes, we drew on a sample of 146 primary schools, using representative longitudinal data from the standardized achievement tests of the Luxembourg School Monitoring Programme (LUCET, 2021). These schools comprised a heterogeneous and multilingual sample of 7016 students. To determine the stability of VA scores in mathematics and languages over time, we based our analysis on two longitudinal datasets (2015 to 2017 and 2017 to 2019, respectively) and generated two VA scores per dataset, one for language and one for mathematics achievement. We then analyzed how many schools displayed stable VA scores in the respective outcomes over two years and compared the rank correlations of VA scores with language and mathematics achievement as outcome variables.

Results and Their Significance: Only 34-38% of the schools showed stable VA scores from Grade 1 to Grade 3, with moderate rank correlations of r = .37 for language and r = .34 for mathematics achievement. We therefore discourage using VA models as the only information for high-stakes educational decisions. Nonetheless, we argue that VA models could be employed to find genuinely effective teaching or school practices, especially in heterogeneous student populations such as Luxembourg's, in which educational disparities are an important topic already in primary school (Hoffmann et al., 2018). Consequently, we contrast the school climate and instructional quality of schools with stable high vs. low VA scores, which might be drivers of the differences between them.

Literature:
Braun, H. (2005). Using student progress to evaluate teachers: A primer on value-added models. Educational Testing Service.
Chetty, R., Friedman, J. N., & Rockoff, J. E. (2014). Measuring the impacts of teachers I: Evaluating bias in teacher value-added estimates. American Economic Review, 104(9), 2593–2632. https://doi.org/10.1257/aer.104.9.2593
Everson, K. C. (2017). Value-added modeling and educational accountability: Are we answering the real questions? Review of Educational Research, 87(1), 35–70. https://doi.org/10.3102/0034654316637199
Ferrão, M. E. (2012). On the stability of value added indicators. Quality & Quantity, 46(2), 627–637. https://doi.org/10.1007/s11135-010-9417-6
Gorard, S., Hordosy, R., & Siddiqui, N. (2013). How unstable are "school effects" assessed by a value-added technique? International Education Studies, 6(1), 1–9. https://doi.org/10.5539/ies.v6n1p1
Kane, T. J., McCaffrey, D. F., Miller, T., & Staiger, D. O. (2013). Have we identified effective teachers? Validating measures of effective teaching using random assignment. MET Project research paper. Bill & Melinda Gates Foundation. https://files.eric.ed.gov/fulltext/ED540959.pdf
Levy, J., Brunner, M., Keller, U., & Fischbach, A. (2019). Methodological issues in value-added modeling: An international review from 26 countries. Educational Assessment, Evaluation and Accountability, 31(3), 257–287. https://doi.org/10.1007/s11092-019-09303-w
LUCET. (2021). Épreuves Standardisées (ÉpStan). https://epstan.lu
Perry, T. (2016). English value-added measures: Examining the limitations of school performance measurement. British Educational Research Journal, 42(6), 1056–1080. https://doi.org/10.1002/berj.3247
Thomas, S., Peng, W. J., & Gray, J. (2007). Modelling patterns of improvement over time: Value added trends in English secondary school performance across ten cohorts. Oxford Review of Education, 33(3), 261–295. https://doi.org/10.1080/03054980701366116
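The rank-correlation stability check described in the Method section can be illustrated with a toy simulation (not the ÉpStan data): give each school a persistent component plus cohort-specific noise, then rank-correlate the two cohorts' VA scores. The noise levels below are made up, chosen only so the result lands in a moderate range, loosely echoing the r = .34-.37 reported above.

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools = 146
stable = rng.normal(0, 1, n_schools)                 # persistent school component
va_cohort1 = stable + rng.normal(0, 1.5, n_schools)  # VA, first cohort
va_cohort2 = stable + rng.normal(0, 1.5, n_schools)  # VA, second cohort

def ranks(x):
    # rank transform (ties are not expected with continuous scores)
    return x.argsort().argsort().astype(float)

# Spearman rank correlation = Pearson correlation of the ranks
rho = np.corrcoef(ranks(va_cohort1), ranks(va_cohort2))[0, 1]
```

With a moderate rho like this, a school's rank in one cohort is a weak guide to its rank in the next, which is the core argument against basing high-stakes decisions on a single cohort's VA score.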

Peer Reviewed
Stability of Primary School Value-Added Scores over Time: A Comparison Between Math and Language Achievement as Outcome Variables
Emslander, Valentin UL; Levy, Jessica UL; Scherer, Ronny et al

Scientific Conference (2021, November)

Value-added (VA) models are widely used for accountability purposes in education. Tracking a teacher's or a school's VA score over time often forms the basis for high-stakes decision-making and can determine whether teachers keep their jobs or schools receive certain funding. Despite this high-stakes application, the stability of VA scores over time has not yet been investigated for primary schools. Moreover, it is unclear whether different outcome measures (e.g., language and mathematics) differ in their stability over time. In the present study, we aimed to clarify the stability of VA scores over time and to investigate differences across outcome variables. Furthermore, we wanted to showcase the real-life implications of (in)stable VA scores for single schools, with a focus on an informative rather than an evaluative use of VA scores. The exploration of school VA scores in primary schools is especially relevant for heterogeneous student populations, for instance in Luxembourg. We therefore drew on representative longitudinal data from the standardized achievement tests of the Luxembourg School Monitoring Programme and examined the stability of school VA scores over two years in 146 schools (N = 7016 students). The overall stability, as measured by correlation coefficients, was moderate, with r = .37 for VA scores in language and r = .34 for VA scores in mathematics from Grade 1 to Grade 3. Real-life implications for schools will be discussed.

Full Text
Peer Reviewed
Stability of Value-Added Models: Comparing Classical and Machine Learning Approaches
Emslander, Valentin UL; Levy, Jessica UL; Scherer, Ronny et al

Scientific Conference (2021, September)

Background: What is the value that teachers or schools add to the evolution of students' performance? Value-added (VA) modeling aims to answer this question by quantifying the effect of pedagogical actions on students' achievement, independent of students' backgrounds (e.g., Braun, 2005). A plethora of VA models exist, and several outcome measures are in use to estimate VA scores, yet without consensus on the model specification (Everson, 2017; Levy et al., 2019). Furthermore, it is unclear whether the most frequently used VA models (i.e., multilevel, linear regression, and random forest models) and outcome measures (i.e., language and mathematics achievement) indicate a similar stability of VA scores over time.

Objectives: Drawing on data from a highly diverse and multilingual school setting, where leveling out the influence of students' backgrounds is of special interest, we aim to (a) clarify the stability of school VA scores over time; (b) shed light on the sensitivity toward different statistical models and outcome variables; and (c) evaluate the practical implications of (in)stable VA scores for individual schools.

Method: Utilizing representative, longitudinal data from the Luxembourg School Monitoring Programme (LUCET, 2021), we examined the stability of school VA scores. We drew on two longitudinal datasets of students who participated in the standardized achievement tests in Grade 1 in 2014 or 2016 and again in Grade 3 two years later (i.e., in 2016 and 2018, respectively), with a total of 5875 students in 146 schools. School VA scores were calculated using classical approaches (i.e., linear regression and multilevel models) and one of the machine learning approaches most commonly used in educational research (i.e., random forests).

Results and Discussion: The overall stability over time across the VA models was moderate, with multilevel models showing greater stability than linear regression models and random forests. Stability differed across outcome measures and was higher for VA models with language achievement as the outcome variable than for those with mathematics achievement. Practical implications for schools and teachers will be discussed.

Peer Reviewed
Tackling educational inequalities using school effectiveness measures
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

Scientific Conference (2020, November 11)

Full Text
Peer Reviewed
Circadian preference as a typology: Latent-class analysis of adolescents' morningness/eveningness, relation with sleep behavior, and with academic outcomes
Preckel, Franzis; Fischbach, Antoine UL; Scherrer, Vsevolod et al

in Learning and Individual Differences (2020), 78

Full Text
Peer Reviewed
Contrasting Classical and Machine Learning Approaches in the Estimation of Value-Added Scores in Large-Scale Educational Data
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

in Frontiers in Psychology (2020), 11

There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, they are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the social sciences and education, and new types of data require new analytical approaches. Such techniques have already evolved in fields with a long tradition of processing big data (e.g., gene technology). The objective of the present paper is to apply these "imported" techniques to education data, more precisely to VA scores, and to assess when and how they can extend or replace the classical psychometric toolbox. The models examined include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forests, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Programme in Grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions as well as the different machine learning models. However, across all schools, school VA scores from the different model types correlated highly. Yet the percentage of disagreements relative to multilevel models was not trivial, and the real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
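The model-comparison idea can be mimicked in miniature: estimate school VA scores from a linear and from a quadratic regression on the same simulated data and correlate the two sets of scores. This is a sketch with made-up data and a mild non-linearity, not the study's models (which additionally included multilevel and machine learning approaches); it shows how different specifications can correlate highly overall while still shifting individual schools' scores.

```python
import numpy as np

rng = np.random.default_rng(3)
n_schools, n_per_school = 153, 20
school = np.repeat(np.arange(n_schools), n_per_school)

prior = rng.normal(0, 1, school.size)
effect = rng.normal(0, 0.3, n_schools)
# mildly non-linear relation between prior and later achievement
outcome = (0.7 * prior + 0.1 * prior**2
           + effect[school]
           + rng.normal(0, 0.5, school.size))

def va_scores(covariates):
    # OLS with intercept, then school-mean residuals as VA scores
    X = np.column_stack([np.ones(school.size), covariates])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    resid = outcome - X @ beta
    return np.array([resid[school == s].mean() for s in range(n_schools)])

va_linear = va_scores(prior)
va_poly = va_scores(np.column_stack([prior, prior**2]))
agreement = np.corrcoef(va_linear, va_poly)[0, 1]       # overall agreement
max_shift = np.abs(va_linear - va_poly).max()           # largest single-school shift
```

Even with high agreement, `max_shift` is what matters for a single school sitting near a decision threshold.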

Peer Reviewed
Similarities and differences of value-added scores from models with different covariates: A cluster analysis
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, November 06)

Peer Reviewed
Value-added models: To what extent do estimates of school effectiveness depend on the selection of covariates?
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, September)

Peer Reviewed
Value-added modeling in primary school: What covariates to include?
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, August)

Full Text
Peer Reviewed
Methodological Issues in Value-Added Modeling: An International Review from 26 Countries
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

in Educational Assessment, Evaluation and Accountability (2019), 31(3), 257-287

Value-added (VA) modeling can be used to quantify teacher and school effectiveness by estimating the effect of pedagogical actions on students' achievement. It is gaining importance in educational evaluation, teacher accountability, and high-stakes decisions. We analyzed 370 empirical studies on VA modeling, focusing on modeling and methodological issues to identify key factors for improvement. The studies stemmed from 26 countries (68% from the USA). Most studies applied linear regression or multilevel models. Most studies (85%) included prior achievement as a covariate, but only 2% included noncognitive predictors of achievement (e.g., personality or affective student variables). Fifty-five percent of the studies did not apply statistical adjustments (e.g., shrinkage) to increase the precision of effectiveness estimates, and 88% included no model diagnostics. We conclude that research on VA modeling can be significantly enhanced regarding the inclusion of covariates, model adjustment and diagnostics, and the clarity and transparency of reporting.
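The "shrinkage" adjustment mentioned in the review can be illustrated with a simple empirical-Bayes sketch: noisy school means are pulled toward the grand mean in proportion to their estimated reliability, so that schools with little data or extreme scores are adjusted most. The data below are simulated; a real application would typically obtain shrunken estimates directly from a multilevel model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_students = 50, 15
true_effect = rng.normal(0, 5, n_schools)
# simulated student scores: school effect plus large student-level noise
scores = true_effect[:, None] + rng.normal(0, 20, (n_schools, n_students))

raw = scores.mean(axis=1)                                   # unadjusted school means
error_var = scores.var(axis=1, ddof=1).mean() / n_students  # sampling variance of a mean
between_var = max(raw.var(ddof=1) - error_var, 0.0)         # estimated true between-school variance
reliability = between_var / (between_var + error_var)       # shrinkage factor in [0, 1)

# pull each raw mean toward the grand mean by the reliability factor
shrunk = reliability * (raw - raw.mean()) + raw.mean()
```

The shrunken estimates have a smaller spread than the raw means, which is exactly the precision gain the review finds most studies forgo.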

Peer Reviewed
The use of value-added models for the identification of schools that perform "against the odds"
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Poster (2019, July)

Value-added (VA) modeling aims to quantify the effect of pedagogical actions on students' achievement, independent of students' backgrounds. VA modeling is primarily used for accountability and high-stakes decisions. To date, there seems to be no consensus concerning the calculation of VA models. Our study aims to systematically analyze and compare different school VA models using longitudinal large-scale data from the Luxembourg School Monitoring Programme. Regarding the model covariates, first findings indicate the importance of language (i.e., the language(s) spoken at home and prior language achievement) in VA models with either language or math achievement as the dependent variable, with the highest amount of explained variance in VA models for language. Concerning the congruence of different VA approaches, we found high correlations between school VA scores from the different models, but also large ranges between VA scores for single schools. We conclude that VA models should be used with caution and with awareness of the differences that may arise from methodological choices. Finally, we discuss the idea that VA models could be used to identify schools that perform "against the odds", especially those schools with positive VA scores over several years.
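The "against the odds" criterion suggested above can be sketched as flagging schools whose VA score stays positive across several consecutive years, which is far less likely to happen by chance than a single positive year. Purely hypothetical, simulated VA scores:

```python
import numpy as np

rng = np.random.default_rng(7)
n_schools, n_years = 153, 3
va = rng.normal(0, 1, (n_schools, n_years))  # simulated yearly school VA scores

# flag schools with a positive VA score in every year observed
against_the_odds = np.all(va > 0, axis=1)
n_flagged = int(against_the_odds.sum())
```

Under pure chance, roughly one school in eight clears three independent positive years, so a persistent positive record carries real signal.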

Peer Reviewed
Exploration of Different School Value-Added Models in a Highly Heterogeneous Educational Context
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, April)

Peer Reviewed
Modelling "Value Added" in Primary and Secondary Education: A Review of 674 Publications
Levy, Jessica UL; Gamo, Sylvie UL; Keller, Ulrich UL et al

Scientific Conference (2018, January)

The "value-added" family of statistical approaches aims to quantify the effect of pedagogical actors on students' performance, independent of students' backgrounds (e.g., Braun, 2005), that is, to determine the part of a student's performance attributable to studying with a given teacher and/or attending a given school. Once determined, these value-added indices are often used for accountability decisions (e.g., Sanders, 2000). The idea is to provide a standardized evaluation of the quality of teachers or schools based on the evolution of students' results. Although value-added measures have become more popular in recent years, there is no consensus on the method for calculating them, nor on which explanatory variables to include (e.g., Newton et al., 2010). The aim of our study is to review the literature on value added in primary and secondary education. To this end, we searched the ERIC, Scopus, PsycINFO, and Psyndex databases and rigorously analyzed and classified 674 studies from 32 different countries. Half of the studies reviewed concern value added at the teacher level; the others concern value added at the school or principal level. Of these, 370 studies used empirical data to calculate value-added indices. In a number of studies the variables used are specified, but in approximately 15% of the publications the statistical model used is not specified. Most studies used students' performance in previous years as a predictor; by contrast, cognitive or motivational student variables were almost never taken into account. Given the high political stakes attached to value-added measures, this literature review underscores the need for greater transparency, rigor, and consensus, above all on the methodological level.
