References of "Levy, Jessica 50008801"
Full Text
Tertium non datur: Various aspects of value-added (VA) models used as measures of educational effectiveness
Levy, Jessica UL

Doctoral thesis (2020)

Value-added (VA) models are measures of educational effectiveness that aim to estimate the “value” teachers or schools add to students’ achievement, independent of students’ backgrounds. Statistically speaking, teacher or school VA scores are calculated as the part of an outcome variable that cannot be explained by the covariates in the VA model (i.e., the residual). Teachers or schools are classified as effective (or ineffective) if they have a positive (or negative) effect on students’ achievement compared to a previously specified norm value. Although VA models have gained popularity in recent years, there is a lack of consensus concerning various aspects of VA scores. The present dissertation aims to shed light on these aspects, including the state of the art of VA research in the international literature, covariate choice, and model selection for the estimation of VA scores.

In a first step, a systematic literature review was conducted in which 370 studies from 26 countries were classified, focusing on methodological issues (Study 1 of the present dissertation). Results indicated no consensus concerning the applied statistical model type (the majority applied linear regression, followed by multilevel models). Concerning covariate choice, most studies used prior achievement as a covariate, cognitive and/or motivational student data were hardly considered, and there was no consensus on the inclusion or exclusion of students’ background variables. Based on these findings, it was suggested that VA models are better suited to improving the quality of teaching than to accountability and decision-making purposes.

Secondly, based on one of the open questions resulting from Study 1 (i.e., covariate choice), the aim of Study 2 was to systematically compare different covariate combinations in the estimation of school VA models. Based on longitudinal data from primary school students participating in the Luxembourg School Monitoring Programme in Grades 1 and 3, three covariate sets were found to be essential when calculating school VA scores with math or language achievement as the dependent variable: prior language achievement, prior math achievement, and students’ sociodemographic and sociocultural background. However, the evaluation of individual schools’ effectiveness varied widely depending on the covariate set that was chosen, casting further doubt on the use of VA scores for accountability purposes.

Thirdly, the aim of Study 3 was to investigate statistical model selection, as Study 1 showed no consensus on which model types are most suitable for the estimation of VA scores, with the majority of studies applying linear regression or multilevel models. These classical linear models, along with nonlinear models and different types of machine learning models, were systematically compared to each other, with covariates kept constant across models (based on the results from Study 2). Multilevel models led to the most accurate prediction of students’ achievement. However, as school VA scores varied depending on specific model choices, and as the results can only be generalized to a Luxembourgish sample, it was suggested that future research make the model selection process transparent and include different specifications in order to obtain ranges of potential VA scores.

In conclusion, all three studies imply that the application of VA models for decision-making and accountability should be critically discussed and that VA scores should not be used as the only measure for accountability or high-stakes decisions. In addition, it can be concluded that VA scores are more suitable for informative purposes. Thus, the findings from the present dissertation prepare the ground for future research in which schools with stable, high VA scores can be investigated further (both qualitatively and quantitatively) to study their pedagogical strategies and learn from them.
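The residual-based definition above can be illustrated with a minimal sketch in Python. The column names (math_g3, math_g1, lang_g1, ses, school_id), the ordinary least-squares specification, and the statsmodels/pandas tooling are illustrative assumptions, not the exact setup used in the dissertation:

# Minimal sketch: school VA scores as mean regression residuals.
# Hypothetical columns: math_g3 = Grade 3 math score (outcome),
# math_g1 / lang_g1 = prior achievement, ses = background, school_id = school.
import pandas as pd
import statsmodels.formula.api as smf

def school_va_scores(df: pd.DataFrame) -> pd.Series:
    # Regress the outcome on prior achievement and background covariates.
    fit = smf.ols("math_g3 ~ math_g1 + lang_g1 + ses", data=df).fit()
    # The residual is the part of achievement the covariates cannot explain.
    df = df.assign(residual=fit.resid)
    # A school's VA score is its students' mean residual: positive means the
    # school's students score above expectation, negative means below.
    return df.groupby("school_id")["residual"].mean()

Whether such scores are compared to zero or to another norm value, which covariates enter the regression, and whether a multilevel model with shrinkage is used instead are exactly the modeling choices examined in Studies 1 to 3.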

Peer Reviewed
Tackling educational inequalities using school effectiveness measures
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

Scientific Conference (2020, November 11)

Full Text
Peer Reviewed
Contrasting Classical and Machine Learning Approaches in the Estimation of Value-Added Scores in Large-Scale Educational Data
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

in Frontiers in Psychology (2020), 11

There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, they are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of the social sciences and education. New types of data require new analytical approaches, and such techniques have already evolved in fields with a long tradition of crunching big data (e.g., gene technology). The objective of the present paper is to apply these “imported” techniques to education data, more precisely to VA scores, and to assess when and how they can extend or replace the classical psychometric toolbox. The models compared include linear and non-linear methods and extend the classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data from 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Programme in Grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as the different machine learning models. Across all schools, however, school VA scores from the different model types correlated highly. Yet the percentage of disagreements relative to the multilevel models was not trivial, and the real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
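As a rough illustration of the comparison reported above, the following sketch contrasts a random-intercept multilevel model with a cross-validated random forest and then correlates the resulting school VA scores. The column names (score_g3, score_g1, ses, school_id), the covariate set, and the forest settings are assumptions for illustration, not the specifications used in the paper:

# Sketch: school VA scores from a multilevel model vs. a random forest.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def compare_va_models(df: pd.DataFrame) -> float:
    # Random-intercept multilevel model: the school VA score is the predicted
    # random intercept (a shrunken, empirical-Bayes estimate).
    mlm = smf.mixedlm("score_g3 ~ score_g1 + ses", df,
                      groups=df["school_id"]).fit()
    va_mlm = pd.Series({school: effects.iloc[0]
                        for school, effects in mlm.random_effects.items()})

    # Random forest with the same covariates: out-of-sample predictions via
    # cross-validation, then the mean residual per school as its VA score.
    X, y = df[["score_g1", "ses"]], df["score_g3"]
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    pred = cross_val_predict(rf, X, y, cv=5)
    va_rf = (y - pred).groupby(df["school_id"]).mean()

    # Agreement between the two model types across schools.
    return va_mlm.corr(va_rf)

A high correlation between va_mlm and va_rf mirrors the finding that model types agree on the overall pattern, while per-school differences (not computed here) are what drive the non-trivial disagreements reported in the paper.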

Peer Reviewed
Similarities and differences of value-added scores from models with different covariates: A cluster analysis
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, November 06)

Peer Reviewed
Value-added models: To what extent do estimates of school effectiveness depend on the selection of covariates?
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, September)

Peer Reviewed
Value-added modeling in primary school: What covariates to include?
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, August)

Full Text
Peer Reviewed
Methodological Issues in Value-Added Modeling: An International Review from 26 Countries
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

in Educational Assessment, Evaluation and Accountability (2019), 31(3), 257-287

Value-added (VA) modeling can be used to quantify teacher and school effectiveness by estimating the effect of pedagogical actions on students’ achievement. It is gaining increasing importance in educational evaluation, teacher accountability, and high-stakes decisions. We analyzed 370 empirical studies on VA modeling, focusing on modeling and methodological issues to identify key factors for improvement. The studies stemmed from 26 countries (68% from the USA). Most studies applied linear regression or multilevel models. Most studies (i.e., 85%) included prior achievement as a covariate, but only 2% included noncognitive predictors of achievement (e.g., personality or affective student variables). Fifty-five percent of the studies did not apply statistical adjustments (e.g., shrinkage) to increase precision in effectiveness estimates, and 88% included no model diagnostics. We conclude that research on VA modeling can be significantly enhanced regarding the inclusion of covariates, model adjustment and diagnostics, and the clarity and transparency of reporting.
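The shrinkage adjustment mentioned here has a compact standard form for a random-intercept model; the notation below is introduced for illustration and is not taken from the reviewed studies. Each school's mean residual is pulled toward zero in proportion to its reliability:

\[
\hat{u}_j = \lambda_j \,\bar{r}_j,
\qquad
\lambda_j = \frac{\tau^2}{\tau^2 + \sigma^2 / n_j},
\]

where \(\bar{r}_j\) is the mean residual of the \(n_j\) students in school \(j\), \(\tau^2\) is the between-school variance, and \(\sigma^2\) is the within-school residual variance. Small schools are shrunk more strongly, which reflects the gain in precision that studies without such adjustments forgo.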

Peer Reviewed
The use of value-added models for the identification of schools that perform “against the odds”
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Poster (2019, July)

Value-added (VA) modeling aims to quantify the effect of pedagogical actions on students’ achievement, independent of students’ backgrounds. VA modeling is primarily used for accountability and high-stakes decisions. To date, there seems to be no consensus concerning the calculation of VA models. Our study aims to systematically analyze and compare different school VA models using longitudinal large-scale data from the Luxembourg School Monitoring Programme. Regarding the model covariates, first findings indicate the importance of language (i.e., language(s) spoken at home and prior language achievement) in VA models with either language or math achievement as the dependent variable, with the highest amount of explained variance in the VA models for language. Concerning the congruence of the different VA approaches, we found high correlations between school VA scores from the different models, but also wide ranges of VA scores for individual schools. We conclude that VA models should be used with caution and with awareness of the differences that may arise from methodological choices. Finally, we discuss the idea that VA models could be used to identify schools that perform “against the odds”, especially schools that have positive VA scores over several years.
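The congruence analysis described here boils down to simple operations on a schools-by-models table of VA scores. A minimal sketch, assuming a hypothetical layout with one row per school and one column per model specification:

# Sketch: agreement between VA model specifications and per-school spread.
# Assumed layout: va has one row per school, one column per model specification.
import pandas as pd

def va_agreement(va: pd.DataFrame) -> tuple[pd.DataFrame, pd.Series]:
    correlations = va.corr()                        # pairwise agreement across schools
    school_range = va.max(axis=1) - va.min(axis=1)  # spread of scores per school
    return correlations, school_range

High off-diagonal correlations combined with wide per-school ranges reproduce the pattern reported here: the models largely agree on the overall ranking but can disagree substantially about individual schools.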

Peer Reviewed
Exploration of Different School Value-Added Models in a Highly Heterogeneous Educational Context
Levy, Jessica UL; Brunner, Martin; Keller, Ulrich UL et al

Scientific Conference (2019, April)

Peer Reviewed
Modelling “value added” in primary and secondary education: A review of 674 publications
Levy, Jessica UL; Gamo, Sylvie UL; Keller, Ulrich UL et al

Scientific Conference (2018, January)

The “value-added” statistical approach aims to quantify the effect of pedagogical actors on students’ achievement, independent of their backgrounds (e.g., Braun, 2005), that is, to determine the part of a student’s achievement attributable to studying with a particular teacher and/or in a particular school. Once estimated, these value-added scores are often used for accountability decisions (e.g., Sanders, 2000). The idea is to provide a standardized evaluation of teacher or school quality based on the evolution of students’ results. Even though value-added measures have become more popular in recent years, there is no consensus on the method used to calculate them, nor on the inclusion of explanatory variables (e.g., Newton et al., 2010). The aim of our study is to conduct a literature review on value added in primary and secondary education. To this end, we used the ERIC, Scopus, PsycINFO, and Psyndex databases and rigorously analyzed and classified 674 studies from 32 different countries. Half of the studies reviewed concern value added at the teacher level; the others concern value added at the school or principal level. Of these, 370 studies used empirical data to calculate value-added scores. In a number of studies the variables used are specified, but in approximately 15% of the publications the statistical model used is not specified. Most studies used students’ prior achievement as a predictor; by contrast, cognitive or motivational student variables were almost never taken into consideration. In view of the important policy stakes attached to value-added measures, this literature review underlines the need for more transparency, rigor, and consensus, especially at the methodological level.

Peer Reviewed
Value-Added Modelling in Primary and Secondary School: An Integrative Review of 674 Publications
Levy, Jessica UL; Keller, Ulrich UL; Brunner, Martin et al

Scientific Conference (2017, December)

Value-added (VA) modelling aims to quantify the effect of pedagogical actions on students’ achievement, independent of students’ backgrounds (e.g., [1]); in other words, VA strives to model the added value of teaching. VA is typically used for teacher and/or school accountability (e.g., [2]). Although VA models have gained popularity in recent years (a substantial increase in publications can be observed over the last decade), there is no consensus on how to calculate VA, nor on whether and which covariates should be included in the statistical models (e.g., [3]). The aim of the present study is to conduct an integrative review of VA modelling in primary and secondary education, which to date has been missing. Starting with an exhaustive literature search in the ERIC, Scopus, PsycINFO, and Psyndex databases, we reviewed and thoroughly classified 674 VA publications from 32 different countries. Half of the studies investigated VA models at the teacher level; the remainder looked at the school or principal level. Of these, 370 studies used empirical data to calculate VA models. Most of these studies described their covariates, but approximately 15% did not specify the statistical model used. Most studies used prior achievement as a covariate, but cognitive and/or motivational student data were almost never taken into consideration. Moreover, most of the studies did not adjust for methodological issues such as missing data or measurement error. To conclude, given the high relevance of VA (it is primarily used for high-stakes decisions), more transparency, rigor, and consensus are needed, especially concerning methodological details.

References
[1] Braun, H. I. (2005). Using student progress to evaluate teachers: A primer on value-added models. Princeton, NJ: Educational Testing Service.
[2] Sanders, W. L. (2000). Value-added assessment from student achievement data: Opportunities and hurdles. Journal of Personnel Evaluation in Education, 14(4), 329–339.
[3] Newton, X., Darling-Hammond, L., Haertel, E., & Thomas, E. (2010). Value-added modeling of teacher effectiveness: An exploration of stability across models and contexts. Education Policy Analysis Archives, 18(23).
