References of "Sonnleitner, Philipp 50003122"
Peer Reviewed
Fairness as seen by students - a differentiated look at perceived assessment fairness by 7th and 9th graders in Luxembourg
Sonnleitner, Philipp UL; Inostroza Fernandez, Pamela Isabel UL; Wollschläger, Rachel UL

Scientific Conference (2022, November 10)

Assessment is probably the central factor in every educational biography: on the one hand, through direct consequences for school career decisions, on the other hand, through repercussions on each student’s self-concept in the respective subject, on one’s own work behavior, and on the perception of institutional fairness in general. A crucial factor is the subjective, perceived fairness of assessment, which has been shown to influence students’ satisfaction, motivation, and attitudes toward learning (Chory-Assad, 2002; Wendorf & Alexander, 2005). The current study examines how Luxembourgish students experience fairness of assessment on the basis of representative samples of the 7ième (N > 700 students) and 9ième/5ième (N > 2,200; 35% of the total cohort) and gives a first insight into the connection with school interest and self-concept. Special attention is given to the heterogeneity of the Luxembourgish student population: the extent to which language background, socioeconomic status, and gender are related to these perceptions of fairness will be analyzed. Data was collected as part of the nationwide Épreuves standardisées in fall 2021 using the Fairness Barometer (Sonnleitner & Kovacs, 2020), a standardized instrument to measure informational and procedural fairness in student assessment. The analyses are theoretically based on Classroom Justice Theory and educational psychology (Chory-Assad & Paulsel, 2004; Chory, 2007; Duplaga & Astani, 2010) and utilize latent variable models (SEM) to study the complex interplay between perceived assessment practices and students’ school-related motivational factors. The insights offered by this study are internationally unique in their scope and provide a first glimpse of the fairness perceptions of groups of Luxembourgish students known to be in disadvantaged situations. Results aim to sensitize especially active teachers and educators to the central importance of assessment in schools and offer concrete advice on how to improve it.
References:
Chory, R. M. (2007). Enhancing student perceptions of fairness: the relationship between instructor credibility and classroom justice. Commun. Educ. 56, 89–105. doi: 10.1080/03634520600994300
Chory-Assad, R. M., & Paulsel, M. L. (2004). Classroom justice: student aggression and resistance as reactions to perceived unfairness. Commun. Educ. 53, 253–273. doi: 10.1080/0363452042000265189
Chory-Assad, R. M. (2002). Classroom justice: perceptions of fairness as a predictor of student motivation, learning, and aggression. Commun. Q. 50, 58–77. doi: 10.1080/01463370209385646
Duplaga, E. A., & Astani, M. (2010). An exploratory study of student perceptions of which classroom policies are fairest. Decision Sci. J. Innov. Educ. 8, 9–33. doi: 10.1111/j.1540-4609.2009.00241.x
Sonnleitner, P., & Kovacs, C. (2020, February). Differences between students’ and teachers’ fairness perceptions: Exploring the potential of a self-administered questionnaire to improve teachers’ assessment practices. In Frontiers in Education (Vol. 5, p. 17). Frontiers Media SA.
Wendorf, C. A., & Alexander, S. (2005). The influence of individual- and class-level fairness-related perceptions on student satisfaction. Contemp. Educ. Psychol. 30, 190–206. doi: 10.1016/j.cedpsych.2004.07.003
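The kind of latent-variable (SEM) analysis mentioned in this abstract can be sketched as follows. This is purely illustrative: the indicator names, the data file, and the choice of the semopy package are assumptions, not the authors' actual model or data.

```python
# Minimal sketch of a latent-variable model linking perceived fairness facets to
# school interest; variable names, data and the semopy package are assumptions.
import pandas as pd
from semopy import Model

model_desc = """
# measurement part (hypothetical indicator names)
InformationalFairness =~ fair_info1 + fair_info2 + fair_info3
ProceduralFairness =~ fair_proc1 + fair_proc2 + fair_proc3
SchoolInterest =~ interest1 + interest2 + interest3
# structural part: is perceived fairness related to school interest?
SchoolInterest ~ InformationalFairness + ProceduralFairness
"""

df = pd.read_csv("fairness_barometer_items.csv")  # hypothetical item-level data
sem = Model(model_desc)
sem.fit(df)
print(sem.inspect())  # factor loadings and structural regression estimates
```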

Peer Reviewed
Using Diagnostic Classification Models to map first graders’ cognitive development pathways in the Luxembourgish school monitoring program: a pilot study in the domain of numbers & operations
Inostroza Fernandez, Pamela Isabel UL; Michels, Michael Andreas UL; Sonnleitner, Philipp UL

Scientific Conference (2022, November)

Educational large-scale assessments aim to evaluate school systems’ effectiveness by typically looking at aggregated levels of students’ performance. The developed assessment tools or tests are not intended or optimized to be used for diagnostic purposes on an individual level. In most cases, the underlying theoretical framework is based on national curricula and therefore too blurry for diagnostic test construction, and test length is too short to draw reliable inferences on the individual level. This lack of individual information is often unsatisfying, especially for participating students and teachers who invest a considerable amount of time and effort, not to speak of the tremendous organizational work needed to realize such assessments. The question remains whether the evaluation could be used in an optimized way to offer more differentiated information on students’ specific skills. The present study explores the potential of Diagnostic Classification Models (DCM) in this regard, since they offer crucial information for policy makers, educators, and students themselves. Instead of a ranking on, e.g., an overall mathematics ability, DCMs identify student mastery profiles of subskills, providing a rich base for further targeted interventions and instruction (Rupp, Templin & Henson, 2010; von Davier & Lee, 2019). A prerequisite for applying such models are well-developed and cognitively described items that map the assessed ability on a fine-grained level. In the present study, we drew on 104 items that were developed on the basis of detailed cognitive item models for basic Grade 1 competencies, such as counting, as well as decomposition and addition with low and high numbers (Fuson, 1988; Fritz & Ricken, 2008; Krajewski & Schneider, 2009). Those items were spread over a main test plus 6 different test booklets and administered to a total of 5,963 first graders within the Luxembourgish national school monitoring Épreuves standardisées. Results of this pilot study are highly promising, revealing distinct patterns of student behavior: the final DCM was able to distinguish between different developmental stages in the domain of numbers & operations, on the group as well as on the individual level. Whereas roughly 14% of students did not master any of the assessed competencies, 34% mastered all of them, including addition with high numbers. The remaining 52% reached different stages of competency development: 8% were classified as mastering only counting, 15% also mastered addition with low numbers, and 20% additionally mastered decomposition. These patterns reflect developmental models of children’s counting and concept of number (Fritz & Ricken, 2008; see also Braeuning et al., 2021). This information could potentially be used to substantially enhance large-scale assessment feedback and to offer teachers further guidance on what to focus on when teaching. To conclude, the present results make a convincing case that using fine-grained cognitive models for item development and applying DCMs that are able to statistically capture these nuances in student response behavior might be worth the (substantially) increased effort.
References:
Braeuning, D., et al. (2021). Long-term relevance and interrelation of symbolic and non-symbolic abilities in mathematical-numerical development: Evidence from large-scale assessment data. Cognitive Development, 58. https://doi.org/10.1016/j.cogdev.2021.101008
Fritz, A., & Ricken, G. (2008). Rechenschwäche. utb GmbH.
Fuson, K. C. (1988). Children's counting and concepts of number. Springer-Verlag Publishing.
Rupp, A. A., Templin, J. L., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York, NY: Guilford Press.
von Davier, M., & Lee, Y. S. (2019). Handbook of diagnostic classification models. Cham: Springer International Publishing.
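To make the idea of attribute mastery profiles concrete, here is a minimal, self-contained sketch of DINA-type classification, the simplest member of the DCM family. The Q-matrix, slip/guess values, attribute labels, and response pattern below are invented for illustration and are not the study's item pool or estimates.

```python
# Illustrative DINA-style classification: score every possible mastery profile for a
# response pattern and return the most likely one (MAP under a uniform prior).
import itertools
import numpy as np

# Q-matrix: which attribute each item requires
# attributes: 0 = counting, 1 = addition (low numbers), 2 = decomposition
Q = np.array([
    [1, 0, 0],   # item 1 requires counting only
    [1, 1, 0],   # item 2 requires counting + low-number addition
    [1, 1, 1],   # item 3 requires all three attributes
    [0, 1, 0],   # item 4 requires low-number addition only
])
slip = np.array([0.10, 0.15, 0.20, 0.10])   # P(wrong | all required attributes mastered)
guess = np.array([0.20, 0.15, 0.10, 0.25])  # P(correct | some required attribute missing)

def profile_likelihood(responses, profile):
    """Likelihood of a 0/1 response vector under one mastery profile."""
    # eta_i = 1 if the profile masters every attribute that item i requires
    eta = np.all(profile >= Q, axis=1)
    p_correct = np.where(eta, 1.0 - slip, guess)
    return np.prod(np.where(responses == 1, p_correct, 1.0 - p_correct))

def classify(responses):
    """Return the most likely mastery profile for one student."""
    profiles = [np.array(p) for p in itertools.product([0, 1], repeat=Q.shape[1])]
    likelihoods = [profile_likelihood(responses, p) for p in profiles]
    return profiles[int(np.argmax(likelihoods))]

# Example: a student who solves items 1, 2 and 4 but fails item 3
print(classify(np.array([1, 1, 0, 1])))  # -> [1 1 0]: counting + low-number addition
```

Profiles such as [1 0 0], [1 1 0], and [1 1 1] correspond directly to the developmental stages reported in the abstract; an operational DCM would of course estimate slip and guess parameters from the data rather than fix them.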

Peer Reviewed
Validation and Psychometric Analysis of 32 cognitive item models spanning Grades 1 to 7 in the mathematical domain of numbers & operations
Michels, Michael Andreas UL; Hornung, Caroline UL; Gamo, Sylvie UL et al

Scientific Conference (2022, November)

Today’s educational field has a tremendous hunger for valid and psychometrically sound items to reliably track and model students’ learning processes. Educational large-scale assessments, formative classroom assessment, and, lately, digital learning platforms require a constant stream of high-quality and unbiased items. However, traditional development of test items ties up a significant amount of time from subject matter experts, pedagogues and psychometricians and might no longer be suited to today’s demands. Salvation is sought in automatic item generation (AIG), which makes it possible to algorithmically generate large numbers of items within a short period of time on the basis of cognitively sound item templates (Gierl & Haladyna, 2013; Gierl et al., 2015). The present study psychometrically analyses 35 cognitive item models that were developed by a team of national subject matter experts and psychometricians and then used for algorithmically producing items for the mathematical domain of numbers & shapes for Grades 1, 3, 5, and 7 of the Luxembourgish school system. Each item model was administered in 6 experimentally varied versions to investigate the impact of a) the context the mathematical problem was presented in, and b) problem characteristics that cognitive psychology has identified as influencing the problem-solving process. Based on samples from Grade 1 (n = 5963), Grade 3 (n = 5527), Grade 5 (n = 5291), and Grade 7 (n = 3018) collected within the annual Épreuves standardisées, this design allows for evaluating whether the psychometric characteristics of the items produced per model a) are stable, b) can be predicted by problem characteristics, and c) are unbiased towards subgroups of students known to be disadvantaged in the Luxembourgish school system. After item calibration using the 1-PL model, each cognitive model was analyzed in depth by descriptive comparisons of the resulting IRT parameters and by estimating the impact of the manipulated problem characteristics on item difficulty using the linear logistic test model (LLTM; Fischer, 1973). Results are truly promising and show negligible effects of different problem contexts on item difficulty and reasonably stable effects of altered problem characteristics. Thus, the majority of developed cognitive models could be used to generate a huge number of items (> 10,000,000) for the domain of numbers & operations with known psychometric properties, without the need for expensive field trials. We close by discussing lessons learned from item difficulty prediction per model and by highlighting differences between the Grades.
References:
Fischer, G. H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 36, 359-374.
Gierl, M. J., & Haladyna, T. M. (Eds.). (2013). Automatic item generation: Theory and practice. New York, NY: Routledge.
Gierl, M. J., Lai, H., Hogan, J., & Matovinovic, D. (2015). A Method for Generating Educational Test Items That Are Aligned to the Common Core State Standards. Journal of Applied Testing Technology, 16(1), 1–18.
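As a rough illustration of the LLTM logic described in this abstract, the sketch below decomposes already calibrated 1-PL item difficulties into effects of manipulated problem characteristics via least squares, a common two-step approximation of the LLTM. The design matrix and difficulty values are invented for the example and are not the study's calibration results.

```python
# Two-step LLTM-style decomposition: beta_i ≈ sum_k q_ik * eta_k, estimated by
# regressing calibrated 1-PL difficulties on the design matrix of item radicals.
import numpy as np

# Design matrix W: which radicals (problem characteristics) each generated item contains
# columns: intercept, "carry-over needed", "large numbers", "story context"
W = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
], dtype=float)

# 1-PL difficulties of the six generated items (made-up calibration results)
beta = np.array([-1.1, -0.3, 0.1, 0.9, -1.0, 1.0])

# Least-squares estimate of the basic parameters eta
eta, _, _, _ = np.linalg.lstsq(W, beta, rcond=None)
print("estimated radical effects:", np.round(eta, 2))

# Predicted difficulties from the radicals alone; a small residual suggests the
# cognitive item model explains the calibrated difficulties well
print("predicted difficulties:  ", np.round(W @ eta, 2))
print("residual SD:             ", np.round(np.std(beta - W @ eta), 3))
```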

Full Text
Ist individuelle Diagnostik im Schulmonitoring möglich? Neue Methoden versprechen einen diagnostischen Mehrwert der Épreuves Standardisées
Sonnleitner, Philipp UL

in Georges, Carrie; Hoffmann, Danielle; Hornung, Caroline (Eds.) et al LEARN Newsletter - Editioun 2022 (2022)

The research project FAIR-ITEMS (C19/SC/13650128), funded by the Fonds National de la Recherche (FNR) and based at the Luxembourg Centre for Educational Testing (LUCET, lucet.uni.lu), is currently investigating to what extent the ÉpStan mathematics tests could provide additional diagnostic information about students. Since the tests are expected to deliver more in this respect, additional expertise is applied both in item construction and in the analysis, using state-of-the-art psychometric methods.

Full Text
Peer Reviewed
Résultats du monitoring scolaire national ÉpStan dans le contexte de la pandémie de COVID-19
Fischbach, Antoine UL; Colling, Joanne UL; Levy, Jessica UL et al

in LUCET; SCRIPT (Eds.) Rapport national sur l’éducation au Luxembourg 2021 (2021)

Full Text
Peer Reviewed
Befunde aus dem nationalen Bildungsmonitoring ÉpStan vor dem Hintergrund der COVID-19-Pandemie
Fischbach, Antoine UL; Colling, Joanne UL; Levy, Jessica UL et al

in LUCET; SCRIPT (Eds.) Nationaler Bildungsbericht Luxemburg 2021 (2021)

Peer Reviewed
Befunde aus dem nationalen Bildungsmonitoring ÉpStan vor dem Hintergrund der COVID-19-Pandemie (Supplement)
Fischbach, Antoine UL; Colling, Joanne UL; Levy, Jessica UL et al

in LUCET; SCRIPT (Eds.) Nationaler Bildungsbericht Luxemburg 2021 (2021)

Peer Reviewed
Résultats du monitoring scolaire national ÉpStan dans le contexte de la pandémie de COVID-19 (Matériels supplémentaires)
Fischbach, Antoine UL; Colling, Joanne UL; Levy, Jessica UL et al

in LUCET; SCRIPT (Eds.) Rapport national sur l’éducation au Luxembourg 2021 (2021)

Peer Reviewed
Using Automatic Item Generation in the context of the Épreuves Standardisées (ÉpStan): A pilot study on effects of altering item characteristics and semantic embeddings
Michels, Michael Andreas UL; Hornung, Caroline UL; Inostroza Fernandez, Pamela Isabel UL et al

Scientific Conference (2021, November 11)

Assessing mathematical skills in national school monitoring programs such as the Luxembourgish Épreuves Standardisées (ÉpStan) creates a constant demand for high-quality items whose development is both expensive and time-consuming. One approach to providing high-quality items in a more efficient way is Automatic Item Generation (AIG; Gierl, 2013). Instead of creating single items, cognitive item models form the basis for the algorithmic generation of a large number of new items with supposedly identical item characteristics. The stability of item characteristics is questionable, however, when different semantic embeddings are used to present the mathematical problems (Dewolf, Van Dooren, & Verschaffel, 2017; Hoogland et al., 2018). Given culture-specific knowledge differences in students, it is not guaranteed that illustrations showing everyday activities do not differentially impact item difficulty (Martin et al., 2012). Moreover, the prediction of empirical item difficulties based on theoretical rationales has proved to be difficult (Leighton & Gierl, 2011). This paper presents a first attempt to better understand the impact of (a) different semantic embeddings and (b) problem-related variations on mathematics items in grades 1 (n = 2338), 3 (n = 3835) and 5 (n = 3377) within the context of ÉpStan. In total, 30 mathematical problems were presented in up to 4 different versions, either using different but equally plausible semantic contexts or altering the problem’s content characteristics. Preliminary results of IRT scaling and DIF analyses reveal substantial effects of both the embedding and the problem characteristics on item difficulties, overall as well as at the subgroup level. Further results and implications for developing mathematics items, and specifically for using AIG in the course of ÉpStan, will be discussed.
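For readers unfamiliar with AIG, the toy sketch below shows the template idea in miniature: one hypothetical item model, crossed with different semantic embeddings and a manipulated problem characteristic, yields many surface items. The contexts, names, and number ranges are made up and do not reflect the ÉpStan item models.

```python
# Toy automatic item generation: a single cognitive item model (two-addend addition)
# instantiated with varying semantic embeddings and a controlled "carry-over" radical.
import itertools
import random

random.seed(1)

CONTEXTS = {
    "shopping": "{name} buys {a} apples and then {b} more. How many apples does {name} have now?",
    "classroom": "{name} collects {a} pencils and finds {b} more. How many pencils does {name} have now?",
}
NAMES = ["Lea", "Ben", "Aylin", "Marco"]

def generate_items(carry_over: bool, n: int = 3):
    """Generate n addition items; the 'carry_over' radical controls item difficulty."""
    items = []
    for context, name in itertools.islice(
        itertools.cycle(itertools.product(CONTEXTS, NAMES)), n
    ):
        while True:
            a, b = random.randint(11, 89), random.randint(2, 9)
            needs_carry = (a % 10) + b >= 10
            if needs_carry == carry_over:
                break
        stem = CONTEXTS[context].format(name=name, a=a, b=b)
        items.append({"stem": stem, "key": a + b, "context": context, "carry": carry_over})
    return items

for item in generate_items(carry_over=True):
    print(item["stem"], "->", item["key"])
```

The pilot study's question can be read directly off this structure: ideally, swapping the context changes nothing psychometrically, while toggling a radical such as carry-over shifts difficulty in a stable, predictable way.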

Impulsvortrag: Evaluierung und Standardisierung des "Sproochentests"
Krämer, Charlotte UL; Sonnleitner, Philipp UL

Presentation (2021, October 27)

Full Text
Peer Reviewed
The factor structure of mathematical abilities in Luxembourg’s national school monitoring: Its stability over elementary school and relations to gender, language background, and SES
Sonnleitner, Philipp UL; Hornung, Caroline UL

Scientific Conference (2021, July)

Mathematics skills are the foundation of modern societies, especially those based on a knowledge economy. The age of digitalization renders mathematics education even more crucial since it forms the starting point for all STEM-related fields. Consequently, mathematics is at the core of numerous educational Large-Scale Assessments at the international (e.g., PISA, TIMSS) or national level (e.g., NAEP, NEPS, SNSA). Although the underlying test development frameworks are most often multi-dimensional or hierarchical, psychometric analyses usually focus on a single latent factor that represents a rather vague general mathematical ability. How and to what extent this simplification affects educational studies that rely on these data remains unclear. The present study takes Luxembourg’s national school monitoring program ÉpStan as an example to tackle this question and clarify the consequences. ÉpStan’s mathematics test is conducted annually in elementary school Grades 1, 3, and 5 and comprises around 50 to 70 items. Since ÉpStan captures the competencies of all students every two years, each analysis will be based on the full cohort (n > 5,000). First, we will investigate whether the curriculum-based test framework for mathematics can psychometrically be represented in a related (multi-dimensional) confirmatory factor model including the domains numbers & operations and space & form. This will be done in Grades 1, 3, and 5. Second, we will study the factor model’s cross-sectional stability within each Grade (over three consecutive years) and its longitudinal stability between Grades. Finally, we will study the factors’ relations to students’ cognitive and sociodemographic characteristics and compare the results with correlations found using the most widely used one-dimensional model of mathematical abilities. Based on the results, we will not only discuss implications for educational studies that often uncritically make use of large-scale assessment data, but also highlight the consequences for group-level feedback based on such assessments.
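The planned comparison of a one-dimensional with a multi-dimensional factor model can be sketched roughly as follows. This is illustrative only: the indicator names, the data file, the semopy package, and the fit-index selection are assumptions, and ÉpStan's dichotomous items would in practice call for a categorical estimator.

```python
# Illustrative comparison of a one-factor vs. a two-factor CFA of mathematics items;
# indicator names and data are hypothetical, semopy is an assumed tool choice.
import pandas as pd
import semopy

two_factor = """
NumbersOperations =~ num1 + num2 + num3 + num4
SpaceForm =~ geo1 + geo2 + geo3 + geo4
"""
one_factor = """
GeneralMath =~ num1 + num2 + num3 + num4 + geo1 + geo2 + geo3 + geo4
"""

df = pd.read_csv("epstan_math_items.csv")  # hypothetical scored item data

for name, desc in [("two-factor", two_factor), ("one-factor", one_factor)]:
    model = semopy.Model(desc)
    model.fit(df)
    stats = semopy.calc_stats(model)
    # compare global fit of the competing measurement models
    print(name, stats[["CFI", "RMSEA", "AIC"]].round(3).to_dict("records")[0])
```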

Full Text
Der Elefant im Klassenzimmer: Faire schulische Leistungsbeurteilung
Sonnleitner, Philipp UL

in Georges, Carrie; Hoffmann, Danielle; Hornung, Caroline (Eds.) et al LEARN Newsletter - Editioun 2021 (2021)

Full Text
Matheaufgaben vom Fließband? Neues Projekt erforscht Fairness und Potenzial automatischer Aufgabengenerierung
Sonnleitner, Philipp UL

in Georges, Carrie; Hoffmann, Danielle; Hornung, Caroline (Eds.) et al LEARN Newsletter - Editioun 2021 (2021)
