Reference: Descriptors for reading: Which one(s) make the difference?
Document type: Scientific congresses, symposiums and conference proceedings: Unpublished conference
Disciplines: Social & behavioral sciences, psychology: Education & instruction; Arts & humanities: Languages & linguistics
Permalink: http://hdl.handle.net/10993/23283
Title: Descriptors for reading: Which one(s) make the difference?
Language: English
Author: Reichert, Monique
Publication date: May 2015
Peer reviewed: Yes
Audience: International
Event name: 12th EALTA Conference: Policy and Practice in Language Testing and Assessment
Event date: 26th to 31st May 2015
Event organizer: EALTA
Event place: Copenhagen, Denmark
Keywords: [en] reading competence; item difficulty; LLTM; cognitive validity; descriptors
Abstract: [en] Since 2008, all Luxembourg 9th-grade students have been tested annually on their German and French reading competence. One major aim of this standardized test program is to provide information about whether pupils can read at the level required for successful participation in subsequent learning contexts. All items developed for this program are described with regard to the national educational standards and levels. However, the standards descriptions are not precise enough to help item developers specify the measured construct, or to clarify the differences between adjacent levels. Accordingly, the feedback provided to stakeholders in education lacks the precision that would support more targeted measures. In order to provide additional, theory-based descriptors of reading competence, key aspects of Kintsch’s Construction-Integration model and of Khalifa and Weir’s model of reading were explained to the language teachers involved in the item development process. The teachers were then asked to rate 33 German and 34 French reading items with known empirical item characteristics with regard to these theoretical aspects. The item ratings were in turn linked to the empirical data collected in the 2013 test program. Based on the item-attribute assignments, ideal item-response patterns could be posited and compared to real examinees’ response patterns using a linear logistic test modeling (LLTM) approach. The results from the different steps show that the new theoretical attributes can serve both as meaningful descriptors of reading competence and as reliable predictors of item difficulty.
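For context on the modeling approach named in the abstract: the linear logistic test model (LLTM) constrains the Rasch model by decomposing each item difficulty into a weighted sum of attribute-based basic parameters. The formulation below is the standard, general LLTM only; the concrete attribute set (Q-matrix entries) and estimated weights used in this study are not reported in this record.

\[
P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
\qquad
\beta_i = \sum_{k=1}^{K} q_{ik}\, \eta_k + c,
\]

where \(\theta_v\) is the ability of examinee \(v\), \(\beta_i\) is the difficulty of item \(i\), \(q_{ik}\) is the load of cognitive attribute \(k\) on item \(i\) (here, derived from the teachers' item-attribute ratings), \(\eta_k\) is the basic parameter quantifying the contribution of attribute \(k\) to difficulty, and \(c\) is a normalization constant. Comparing the fit of this constrained model against the unrestricted Rasch model indicates how well the rated attributes account for the empirical item difficulties.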
Target audience: Researchers; Professionals