; ; Krieger, Florian, in Intelligence (2020), 82

Scores of commonly administered intelligence tests such as figural matrices are important correlates of external criteria. However, evidence of improving intelligence test scores through practice or coaching has been reported. Moreover, information about intelligence tests is widely and easily accessible (e.g., online tutorial videos). An open research question is whether watching such a video increases figural matrices test scores and affects the correlation with other intelligence tests. In two experiments (experiment 1: N = 112 psychology students; experiment 2: N = 229 teacher-education students), students were randomly assigned either to an experimental group that watched a short video (< 14 min) explaining a set of rules underlying figural matrices or to a control group that watched a task-irrelevant video of comparable duration. Afterwards, both groups worked on figural matrices. Prior to watching the video, all students completed an intelligence test. Results showed (1) substantially higher mean figural matrices test scores in the experimental groups than in the control groups (d ≥ 1.19) and (2) substantial correlations between figural matrices test scores and intelligence test scores in both the experimental and the control groups. These correlations were of comparable magnitude and did not differ between the groups (experiment 1: r ≈ .55; experiment 2: r ≈ .40). Implications of these findings are discussed.
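The claim that the correlations did not differ between groups (e.g., r ≈ .55 in both conditions of experiment 1) is the kind of comparison typically checked with a Fisher r-to-z test for two independent correlations. A minimal sketch of that test follows; the per-group sample sizes below are invented for illustration, since the abstract does not report them:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided z test for the difference between two independent
    correlations, using Fisher's r-to-z transformation."""
    z1 = math.atanh(r1)  # Fisher z of the first correlation
    z2 = math.atanh(r2)  # Fisher z of the second correlation
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under N(0, 1)
    return z, p

# Equal observed correlations (as reported) yield z = 0, p = 1,
# i.e., no evidence of a group difference. Group ns are hypothetical.
z, p = fisher_z_test(0.55, 56, 0.55, 56)
```

With equal input correlations the test statistic is exactly zero regardless of sample size; the sample sizes only matter for how small a difference the test could detect.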
Talic, Irma, Scientific Conference (2019, August 16)

; ; et al., in Higher Education (2018), 76

; Stadler, Matthias, in Computers in Human Behavior (2016)

Computer-based assessments of complex problem solving performance often take place in group settings such as classrooms and computer laboratories. Such computer-based procedures provide an excellent opportunity to examine setting effects that might occur while participants are tested in a non-group session online at a time and place of their own choosing. For this purpose, N = 273 teacher students were randomly assigned to one of two settings: the individual online condition (n = 216) or the computer laboratory group condition (n = 57). Strong factorial measurement invariance was evidenced. Participants performed significantly better in the individual online condition than in the group condition (knowledge acquisition: d = 0.38; knowledge application: d = 0.39). The worse performance in the group setting compared to the individual setting could be explained neither by exploration time nor by time on task. The internal validity of the experimental design strengthens the conclusion that setting-related differences in cognitive ability testing are not negligible but noteworthy.
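Both abstracts report group differences as Cohen's d, the difference between two group means scaled by the pooled standard deviation. A minimal sketch of that computation, using invented scores purely for illustration (not the studies' data):

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups,
    using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (Bessel-corrected)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical test scores for two small groups:
experimental = [22, 24, 25, 27, 28]
control = [18, 19, 20, 21, 22]
print(round(cohens_d(experimental, control), 2))  # prints 2.57
```

By convention, d ≈ 0.2 is considered small, 0.5 medium, and 0.8 large, which is why the d ≥ 1.19 reported in the first abstract counts as a substantial practice effect.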