References of "Cardoso-Leite, Pedro"
Peer Reviewed
CogPonder: Towards a Computational Framework of General Cognitive Control
Ansarinia, Morteza UL; Cardoso-Leite, Pedro UL

Poster (2023, August)

Current computational models of cognitive control exhibit notable limitations. In machine learning, artificial agents are now capable of performing complex tasks but often ignore critical constraints such as resource limitations and how long it takes for the agent to make decisions and act. Conversely, cognitive control models in psychology are limited in their ability to tackle complex tasks (e.g., play video games) or generalize across a battery of simple cognitive tests. Here we introduce CogPonder, a flexible, differentiable, cognitive control framework that is inspired by the Test-Operate-Test-Exit (TOTE) architecture in psychology and the PonderNet framework in machine learning. CogPonder functionally decouples the act of control from the controlled processes by introducing a controller that acts as a wrapper around any end-to-end deep learning model and decides when to terminate processing and output a response, thus producing both a response and a response time. Our experiments show that CogPonder effectively learns from data to generate behavior that closely resembles human responses and response times in two classic cognitive tasks. This work demonstrates the value of this new computational framework and offers promising new research prospects for both the psychological and computer sciences.
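The halting idea behind such a controller can be sketched in a few lines. This is a toy illustration under stated assumptions, not the authors' implementation: `ponder`, `step_fn`, and `halt_fn` are hypothetical names, and a scalar "evidence" state stands in for a deep network's hidden state.

```python
import random

def ponder(step_fn, halt_fn, x, max_steps=20, rng=None):
    """Run an iterative model until a controller decides to halt.

    step_fn: updates a hidden state from the input (one processing step).
    halt_fn: maps the current state to a halting probability in [0, 1].
    Returns the final state (the "response") and the number of steps
    taken (a proxy for response time).
    """
    rng = rng or random.Random(0)
    state = 0.0
    for t in range(1, max_steps + 1):
        state = step_fn(state, x)
        if rng.random() < halt_fn(state) or t == max_steps:
            return state, t

# Toy instantiation: evidence accumulates toward the input value, and
# halting becomes more likely as the accumulated evidence grows.
response, rt = ponder(
    step_fn=lambda s, x: s + 0.5 * (x - s),   # leaky accumulation
    halt_fn=lambda s: min(1.0, abs(s) / 2.0),  # confidence-based halting
    x=1.0,
)
```

The key property, as in the abstract, is that the same wrapper produces both a response (the final state) and a response time (the number of pondering steps).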

Peer Reviewed
The impact of cognitive characteristics and image-based semantic embeddings on item difficulty
Inostroza Fernandez, Pamela Isabel UL; Michels, Michael Andreas UL; Hornung, Caroline UL et al

Scientific Conference (2023, April 14)

Today’s educational field has a tremendous hunger for valid and psychometrically sound items to reliably track and model students’ learning processes. Educational large-scale assessments, formative classroom assessment, and lately, digital learning platforms require a constant stream of high-quality, unbiased items. However, traditional test item development ties up a significant amount of time from subject matter experts, pedagogues, and psychometricians and may no longer be suited to today’s demands. A remedy is sought in automatic item generation (AIG), which makes it possible to generate multiple items within a short period of time from cognitively sound item templates by using algorithms (Gierl, Lay & Tanygin, 2021). Using images or other pictorial elements is a prominent way to present mathematical tasks, as, e.g., in the Trends in International Mathematics and Science Study (TIMSS; Mullis et al., 2009) and the Programme for International Student Assessment (PISA; OECD, 2013). Research on using images in test items shows ambiguous results depending on their function and perception (Hoogland et al., 2018; Lindner et al., 2018; Lindner, 2020). Thus, despite their high importance, the effects of image-based semantic embeddings and their potential interplay with the cognitive characteristics of items are hardly studied. The use of image-based semantic embeddings instead of mainly text-based items will nevertheless increase, especially in contexts with highly heterogeneous student language backgrounds. The present study psychometrically analyses cognitive item models that were developed by a team of national subject matter experts and psychometricians and then used for algorithmically producing items for the mathematical domain of numbers & operations for Grades 1, 3, and 5 of the Luxembourgish school system.
Each item model was administered in 6 experimentally varied versions to investigate the impact of a) the context the mathematical problem was presented in, and b) problem characteristics which cognitive psychology has identified as influencing the problem-solving process. Based on samples from Grade 1 (n = 5963), Grade 3 (n = 5527), and Grade 5 (n = 5291) collected within the annual Épreuves standardisées, this design allows for evaluating whether the psychometric characteristics of the items produced per model are a) stable, b) predictable from problem characteristics, and c) unbiased towards subgroups of students (known to be disadvantaged in the Luxembourgish school system). The developed cognitive models worked flawlessly as a basis for generating item instances. All 348 generated items passed the ÉpStan quality criteria, which correspond to standard IRT quality criteria (rit > .25; outfit > 1.2). All 24 cognitive models could be fully identified either by cognitive aspects alone or by a mixture of cognitive aspects and semantic embeddings; one model could be fully described by the different embeddings used. Approximately half of the cognitive models could fully explain all generated and administered items from these models, i.e., no outliers were identified. This remained constant over all grades. With the exception of one cognitive model, we could identify the cognitive factors that determined item difficulty. These factors included well-known aspects such as inverse ordering, tie or order effects in additions, number range, odd or even numbers, borrowing/carry-over effects, or the number of elements to be added. Especially in Grade 1, the semantic embedding the problem was presented in impacted item difficulty in most models (80%). This clearly decreased in Grades 3 and 5, pointing to older students’ greater ability to focus on the content of mathematical problems.
Each identified factor was analyzed in terms of subgroup differences, and about half of the models were affected by such effects. Gender had the most impact, followed by self-concept and socioeconomic status. Interestingly, those differences were mostly found for cognitive factors (23) and less for factors related to the embedding (6). In sum, the results are truly promising and show that item development based on cognitive models not only provides the opportunity to apply automatic item generation but also to create item pools with at least approximately known item difficulty. Thus, the majority of the cognitive models developed in this study could be used to generate a huge number of items (> 10.000.000) for the domain of numbers & operations without the need for expensive field trials. A necessary precondition for this is the consideration of the semantic embedding the problems are presented in, especially in the lower Grades. It also has to be stated that modeling in Grade 1 was more challenging due to unforeseen interactions and transfer effects between items. We will end our presentation by discussing lessons learned from the models where prediction was less successful and by highlighting differences between the Grades.

Peer Reviewed
Validation and Psychometric Analysis of 32 cognitive item models spanning Grades 1 to 7 in the mathematical domain of numbers & operations
Michels, Michael Andreas UL; Hornung, Caroline UL; Gamo, Sylvie UL et al

Scientific Conference (2022, November)

Today’s educational field has a tremendous hunger for valid and psychometrically sound items to reliably track and model students’ learning processes. Educational large-scale assessments, formative classroom assessment, and lately, digital learning platforms require a constant stream of high-quality, unbiased items. However, traditional test item development ties up a significant amount of time from subject matter experts, pedagogues, and psychometricians and may no longer be suited to today’s demands. A remedy is sought in automatic item generation (AIG), which makes it possible to generate multiple items within a short period of time from cognitively sound item templates by using algorithms (Gierl & Haladyna, 2013; Gierl et al., 2015). The present study psychometrically analyses 35 cognitive item models that were developed by a team of national subject matter experts and psychometricians and then used for algorithmically producing items for the mathematical domain of numbers & operations for Grades 1, 3, 5, and 7 of the Luxembourgish school system. Each item model was administered in 6 experimentally varied versions to investigate the impact of a) the context the mathematical problem was presented in, and b) problem characteristics which cognitive psychology has identified as influencing the problem-solving process. Based on samples from Grade 1 (n = 5963), Grade 3 (n = 5527), Grade 5 (n = 5291), and Grade 7 (n = 3018) collected within the annual Épreuves standardisées, this design allows for evaluating whether the psychometric characteristics of the items produced per model are a) stable, b) predictable from problem characteristics, and c) unbiased towards subgroups of students (known to be disadvantaged in the Luxembourgish school system).
After item calibration using the 1-PL model, each cognitive model was analyzed in depth by descriptive comparisons of the resulting IRT parameters and by estimating the impact of the manipulated problem characteristics on item difficulty using the linear logistic test model (LLTM; Fischer, 1973). The results are truly promising and show negligible effects of different problem contexts on item difficulty and reasonably stable effects of altered problem characteristics. Thus, the majority of the developed cognitive models could be used to generate a huge number of items (> 10.000.000) for the domain of numbers & operations with known psychometric properties without the need for expensive field trials. We end by discussing lessons learned from item difficulty prediction per model and by highlighting differences between the Grades. References: Fischer, G. H. (1973). The linear logistic test model as an instrument in educational research. Acta Psychologica, 36, 359-374. Gierl, M. J., & Haladyna, T. M. (Eds.). (2013). Automatic item generation: Theory and practice. New York, NY: Routledge. Gierl, M. J., Lai, H., Hogan, J., & Matovinovic, D. (2015). A method for generating educational test items that are aligned to the Common Core State Standards. Journal of Applied Testing Technology, 16(1), 1–18.
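For reference, the LLTM cited above decomposes each Rasch (1-PL) item difficulty into a weighted sum of elementary cognitive operations; in Fischer's (1973) notation, roughly:

```latex
% Rasch / 1-PL model: probability that student v solves item i
P(X_{vi} = 1) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}

% LLTM restriction: item difficulty as a linear combination of
% elementary cognitive operations (the manipulated problem characteristics)
\beta_i = \sum_{k=1}^{K} q_{ik}\, \eta_k + c
```

Here \(q_{ik}\) encodes whether (or how strongly) operation \(k\) is involved in item \(i\), \(\eta_k\) is the estimated difficulty contribution of that operation, and \(c\) is a normalization constant. Stable, well-predicted \(\eta_k\) values are what allow new items to be generated with known difficulty.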

Linking Theories and Methods in Cognitive Sciences via Joint Embedding of the Scientific Literature: The Example of Cognitive Control
Ansarinia, Morteza UL; Schrater, Paul; Cardoso-Leite, Pedro UL

E-print/Working paper (2022)

Traditionally, the theory and practice of Cognitive Control are linked via literature reviews by human domain experts. This approach, however, is inadequate to track the ever-growing literature. It may also be biased, and yield redundancies and confusion. Here we present an alternative approach. We performed automated text analyses on a large body of scientific texts to create a joint representation of tasks and constructs. More specifically, 385,705 scientific abstracts were first mapped into an embedding space using a transformer-based language model. Document embeddings were then used to identify a task-construct graph embedding that grounds constructs on tasks and supports nuanced meanings of the constructs by taking advantage of constrained random walks in the graph. This joint task-construct graph embedding can be queried to generate task batteries targeting specific constructs, may reveal knowledge gaps in the literature, and may inspire new tasks and novel hypotheses.
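A toy version of the constrained-random-walk idea can be sketched as follows. The graph, node names, and walk scheme are illustrative inventions, not the paper's data or algorithm; in the paper, edges are derived from the embeddings of 385,705 abstracts.

```python
import random
from collections import Counter

# Toy bipartite task-construct graph (edges stand in for co-mentions in
# abstracts; all names are illustrative only).
graph = {
    "stroop":         ["inhibition", "attention"],
    "n-back":         ["working memory", "attention"],
    "task-switch":    ["flexibility", "inhibition"],
    "inhibition":     ["stroop", "task-switch"],
    "attention":      ["stroop", "n-back"],
    "working memory": ["n-back"],
    "flexibility":    ["task-switch"],
}

def random_walks(graph, start, n_walks=200, length=4, seed=0):
    """Constrained random walks: each step follows an existing edge, so a
    walk alternates between tasks and constructs. Co-visit counts give a
    crude joint similarity between the start node and its neighborhood."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        node = start
        for _ in range(length):
            node = rng.choice(graph[node])
            visits[node] += 1
    return visits

visits = random_walks(graph, "inhibition")
# Tasks sharing edges with "inhibition" dominate its walk neighborhood,
# which is the sense in which constructs are "grounded on tasks".
```

Querying such visit profiles for a target construct is one simple way to assemble a task battery targeting it, which is the kind of query the abstract describes.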

Full Text
Peer Reviewed
CogEnv: A Reinforcement Learning Environment for Cognitive Tests
Ansarinia, Morteza UL; Clocher, Brice UL; Defossez, Aurélien et al

in 2022 Conference on Cognitive Computational Neuroscience (2022)

Understanding human cognition involves developing computational models that mimic and possibly explain behavior; these are models that “act” like humans and produce similar outputs when facing the same inputs. To facilitate the development of such models, and ultimately further our understanding of the human mind, we created CogEnv: a reinforcement learning environment where artificial agents interact with and learn to perform cognitive tests and can then be directly compared to humans. By leveraging CogEnv, cognitive and AI scientists can join efforts to better understand human cognition: the relative performance profiles of human and artificial agents may provide new insights on the computational basis of human cognition and on what human-like abilities artificial agents may lack.
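A cognitive test wrapped as an RL environment can be sketched with a minimal Gym-style loop. This is an illustrative stand-in under stated assumptions, not the CogEnv API: `TwoBackEnv` and its reward scheme are invented for the example.

```python
import random

class TwoBackEnv:
    """Minimal sketch of a cognitive test as an RL environment: the agent
    sees a stream of symbols and should press "match" (action 1) whenever
    the current symbol equals the one shown two trials earlier. The
    reset()/step() interface loosely follows Gym-style conventions."""

    def __init__(self, n_trials=20, seed=0):
        self.n_trials = n_trials
        self.rng = random.Random(seed)

    def reset(self):
        self.stream = [self.rng.choice("AB") for _ in range(self.n_trials)]
        self.t = 0
        return self.stream[0]

    def step(self, action):
        is_match = self.t >= 2 and self.stream[self.t] == self.stream[self.t - 2]
        reward = 1 if action == int(is_match) else 0  # 1 per correct response
        self.t += 1
        done = self.t >= self.n_trials
        obs = None if done else self.stream[self.t]
        return obs, reward, done

env = TwoBackEnv()
obs, total, done = env.reset(), 0, False
while not done:
    obs, reward, done = env.step(0)  # a trivial "never press match" policy
    total += reward
```

An agent trained in such an environment produces trial-by-trial responses in the same format as a human participant, which is what makes the direct comparison described in the abstract possible.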

Full Text
Digitalisation du diagnostic pédagogique : De l’évolution à la révolution
Fischbach, Antoine UL; Greiff, Samuel UL; Cardoso-Leite, Pedro UL et al

in LUCET; SCRIPT (Eds.) Rapport national sur l’éducation au Luxembourg 2021 (2021)

Peer Reviewed
Training Cognition with Video Games
Cardoso-Leite, Pedro UL; Ansarinia, Morteza UL; Schmück, Emmanuel UL et al

in Cohen Kadosh, Kathrin (Ed.) The Oxford Handbook of Developmental Cognitive Neuroscience (2021)

This chapter reviews the behavioral and neuroimaging scientific literature on the cognitive consequences of playing various genres of video games. The available research highlights that not all video games have similar cognitive impact; action video games as defined by first- and third-person shooter games have been associated with greater cognitive enhancement, especially when it comes to top-down attention, than puzzle or life-simulation games. This state of affairs suggests that specific game mechanics need to be embodied in a video game for it to enhance cognition. These hypothesized game mechanics are reviewed; yet, the authors note that the advent of more complex, hybrid video games poses new research challenges and call for a more systematic assessment of how specific video game mechanics relate to cognitive enhancement.

Full Text
Digitalisierung der pädagogischen Diagnostik: Von Evolution zu Revolution
Fischbach, Antoine UL; Greiff, Samuel UL; Cardoso-Leite, Pedro UL et al

in LUCET; SCRIPT (Eds.) Nationaler Bildungsbericht Luxemburg 2021 (2021)

A Mixture of Generative Models Strategy Helps Humans Generalize across Tasks
Herce Castañón, Santiago; Cardoso-Leite, Pedro UL; Altarelli, Irene et al

E-print/Working paper (2021)

What role do generative models play in generalization of learning in humans? Our novel multi-task prediction paradigm—where participants complete four sequence learning tasks, each being a different instance of a common generative family—allows the separate study of within-task learning (i.e., finding the solution to each of the tasks), and across-task learning (i.e., learning a task differently because of past experiences). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models which compete against each other to explain the experienced sequences. This framework predicts specific error patterns, as well as a gating mechanism for learning, both of which are observed in the data.

Peer Reviewed
Tackling educational inequalities using school effectiveness measures
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

Scientific Conference (2020, November 11)

Peer Reviewed
Games for enhancing cognitive abilities
Cardoso-Leite, Pedro UL; Joessel, Augustin; Bavelier, Daphne

in Plass, Jan; Mayer, Richard E; Homer, Bruce D (Eds.) Handbook of Game-based Learning (2020)

Full Text
Peer Reviewed
Contrasting Classical and Machine Learning Approaches in the Estimation of Value-Added Scores in Large-Scale Educational Data
Levy, Jessica UL; Mussack, Dominic UL; Brunner, Martin et al

in Frontiers in Psychology (2020), 11

There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these “imported” techniques to education data, more precisely VA scores, and assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as different machine learning models. However, across all schools, VA scores from the different model types correlated highly. Yet, the percentage of disagreements as compared to multilevel models was not trivial, and real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
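The classical regression-based VA computation can be sketched on simulated data. Everything below is synthetic and illustrative; the paper's actual models (multilevel models, random forests, etc.) and the ÉpStan/school-monitoring data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: prior achievement (grade 1) predicts later achievement
# (grade 3); schools differ by a small "value-added" effect.
n_schools, n_per_school = 10, 50
school = np.repeat(np.arange(n_schools), n_per_school)
prior = rng.normal(size=school.size)
true_va = rng.normal(scale=0.3, size=n_schools)
later = 0.7 * prior + true_va[school] + rng.normal(scale=0.5, size=school.size)

# Classical linear-regression VA: regress later achievement on prior
# achievement, then average the residuals within each school.
X = np.column_stack([np.ones_like(prior), prior])
coef, *_ = np.linalg.lstsq(X, later, rcond=None)
residuals = later - X @ coef
va_scores = np.array([residuals[school == s].mean() for s in range(n_schools)])

# With enough students per school, the estimated VA tracks the simulated one.
corr = np.corrcoef(va_scores, true_va)[0, 1]
```

A machine learning variant swaps the linear fit for, e.g., a random forest while keeping the same residual-averaging step, which is why scores from different model types can correlate highly yet still disagree for individual schools.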

Media use, attention, mental health and academic performance among 8 to 12 year old children
Cardoso-Leite, Pedro UL; Buchard, Albert; Tissieres, Isabel et al

E-print/Working paper (2020)

The Structure of Behavioral Data
Defossez, Aurélien; Ansarinia, Morteza UL; Clocher, Brice UL et al

E-print/Working paper (2020)

For more than a century, scientists have been collecting behavioral data--an increasing fraction of which is now being publicly shared so other researchers can reuse them to replicate, integrate or extend past results. Although behavioral data is fundamental to many scientific fields, there is currently no widely adopted standard for formatting, naming, organizing, describing or sharing such data. This lack of standardization is a major bottleneck for scientific progress. Not only does it prevent the effective reuse of data, it also affects how behavioral data in general are processed, as non-standard data calls for custom-made data analysis code and prevents the development of efficient tools. To address this problem, we develop the Behaverse Data Model (BDM), a standard for structuring behavioral data. Here we focus on major concepts in behavioral data, leaving further details and developments to the project's website (https://behaverse.github.io/data-model/).

Peer Reviewed
A Formal Framework for Structured N-Back Stimuli Sequences
Ansarinia, Morteza UL; Mussack, Dominic UL; Schrater, Paul et al

Scientific Conference (2019, September 15)

Full Text
Peer Reviewed
Principles underlying the design of a cognitive training game as a research framework
Schmück, Emmanuel UL; Flemming, Rory; Schrater, Paul et al

in 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) (2019)

Action video games have great potential as cognitive training instruments because of their data collection efficiency relative to standard testing, their natural motivational pull, and their demonstrated benefits for broad aspects of cognition. However, commercial video games do not allow researchers full control over games' unique features and parameters, while presently available scientific games violate key criteria, generally lack appeal, and do not collect enough data for principled exploration of the game design space. To capitalize on the benefits of action video games and facilitate a systematic, scientific exploration of video games and cognition, we propose the Cognitive Training Game Framework (CTGF). The CTGF addresses criteria that we believe are important for gamifying an experimental environment, such as modularity, accessibility, adaptivity, and variety. By offering the potential to collect large data sets and to systematically explore scientific hypotheses in a controlled environment, the resulting framework will make significant contributions to cognitive training research.

Peer Reviewed
A Multi-Objective Optimization Algorithm to Generate Unbiased Stimuli Sequences for Cognitive Tasks
Ansarinia, Morteza UL; Mussack, Dominic; Schrater, Paul et al

Poster (2019)

Cognitive scientists want to ensure that particular cognitive tasks target particular cognitive functions that can be mapped to stable neural markers. Numerous cognitive tasks, like the n-back, involve generating sequences of trials that satisfy certain statistical properties. The common approach to generating these sequences, however, lacks a theoretical framework and induces unintentional structure in the sequences, which affects behavioral performance and might bias people's cognitive strategies when completing a task. For example, people might exploit local properties of a random sequence in their decision-making process. We argue that optimized experimental design requires cognitive tasks to be served by stimulus sequence generators that satisfy multiple constraints, on both the global and the local structure of the sequence, and that these sequence properties need to be systematically incorporated in the behavioral data analysis pipeline. We then develop a framework that reformulates the sequence generation process as a compositional soft constraint satisfaction problem and offer a multi-objective, genetic-algorithm-based method to generate controlled sequences under behavioral and neural constraints. This approach provides a systematic and coherent framework for handling stimulus sequences, which in turn will impact the insights that can be gained from the behavioral and neural data collected from people performing cognitive tasks using those sequences.
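A minimal version of the genetic-algorithm idea, with one global and one local soft constraint, might look like the sketch below. The cost function, operators, and parameters are illustrative assumptions; the actual method is multi-objective and handles richer behavioral and neural constraints.

```python
import random

def cost(seq, target_rate=0.3, n=2):
    """Soft-constraint cost for an n-back stimulus sequence: deviation of
    the global 2-back match rate from a target, plus a local penalty for
    immediate repeats (which invite exploitable response strategies)."""
    matches = sum(seq[i] == seq[i - n] for i in range(n, len(seq)))
    rate = matches / (len(seq) - n)
    repeats = sum(seq[i] == seq[i - 1] for i in range(1, len(seq)))
    return abs(rate - target_rate) + 0.05 * repeats

def evolve(alphabet="ABCD", length=30, pop=60, gens=200, seed=0):
    """Elitist genetic algorithm: keep the better half of the population
    and refill it with point-mutated copies of the survivors."""
    rng = random.Random(seed)
    population = [[rng.choice(alphabet) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        survivors = population[: pop // 2]       # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.choice(alphabet)  # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=cost)

best = evolve()
```

Because the constraints enter as weighted penalty terms, new global or local requirements can be composed into `cost` without changing the search loop, which is the sense in which the formulation is a compositional soft constraint satisfaction problem.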
