Reference : A Mixture of Generative Models Strategy Helps Humans Generalize across Tasks
E-prints/Working papers : Already available on another site
Social & behavioral sciences, psychology : Neurosciences & behavior
Herce Castañón, Santiago
Cardoso-Leite, Pedro [University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Behavioural and Cognitive Sciences (DBCS)]
Altarelli, Irene
Green, C. Shawn
Schrater, Paul
Bavelier, Daphne
Abstract [en] : What role do generative models play in the generalization of learning in humans? Our novel multi-task prediction paradigm—where participants complete four sequence learning tasks, each being a different instance of a common generative family—allows the separate study of within-task learning (i.e., finding the solution to each of the tasks) and across-task learning (i.e., learning a task differently because of past experiences). The very first responses participants make in each task are not yet affected by within-task learning and thus reflect their priors. Our results show that these priors change across successive tasks, increasingly resembling the underlying generative family. We conceptualize multi-task learning as arising from a mixture-of-generative-models learning strategy, whereby participants simultaneously entertain multiple candidate models which compete against each other to explain the experienced sequences. This framework predicts specific error patterns, as well as a gating mechanism for learning, both of which are observed in the data.
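The mixture-of-generative-models idea described in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the paper's actual model: a few candidate Bernoulli "generative models" compete, via Bayesian weight updating, to explain an observed binary sequence, and the mixture's prediction is the weighted average of the candidates' predictions. All names, the choice of Bernoulli models, and the specific biases are assumptions made for illustration.

```python
# Toy sketch of a mixture-of-generative-models strategy (illustrative only):
# several candidate models compete to explain an observed sequence, and
# their mixture weights are updated by Bayes' rule after each observation.

def update_weights(weights, likelihoods):
    """One Bayesian update: scale each weight by its model's likelihood,
    then renormalize so the weights sum to 1."""
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

def mixture_prediction(weights, preds):
    """Mixture prediction: weighted average of each model's prediction."""
    return sum(w * p for w, p in zip(weights, preds))

# Three hypothetical candidate models, each a Bernoulli predictor with a
# fixed probability of emitting a 1.
biases = [0.2, 0.5, 0.8]
weights = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over candidate models

sequence = [1, 1, 0, 1, 1, 1]    # toy observed sequence, mostly 1s
for x in sequence:
    # Likelihood each candidate model assigns to the observation x.
    liks = [b if x == 1 else 1 - b for b in biases]
    weights = update_weights(weights, liks)

print(weights)                               # the 0.8-bias model dominates
print(mixture_prediction(weights, biases))   # mixture's next-step prediction
```

After a sequence dominated by 1s, most of the posterior weight concentrates on the high-bias candidate, so the mixture's prediction for the next observation moves toward that model's. This competition-by-reweighting is one simple way to cash out the abstract's notion of candidate models "competing against each other to explain the experienced sequences."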
FnR ; FNR11242114 > Pedro Cardoso-Leite > DIGILEARN > Scientifically Validated Digital Learning Environments > 01/06/2017 > 31/05/2022 > 2016


