Retter, Talia, in Frontiers in Systems Neuroscience (2021)

Exposure to a face can produce biases in the perception of subsequent faces. Typically, these face aftereffects are studied by adapting to an individual face or category (e.g., faces of a given gender) and can result in renormalization of perception such that the adapting face appears more neutral. These shifts are analogous to chromatic adaptation, where a renormalization for the average adapting color occurs. However, in color vision, adaptation can also adjust to the variance or range of colors in the distribution. We examined whether this variance or contrast adaptation also occurs for faces, using an objective EEG measure to assess response changes following adaptation. An average female face was contracted or expanded along the horizontal or vertical axis to form four images. Observers viewed a 20 s sequence of the four images presented in a fixed order at a rate of 6 Hz, while responses to the faces were recorded with EEG. A 6 Hz signal was observed over right occipito-temporal channels, indicating symmetric responses to the four images. This test sequence was repeated after 20 s of adaptation to alternations between two of the faces (e.g., horizontal contracted and expanded). This adaptation resulted in an additional signal at 3 Hz, consistent with asymmetric responses to adapted and non-adapted test faces. The adapting pairs have the same (undistorted) mean as the test sequence and thus should not bias responses driven only by the mean. Instead, the results are consistent with selective adaptation to the distortion axis. A 3 Hz signal was also observed after adapting to face pairs selected to induce a mean bias (e.g., expanded vertical and expanded horizontal), and this signal was not significantly different from that observed following adaptation to a single image that did not form part of the test sequence (e.g., a single image expanded both vertically and horizontally). In a further experiment, we found that this variance adaptation can also be observed behaviorally. Our results suggest that adaptation calibrates face perception not only for the average characteristics of the faces we experience but also for the gamut of faces to which we are exposed.
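The logic of the 6 Hz/3 Hz frequency-tagging measure in this design can be illustrated with a small simulation. The sketch below is not the study's analysis code: the sampling rate, response waveform, and amplitude values are arbitrary assumptions. It only shows that when the four images in a fixed-order 6 Hz sequence evoke equal responses, the spectrum contains energy at 6 Hz and its harmonics, whereas unequal responses to alternating adapted and non-adapted images add a component at 3 Hz.

```python
# Minimal simulation (not the authors' analysis pipeline): four images shown in a
# fixed order at 6 Hz, with hypothetical per-image response amplitudes.
# Equal responses -> energy only at 6 Hz and its harmonics; responses that
# alternate between adapted and non-adapted images -> an additional 3 Hz component.
import numpy as np

fs = 600            # sampling rate (Hz), assumed for illustration
image_rate = 6      # presentation rate (Hz), as in the test sequences
n_sec = 20          # sequence duration (s), as in the test sequences
samples_per_image = fs // image_rate

def simulate(amplitudes):
    """Each image evokes a half-sine 'response' scaled by its amplitude."""
    kernel = np.sin(np.linspace(0, np.pi, samples_per_image))
    n_images = image_rate * n_sec
    per_image = [amplitudes[i % len(amplitudes)] * kernel for i in range(n_images)]
    return np.concatenate(per_image)

def amplitude_at(signal, freq):
    """Amplitude of the frequency bin closest to `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

symmetric  = simulate([1.0, 1.0, 1.0, 1.0])   # no adaptation: equal responses
asymmetric = simulate([0.8, 1.0, 0.8, 1.0])   # adapted pair responds less

for name, sig in [("symmetric", symmetric), ("asymmetric", asymmetric)]:
    print(f"{name:10s}  3 Hz: {amplitude_at(sig, 3):.3f}   6 Hz: {amplitude_at(sig, 6):.3f}")
```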
Retter, Talia, in Neuroscience (2021), 472

Establishing consistent relationships between neural activity and behavior is a challenge in human cognitive neuroscience research. We addressed this issue using variable time constraints in an oddball frequency-sweep design for visual discrimination of complex images (face exemplars). Sixteen participants viewed sequences of ascending presentation durations, from 25 to 333 ms (40–3 Hz stimulation rate), while their electroencephalogram (EEG) was recorded. Throughout each sequence, the same unfamiliar face picture was repeated with variable size and luminance changes, while different unfamiliar facial identities appeared every 1 s (1 Hz). A neural face individuation response, tagged at 1 Hz and its unique harmonics, emerged over the occipito-temporal cortex at a 50 ms stimulus duration (25–100 ms across individuals), with an optimal response reached at a 170 ms stimulus duration. In a subsequent experiment, identity changes appeared non-periodically within fixed-frequency sequences while the same participants performed an explicit face individuation task. The behavioral face individuation response also emerged at a 50 ms presentation time, and behavioral accuracy correlated with individual participants' neural response amplitude in a weighted middle range of stimulus durations (50–125 ms). Moreover, the latency of the neural response, peaking between 180 and 200 ms, correlated strongly with individuals' behavioral accuracy in this middle duration range, as measured independently. These observations point to the minimal (50 ms) and optimal (170 ms) stimulus durations for human face individuation and provide novel evidence that inter-individual differences in the magnitude and latency of early, high-level neural responses are predictive of behavioral differences in performance of this function.

Retter, Talia, in Current Biology (2021), 31(3), 122-124

Retter, Talia, in Journal of Cognitive Neuroscience (2021)

In the approach of frequency tagging, stimuli that are presented periodically generate periodic responses of the brain. Following a transformation into the frequency domain, the brain's response is often evident at the frequency of stimulation, F, and its higher harmonics (2F, 3F, etc.). This approach is increasingly used in neuroscience, as it affords objective measures with which to characterize brain function. However, whether these specific harmonic frequency responses should be combined for analysis, and if so, how, remains an outstanding issue. In most studies, higher harmonic responses have not been described or have been described only individually; in other studies, harmonics have been combined with various approaches, e.g., averaging and root-mean-square summation. A rationale for these approaches in the context of frequency-based analysis principles, and an understanding of how they relate to the brain's response amplitudes in the time domain, has been missing. Here, with these elements addressed, the summation of (baseline-corrected) harmonic amplitudes is recommended.
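As a concrete illustration of the recommended measure, the hedged sketch below sums baseline-corrected amplitudes across the first few harmonics of a tagged frequency. The abstract does not specify the baseline-correction convention or the number of harmonics to include; subtracting the mean of surrounding frequency bins and summing four harmonics are assumptions made here for the example.

```python
# Sketch of a harmonic-summation measure: baseline-correct the amplitude at each
# harmonic of the tagged frequency F (here by subtracting the mean of surrounding
# frequency bins), then sum the corrected amplitudes. Bin choices are illustrative.
import numpy as np

def harmonic_sum(eeg, fs, f_tag, n_harmonics=4, n_neighbors=10, skip=1):
    """Sum of baseline-corrected amplitudes at f_tag, 2*f_tag, ..., n_harmonics*f_tag."""
    amps = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    total = 0.0
    for h in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - h * f_tag))
        # neighboring bins on each side, skipping the bins immediately adjacent
        neighbors = np.r_[idx - skip - n_neighbors: idx - skip,
                          idx + skip + 1: idx + skip + 1 + n_neighbors]
        baseline = amps[neighbors].mean()
        total += amps[idx] - baseline       # baseline-corrected amplitude at this harmonic
    return total

# Toy usage: a 1 Hz tagged response (with a 2 Hz harmonic) embedded in noise.
fs, dur = 250, 60
t = np.arange(fs * dur) / fs
eeg = (0.5 * np.sin(2 * np.pi * 1 * t) + 0.2 * np.sin(2 * np.pi * 2 * t)
       + np.random.default_rng(0).normal(0, 1, t.size))
print(f"summed harmonic response: {harmonic_sum(eeg, fs, f_tag=1.0):.3f} (arbitrary units)")
```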
Retter, Talia, in Journal of Vision (2020), 20(3), 7

Color constancy involves disambiguating the spectral characteristics of lights and surfaces, for example to distinguish red in white light from white in red light. Solving this problem appears especially challenging for bluish tints, which may be attributed more often to shading, and this bias may underlie the individual differences in whether people described the widely publicized image of #thedress as blue-black or white-gold. To probe these higher-level color inferences, we examined neural correlates of the blue bias, using frequency tagging and high-density electroencephalography to monitor responses to 3 Hz alternations between different color versions of #thedress. Specifically, we compared relative neural responses to the original "blue" dress image alternated with the complementary "yellow" image (formed by inverting the chromatic contrast of each pixel). This image pair produced a large modulation of the electroencephalography amplitude at the alternation frequency, consistent with a perceived contrast difference between the blue and yellow images. Furthermore, decoding topographical differences in the blue-yellow asymmetries over occipitoparietal channels predicted blue-black and white-gold observers with over 80% accuracy. The blue-yellow asymmetry was stronger than for a "red" versus "green" pair matched for the same component differences in L versus M or S versus LM chromatic contrast as the blue-yellow pair, and thus cannot be accounted for by asymmetries within either precortical cardinal mechanism. Instead, the results may point to neural correlates of a higher-level perceptual representation of surface colors.

Retter, Talia, in European Journal of Neuroscience (2020)

To investigate face individuation (FI), a critical brain function in the human species, an oddball fast periodic visual stimulation (FPVS) approach was recently introduced (Liu-Shuang et al., Neuropsychologia, 2014, 52, 57). In this paradigm, an image of an unfamiliar "base" facial identity is repeated at a rapid rate F (e.g., 6 Hz) and different unfamiliar "oddball" facial identities are inserted every nth item, at an F/n rate (e.g., every 5th item, 1.2 Hz). This stimulation elicits FI responses at F/n and its harmonics (2F/n, 3F/n, etc.), reflecting neural discrimination between oddball and base facial identities, which is quantified in the frequency domain of the electroencephalogram (EEG). This paradigm, used in 20 published studies, demonstrates substantial advantages for measuring FI in terms of validity, objectivity, reliability, and sensitivity. Human intracerebral recordings suggest that this FI response originates from neural populations in the lateral inferior occipital and fusiform gyri, with a right-hemispheric dominance consistent with the localization of brain lesions that specifically affect facial identity recognition (prosopagnosia). Here, we summarize the contributions of the oddball FPVS framework toward understanding FI, including its (a)typical development, with early studies supporting the application of this technique to clinical testing (e.g., autism spectrum disorder). This review also includes an in-depth analysis of the paradigm's methodology, with guidelines for designing future studies. A large-scale group analysis compiling data across 130 observers provides insights into the properties of the oddball FPVS FI response. Overall, we recommend the oddball FPVS paradigm as an alternative to behavioral or traditional event-related potential EEG measures of face individuation.
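The structure of the oddball FPVS design is simple enough to sketch directly. The example below uses the parameter values given above (base rate F = 6 Hz, an oddball every 5th item, so F/n = 1.2 Hz); the sequence duration, the 20 Hz upper limit, and the placeholder stimulus labels are assumptions for illustration. It builds the stimulation order and lists the frequencies at which the face-individuation response would be quantified (harmonics of F/n, excluding those coinciding with harmonics of F).

```python
# Sketch of the oddball FPVS design described above (labels are placeholders, not images).
import numpy as np

base_rate = 6.0                  # Hz: every stimulus onset
n = 5                            # every 5th stimulus is an oddball identity
oddball_rate = base_rate / n     # 1.2 Hz
duration = 60                    # seconds of stimulation, assumed for the example

# Build the stimulation sequence: 'B' = base identity, 'O1', 'O2', ... = oddball identities.
n_stim = int(base_rate * duration)
sequence, oddball_count = [], 0
for i in range(n_stim):
    if (i + 1) % n == 0:         # positions 5, 10, 15, ... are oddballs
        oddball_count += 1
        sequence.append(f"O{oddball_count}")
    else:
        sequence.append("B")
print(sequence[:12])             # ['B', 'B', 'B', 'B', 'O1', 'B', 'B', 'B', 'B', 'O2', 'B', 'B']

# Frequencies at which the face-individuation response is quantified: harmonics of the
# oddball rate, excluding those that coincide with harmonics of the base rate.
max_freq = 20.0                  # upper limit, assumed
oddball_harmonics = np.arange(oddball_rate, max_freq, oddball_rate)
base_harmonics = np.arange(base_rate, max_freq, base_rate)
fi_freqs = [f for f in oddball_harmonics if not np.any(np.isclose(f, base_harmonics))]
print([round(f, 1) for f in fi_freqs])   # 1.2, 2.4, 3.6, 4.8, 7.2, ... (6, 12, 18 Hz excluded)
```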
Retter, Talia, in The Cognitive Neurosciences (2020)

Retter, Talia, in NeuroImage (2020)

Visual categorization is integral to our interaction with the natural environment. In this process, similar selective responses are produced to a class of variable visual inputs. Whether categorization is supported by partial (graded) or absolute (all-or-none) neural responses in high-level human brain regions is largely unknown. We address this issue with a novel frequency-sweep paradigm probing the evolution of face-categorization responses between the minimal and optimal stimulus presentation times. In a first experiment, natural images of variable non-face objects were progressively swept from 120 to 3 Hz (8.33–333 ms durations) in rapid serial visual presentation sequences. Widely variable face exemplars appeared every 1 s, enabling an implicit frequency-tagged face-categorization electroencephalographic (EEG) response at 1 Hz. Face-categorization activity emerged with stimulus durations as brief as 17 ms (17–83 ms across individual participants) but was significant with 33 ms durations at the group level. The face-categorization response amplitude increased until an 83 ms stimulus duration (12 Hz), implying graded categorization responses. In a second EEG experiment, faces appeared non-periodically throughout such sequences at fixed presentation rates, while participants explicitly categorized faces. A strong correlation between response amplitude and behavioral accuracy across presentation rates suggested that dilution from missed categorizations, rather than a decreased response to each face stimulus, accounted for the graded categorization responses found in Experiment 1. This was supported by (1) the absence of neural responses to faces that participants failed to categorize explicitly in Experiment 2, and (2) equivalent amplitudes and spatio-temporal signatures of neural responses to behaviorally categorized faces across presentation rates. Overall, these observations provide original evidence that high-level visual categorization of faces, starting at about 100 ms following stimulus onset in the human brain, is variable across observers tested under tight temporal constraints but occurs in an all-or-none fashion.
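The dilution account lends itself to a toy simulation. The sketch below is not the authors' model: the response waveform, sampling rate, and trial count are invented, and noise is omitted. It only illustrates the arithmetic behind the argument, namely that if every categorized face evokes the same all-or-none response and missed faces evoke none, the amplitude of the 1 Hz frequency-tagged response scales with the proportion of faces that were categorized.

```python
# Toy simulation of the "dilution" account (assumptions mine): all-or-none responses
# to a face every 1 s, with a variable proportion of faces eliciting a response.
import numpy as np

rng = np.random.default_rng(1)
fs, n_faces = 200, 300            # sampling rate (Hz) and number of 1 s face cycles

def tagged_amplitude(p_categorized):
    """1 Hz amplitude when each categorized face evokes an identical response and misses evoke none."""
    kernel = np.exp(-0.5 * ((np.arange(fs) / fs - 0.17) / 0.03) ** 2)   # ~170 ms bump, arbitrary shape
    trials = [kernel if rng.random() < p_categorized else np.zeros(fs)
              for _ in range(n_faces)]
    signal = np.concatenate(trials)
    amps = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return amps[np.argmin(np.abs(freqs - 1.0))]    # amplitude at the 1 Hz tag

for p in (0.25, 0.5, 0.75, 1.0):
    print(f"proportion categorized {p:.2f} -> 1 Hz amplitude {tagged_amplitude(p):.4f}")
```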