References of "Journal of Vision"
Dissociated face- and word-selective intracerebral responses in the human ventral occipito-temporal cortex
Hagen, Simen; Lochy, Aliette UL; Jacques, Corentin et al

in Journal of Vision (2020, October), 20(11)

The extent to which faces and written words share neural circuitry in the human brain is actively debated. We provide an original contribution to this debate by comparing face-selective and word-selective responses in a large group of patients (N=37) implanted with intracerebral electrodes in the ventral occipito-temporal cortex (VOTC). Both face-selective (i.e., significantly different responses to faces vs. nonface objects) and word-selective (i.e., significantly different responses to words vs. pseudofonts) neural activity is isolated through frequency tagging (Jonas et al., 2016; Lochy et al., 2018, respectively). Critically, this approach allows category-selective neural responses to be disentangled from general visual responses. Overall, we find that 69.26% of significant contacts show either face- or word-selectivity, with the expected right and left hemispheric dominance, respectively (Fig. 1A,B). Moreover, the center of mass for word contacts is more lateral than for face contacts, with no difference along the postero-anterior axis (Fig. 2A). Spatial dissociations are also found within core regions of face and word processing: a medio-lateral dissociation in the fusiform gyrus (FG) and surrounding sulci (FG+sulci; Fig. 2B) and a postero-anterior dissociation in the inferior occipital gyrus (IOG; Fig. 2C). Despite these spatial dissociations, most of the overlap in category-selective responses is found in the FG+sulci and the IOG (Fig. 1C). Critically, at the overlap contacts, whether across the whole brain or specifically in the FG+sulci, between-category (word-face) selective amplitudes showed no-to-weak correlations, despite strong correlations for within-category (face-face, word-word) selective amplitudes (Fig. 3A) and a strong correlation between the non-selective general amplitudes to words and faces. Moreover, substantial overlap and no-to-weak correlations were also observed between faces and a control category (houses) known to be functionally dissociated from faces. Overall, we conclude that category-selectivity for faces and words is largely dissociated in the human VOTC, with the limited spatial overlap likely due to the distant recording of dissociated populations of neurons rather than to shared category-selective representations.
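For readers unfamiliar with the frequency-tagging readout, the core computation is a Fourier analysis of the recorded signal in which selectivity is quantified as the baseline-corrected amplitude at the category presentation frequency and its harmonics. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the sampling rate, stimulation frequency, number of harmonics, and noise-correction window are assumed values.

```python
import numpy as np

# Minimal frequency-tagging sketch (illustrative parameters, not the paper's pipeline).
# The general visual response at the base stimulation rate would be quantified the same way.
fs = 512.0          # sampling rate in Hz (assumed)
oddball_freq = 1.2  # category (face or word) presentation rate in Hz (assumed)

def tagged_amplitude(signal, fs, target_freq, n_harmonics=4, n_neighbors=10):
    """Baseline-corrected Fourier amplitude summed over a target frequency and its harmonics."""
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    total = 0.0
    for h in range(1, n_harmonics + 1):
        idx = int(np.argmin(np.abs(freqs - h * target_freq)))
        # Local noise estimate from surrounding bins (excluding the target bin itself).
        neighbors = np.r_[amp[idx - n_neighbors:idx], amp[idx + 1:idx + 1 + n_neighbors]]
        total += amp[idx] - neighbors.mean()
    return total

# Example: a synthetic 20-s recording with a small periodic response at the oddball rate.
t = np.arange(0, 20, 1.0 / fs)
signal = 0.2 * np.sin(2 * np.pi * oddball_freq * t) + np.random.randn(t.size)
print(tagged_amplitude(signal, fs, oddball_freq))
```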

Neural correlates of perceptual color inferences as revealed by #thedress
Retter, Talia UL; Gwinn, O.S.; O'Neil, S.F. et al

in Journal of Vision (2020), 20(3), 7

Color constancy involves disambiguating the spectral characteristics of lights and surfaces, for example to distinguish red in white light from white in red light. Solving this problem appears especially challenging for bluish tints, which may be attributed more often to shading, and this bias may underlie the individual differences in whether people described the widely publicized image of #thedress as blue-black or white-gold. To probe these higher-level color inferences, we examined neural correlates of the blue bias, using frequency-tagging and high-density electroencephalography to monitor responses to 3-Hz alternations between different color versions of #thedress. Specifically, we compared relative neural responses to the original “blue” dress image alternated with the complementary “yellow” image (formed by inverting the chromatic contrast of each pixel). This image pair produced a large modulation of the electroencephalography amplitude at the alternation frequency, consistent with a perceived contrast difference between the blue and yellow images. Furthermore, decoding topographical differences in the blue-yellow asymmetries over occipitoparietal channels predicted blue-black and white-gold observers with over 80% accuracy. The blue-yellow asymmetry was stronger than for a “red” versus “green” pair matched for the same component differences in L versus M or S versus LM chromatic contrast as the blue-yellow pair and thus cannot be accounted for by asymmetries within either precortical cardinal mechanism. Instead, the results may point to neural correlates of a higher-level perceptual representation of surface colors.
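The complementary "yellow" image was formed by inverting the chromatic contrast of each pixel. A minimal way to sketch that operation is to mirror the chromatic channels of an opponent color representation about the neutral point while leaving lightness unchanged. The use of CIE L*a*b* and the function name below are illustrative assumptions; the abstract describes contrasts in cone-opponent (L vs. M, S vs. LM) terms, so this only approximates the idea, not the exact stimulus construction.

```python
import numpy as np
from skimage import color

def invert_chromatic_contrast(rgb):
    """Mirror each pixel's chromatic contrast about the neutral point, keeping lightness.

    Illustrative sketch in CIE L*a*b*; the published stimuli are described in
    cone-opponent terms, so this approximates the idea rather than the method.
    """
    lab = color.rgb2lab(rgb)          # expects float RGB in [0, 1]
    lab[..., 1] *= -1.0               # flip a* (red-green axis) about zero
    lab[..., 2] *= -1.0               # flip b* (blue-yellow axis) about zero
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)

# Hypothetical usage:
# from skimage import io
# blue_dress = io.imread("dress.png")[..., :3] / 255.0
# yellow_dress = invert_chromatic_contrast(blue_dress)
```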

Face perception is tuned to horizontal orientation in the N170 time window
Jacques, Corentin; Schiltz, Christine UL; Goffaux, Valerie

in Journal of Vision (2014), 14(2), 1-18

The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured behavioral performance and scalp event-related potentials (ERPs) while participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
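Phase randomization restricted to a narrow orientation band can be sketched in the Fourier domain: scramble the phases of the frequency components whose orientation falls inside the band while leaving the amplitude spectrum untouched. The bandwidth, the grayscale-image assumption, and the shortcut of taking the real part of the inverse transform are illustrative choices, not the study's exact stimulus-generation parameters.

```python
import numpy as np

def randomize_orientation_band(img, center_deg, bandwidth_deg=20.0, rng=None):
    """Randomize Fourier phases within an orientation band (sketch, 2-D grayscale image).

    center_deg is the band's orientation in the Fourier plane; note that horizontally
    oriented image structure is carried by components along the vertical spectral axis
    (center_deg = 90 in this convention).
    """
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0                  # component orientation
    dist = np.minimum(np.abs(theta - center_deg), 180.0 - np.abs(theta - center_deg))
    band = dist <= bandwidth_deg / 2.0
    band[0, 0] = False                                               # keep the DC component
    phase = np.angle(f)
    phase[band] = rng.uniform(-np.pi, np.pi, size=int(band.sum()))  # scramble phases in band
    scrambled = np.abs(f) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(scrambled))  # real part: shortcut around conjugate symmetry
```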

The horizontal tuning of face perception relies on the processing of intermediate and high spatial frequencies
Goffaux, Valerie; van Zon, Jaap; Schiltz, Christine UL

in Journal of Vision (2011), 11(10), 1-9

It was recently shown that expert face perception relies on the extraction of horizontally oriented visual cues. Picture-plane inversion was found to eliminate this horizontal tuning, suggesting that it contributes to the specificity of face processing. The present experiments sought to determine the spatial frequency (SF) scales supporting the horizontal tuning of face perception. Participants were instructed to match upright and inverted faces that were filtered both in the frequency and orientation domains. Faces in a pair contained horizontal or vertical ranges of information in low, middle, or high SF (LSF, MSF, or HSF). Our findings confirm that upright (but not inverted) face perception is tuned to horizontal orientation. Horizontal tuning was most robust in the MSF range, next strongest in the HSF range, and absent in the LSF range. Moreover, face inversion selectively disrupted the ability to process horizontal information in the MSF and HSF ranges. This finding was replicated even when task difficulty was equated across orientation and SF at upright orientation. Our findings suggest that upright face perception is tuned to horizontally oriented face information carried by intermediate and high SF bands. They further indicate that inversion alters the sampling of face information both in the orientation and SF domains.
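Filtering a face both in spatial frequency and in orientation amounts to keeping only the Fourier components inside the product of a radial (SF) band and an angular (orientation) band. The hard cutoffs, the example band limits in the usage comment, and the grayscale-image assumption below are illustrative, not the parameters used in the experiments.

```python
import numpy as np

def sf_orientation_filter(img, sf_band, ori_center_deg, ori_bw_deg=30.0):
    """Keep one spatial-frequency band and one orientation band (sketch, 2-D grayscale image).

    sf_band: (low, high) radial frequency limits in cycles per image.
    ori_center_deg: center of the orientation band in the Fourier plane
    (horizontal image structure corresponds to 90 degrees in this convention).
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h        # cycles per image, vertical axis
    fx = np.fft.fftfreq(w)[None, :] * w        # cycles per image, horizontal axis
    radius = np.hypot(fx, fy)
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    dist = np.minimum(np.abs(theta - ori_center_deg), 180.0 - np.abs(theta - ori_center_deg))
    mask = (radius >= sf_band[0]) & (radius <= sf_band[1]) & (dist <= ori_bw_deg / 2.0)
    mask[0, 0] = True                           # keep mean luminance
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# Illustrative band limits only (cycles/image): LSF < 8, MSF 8-32, HSF > 32
# msf_horizontal = sf_orientation_filter(face, (8, 32), ori_center_deg=90.0)
```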

Cerebral lateralization of the face-cortical network in left-handers: only the FFA does not get it right
Bukowski, Henryk; Rossion, Bruno; Schiltz, Christine UL et al

in Journal of Vision (2010), 10(7)

Face processing is a function that is highly lateralized in humans, as supported by original evidence from brain lesion studies (Hécaen & Angelergues, 1962), followed by studies using divided visual field presentations (Heller & Levy, 1981), neuroimaging (Sergent et al., 1992) and event-related potentials (Bentin et al., 1996). Studies in non-human primates (Perrett et al., 1988; Zangenehpour & Chaudhuri, 2005) and other mammals (Peirce & Kendrick, 2001) support the right lateralization of the function, which may be related to a dominance of the right hemisphere in global visual processing. However, in humans there is evidence that manual preference may shift or qualify the pattern of lateralization for faces in the visual cortex: face recognition impairments following unilateral left hemisphere brain damage have been found only in a few left-handers (e.g., Mattson et al., 1992; Barton, 2009). Here we measured the pattern of lateralization in the entire cortical face network in right- and left-handers (12 subjects in each group) using a well-balanced face-localizer block paradigm in fMRI (faces, cars, and their phase-scrambled versions). While the FFA was strongly right lateralized in right-handers, as described previously, it was equally strong in both hemispheres in left-handers. In contrast, other areas of the face-sensitive network (posterior superior temporal sulcus, pSTS; occipital face area, OFA; anterior infero-temporal cortex, AIT; amygdala) remained identically right lateralized in both left- and right-handers. Accordingly, our results strongly suggest that the face-sensitive network is equally lateralized in left- and right-handers, and thus that face processing is not influenced by handedness. However, the FFA is an important exception: it is right-lateralized in right-handers, but its recruitment is more balanced between hemispheres in left-handers. These observations carry important theoretical and clinical implications for the aetiology of brain lateralization as a function of left- or right-handedness and for the neuropsychological assessment of prosopagnosic patients.
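Hemispheric lateralization of a region such as the FFA is commonly summarized by a lateralization index computed from homologous left- and right-hemisphere responses; with right dominance coded positive, LI = (R − L) / (R + L). The abstract does not state which index or inputs the authors used, so the convention, function name, and example values below are assumptions.

```python
import numpy as np

def lateralization_index(right, left):
    """Lateralization index with right-hemisphere dominance coded positive (assumed convention)."""
    right, left = np.asarray(right, float), np.asarray(left, float)
    return (right - left) / (right + left)

# Hypothetical per-subject face-selective responses (arbitrary units) for one ROI:
right_ffa = np.array([1.9, 2.1, 1.7, 2.4])
left_ffa = np.array([1.2, 1.0, 1.5, 1.1])
print(lateralization_index(right_ffa, left_ffa).mean())  # > 0 indicates right lateralization
```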

Characterizing the face processing network in the human brain: a large-scale fMRI localizer study
Dricot, Laurence; Hanseeuw, Bernard; Schiltz, Christine UL et al

in Journal of Vision (2010), 10(7)

A whole network of brain areas showing larger responses to faces than to other visual stimuli has been identified in the human brain using fMRI (Sergent, 1992; Haxby, 2000). Most studies identify only a subset of this network, by comparing the presentation of face pictures either to all kinds of object categories mixed together (e.g., Kanwisher, 1997) or to scrambled faces (e.g., Ishaï, 2005), using different statistical thresholds. Given these differences in approach, the (sub)cortical face network can be artificially overextended (Downing & Wiggett, 2008) or minimized across studies, both at the local (size of regions) and global (number of regions) levels. Here we analyzed a large set of right-handed subjects (N=40), tested with a new whole-brain localizer controlling for both high-level and low-level differences between faces and objects. Pictures of faces, cars and their phase-scrambled counterparts were used in a 2x2 block design. Group-level (random-effects) and single-subject (ROI) analyses were performed. A conjunction of two contrasts (faces vs. scrambled faces, F-SF, and faces vs. cars, F-C) identified six regions: FFA, OFA, amygdala, pSTS, AIT and thalamus. All these regions but the amygdala showed clear right lateralization. Interestingly, the FFA showed the least face-selective response of the cortical face network: it presented a significantly larger response to pictures of cars than to scrambled cars [t=9.3, much more so than the amygdala (t=2.6), AIT (t=2.1) and other regions (NS)], and was also sensitive to low-level properties of faces [SF - SO; t=5.1; NS in other areas]. These observations suggest that, contrary to other areas of the network, including the OFA, the FFA may contain populations of neurons that are specific to faces intermixed with populations responding more generally to object categories. More generally, this study helps to characterize the extent and specificity of the network of (sub)cortical areas particularly involved in face processing.
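The conjunction of the two contrasts can be sketched with the minimum-statistic logic: a voxel is labeled face-selective only if both contrasts (faces vs. scrambled faces and faces vs. cars) exceed the threshold, which is equivalent to thresholding the smaller of the two t values. The threshold value and array names below are assumptions, not the authors' exact statistical procedure.

```python
import numpy as np

def conjunction_mask(t_map_a, t_map_b, t_threshold):
    """Minimum-statistic conjunction: a voxel passes only if BOTH contrasts exceed threshold."""
    return np.minimum(t_map_a, t_map_b) > t_threshold

# Hypothetical voxel-wise t maps (same shape) and threshold:
# t_faces_vs_scrambled = ...   # faces minus scrambled faces (F-SF)
# t_faces_vs_cars = ...        # faces minus cars (F-C)
# face_selective = conjunction_mask(t_faces_vs_scrambled, t_faces_vs_cars, t_threshold=3.1)
```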

Attentional shifts due to irrelevant numerical cues: Behavioral investigation of a lateralized target discrimination paradigm
Schiltz, Christine UL; Dormal, Giulia; Martin, Romain UL et al

in Journal of Vision (2010), 10(7)

Behavioural evidence indicates the existence of a link between numerical representations and visuo-spatial processes. A striking demonstration of this link was provided by Fischer and colleagues (2003), who reported that participants detect a target more rapidly in the left hemifield if it is preceded by a small number (e.g., 2 or 3) and more rapidly in the right hemifield if it is preceded by a large number (e.g., 8 or 9). This is strong evidence that numbers orient visuo-spatial attention to different visual hemifields (e.g., left and right) depending on their magnitude (e.g., small and large, respectively). Here, we sought to replicate number-related attentional shifts using a discrimination task. The participants (n=16) were presented with a single digit (1 or 2 vs. 8 or 9) at the centre of the screen for 400 ms. After 500, 1000 or 2000 ms, a target was briefly flashed in either the right or left hemifield and participants had to report its colour (red or green). They were told that the central digit was irrelevant to the task. We hypothesized that the attentional shift induced by the centrally presented numbers should produce congruency effects in the target discrimination task, such that small (or large) numbers would facilitate the processing of left (or right) targets. Our results confirmed this prediction, but only for the shortest digit-target interval (500 ms), as supported by a significant interaction between number magnitude (small/large) and target hemifield (left/right). The link between numerical and spatial representations further predicts a positive relation between number magnitude and the difference in RT between left and right targets. Regression slopes were computed individually, and a positive slope was obtained for the short number-target interval. These findings indicate that the attentional shifts induced by irrelevant numerical material are independent of the exact nature of target processing (discrimination vs. detection).
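The individual slope analysis can be sketched as a per-participant linear regression of the left-minus-right RT difference on digit magnitude, followed by a test of the slope estimates against zero across participants. The ordinary least-squares fit, the data layout, and the example values below are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def attention_shift_slope(digits, rt_left, rt_right):
    """Slope of (RT to left targets - RT to right targets) regressed on digit magnitude.

    Computed per participant; a positive slope means larger digits increasingly
    favour right-hemifield targets.
    """
    drt = np.asarray(rt_left, float) - np.asarray(rt_right, float)
    return stats.linregress(np.asarray(digits, float), drt).slope

# Hypothetical mean RTs (ms) per digit for one participant at the 500-ms interval:
digits = [1, 2, 8, 9]
rt_left = [402, 405, 421, 425]
rt_right = [418, 416, 404, 401]
print(attention_shift_slope(digits, rt_left, rt_right))
# Across participants, the individual slopes would then be tested against zero
# (e.g., with a one-sample t test).
```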

Holistic perception of individual faces in the right middle fusiform gyrus as evidenced by the composite face illusion
Schiltz, Christine UL; Dricot, Laurence; Goebel, Rainer et al

in Journal of Vision (2010), 10(2), 1-16

The perception of a facial feature (e.g., the eyes) is influenced by the position and identity of other features (e.g., the mouth) supporting an integrated, or holistic, representation of individual faces in the human brain. Here we used an event-related adaptation paradigm in functional magnetic resonance imaging (fMRI) to clarify the regions representing faces holistically across the whole brain. In each trial, observers performed the same/different task on top halves (aligned or misaligned) of two faces presented sequentially. For each face pair, the identity of top and bottom parts could be both identical, both different, or different only for the bottom half. The latter manipulation resulted in a composite face illusion, i.e., the erroneous perception of identical top parts as being different, only for aligned faces. Release from adaptation in this condition was found in two sub-areas of the right middle fusiform gyrus responding preferentially to faces, including the “fusiform face area” (“FFA”). There were no significant effects in homologous regions of the left hemisphere or in the inferior occipital cortex. Altogether, these observations indicate that face-sensitive populations of neurons in the right middle fusiform gyrus are optimally tuned to represent individual exemplars of faces holistically.

Temporal order judgment and simple reaction times: evidence for a common processing system
Cardoso-Leite, Pedro UL; Gorea, Andrei; Mamassian, Pascal

in Journal of Vision (2007), 7(6), 11

We present a simple reaction time (RT) versus temporal order judgment (TOJ) experiment as a test of the perception-action relationship. The experiment improves on previous ones in that it assesses, for the first time, RT and TOJ on a trial-by-trial basis, hence allowing the study of the two behaviors within the same task context and, most importantly, the association of RTs with "correct" and "incorrect" TOJs. RTs to pairs of stimuli are significantly different depending on the associated TOJs, an indication that perceptual and motor decisions are based on the same internal response. Simulations with the simplest one-system model (J. Gibbon & R. Rutschmann, 1969), using the means and standard deviations of the RTs to stimuli presented in isolation, yield excellent fits of the mean RT to these increments when presented in sequence and moderately good fits of the RTs when classified according to the TOJ categories. The present observation that the point of subjective simultaneity for stimulus pairs is systematically smaller than the difference in RT to each of the two increments in the same pairs argues, however, in favor of distinct decision criteria for perception and action, with the former below the latter. In that case, standard one-system race models require that the internal noise associated with the TOJ be smaller than that associated with the RT to the same stimulus pair. The present data show the reverse. In short, data and simulations are compatible with "one-system-two-decision" models of perceptual and motor behaviors, while prompting further testing and modeling to account for the apparent discrepancy in the ordering of the two decisions.
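The one-system logic can be sketched with a small simulation: each stimulus in a pair evokes an internal response with a Gaussian latency, the speeded response is triggered by the first arrival plus a motor delay, and the TOJ simply reports which response arrived first. The parameter values and the Gaussian-latency assumption below are illustrative, not the fitted values of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(soa_ms, mu=(250.0, 250.0), sd=(40.0, 40.0), motor_ms=100.0, n=10_000):
    """Simulate RT and TOJ for one stimulus pair under a minimal one-system model (sketch).

    soa_ms: onset asynchrony of stimulus 2 relative to stimulus 1 (positive = 2 later).
    The same internal latencies drive both the speeded response (first arrival + motor
    delay) and the reported temporal order.
    """
    t1 = rng.normal(mu[0], sd[0], n)
    t2 = soa_ms + rng.normal(mu[1], sd[1], n)
    rt = np.minimum(t1, t2) + motor_ms    # simple RT to the pair
    first_is_1 = t1 < t2                  # temporal order judgment
    return rt, first_is_1

rt, toj = simulate_pair(soa_ms=20.0)
print("mean RT:", rt.mean())
print("P('stimulus 1 first'):", toj.mean())
print("mean RT | '1 first':", rt[toj].mean(), "| '2 first':", rt[~toj].mean())
```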
