Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1–20 of 415.

Dissociation between overt and unconscious face processing in fusiform face area.

  • Christoph Lehmann et al.
  • NeuroImage
  • 2004

The precise role of the fusiform face area (FFA) in face processing remains controversial. In this study, we investigated to what degree FFA activation reflects additional functions beyond face perception. Seven volunteers underwent rapid event-related functional magnetic resonance imaging while they performed a face-encoding and a face-recognition task. During face encoding, activity in the FFA for individual faces predicted whether the individual face was subsequently remembered or forgotten. However, during face recognition, no difference in FFA activity between consciously remembered and forgotten faces was observed, but the activity of FFA differentiated whether a face had been seen previously or not. This demonstrated a dissociation between overt recognition and unconscious discrimination of stimuli, suggesting that physiological processes of face recognition can take place even if not all of its operations are made available to consciousness.


Resting-state fMRI reveals functional connectivity between face-selective perirhinal cortex and the fusiform face area related to face inversion.

  • Edward B O'Neil et al.
  • NeuroImage
  • 2014

Studies examining the neural correlates of face perception and recognition in humans have revealed multiple brain regions that appear to play a specialized role in face processing. These include an anterior portion of perirhinal cortex (PrC) that appears to be homologous to the face-selective 'anterior face patch' recently reported in non-human primates. Electrical stimulation studies in the macaque indicate that the anterior face patch is strongly connected with other face-selective patches of cortex, even in the absence of face stimuli. The intrinsic functional connectivity of face-selective PrC and other regions of the face-processing network in humans is currently less well understood. Here, we examined resting-state fMRI connectivity across five face-selective regions in the right hemisphere that were identified with separate functional localizer scans: the PrC, amygdala (Amg), superior temporal sulcus, fusiform face area (FFA), and occipital face area. A partial correlation technique, controlling for fluctuations in occipitotemporal cortex that were not face specific, revealed connectivity between the PrC and the FFA, as well as the Amg. When examining the 'unique' connectivity of PrC within this face-processing network, we found that the connectivity between the PrC and the FFA, as well as that between the PrC and the Amg, persisted even after controlling for potential mediating effects of other face-selective regions. Lastly, we examined the behavioral relevance of PrC connectivity by examining inter-individual differences in resting-state fluctuations in relation to differences in behavioral performance on a forced-choice recognition memory task that involved judgments on upright and inverted faces. This analysis revealed a significant correlation between the increased accuracy for upright faces (i.e., the face inversion effect) and the strength of connectivity between the PrC and the FFA. Together, these findings point to a high degree of functional integration of face-selective aspects of PrC in the face-processing network, with notable behavioral relevance.
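The partial-correlation technique mentioned above can be sketched in a few lines: regress the confound time course out of both region-of-interest signals and correlate the residuals. The data and variable names below (prc, ffa, a shared confound c) are synthetic stand-ins, not the study's data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out the signals in z.

    z: 2-D array (n_timepoints, n_confounds), e.g. a non-face-specific
    occipitotemporal reference time course."""
    z = np.column_stack([np.ones(len(x)), z])   # add an intercept column
    beta_x, *_ = np.linalg.lstsq(z, x, rcond=None)
    beta_y, *_ = np.linalg.lstsq(z, y, rcond=None)
    rx, ry = x - z @ beta_x, y - z @ beta_y     # residual time series
    return np.corrcoef(rx, ry)[0, 1]

# A shared confound drives both "ROIs"; their unique coupling is near zero.
rng = np.random.default_rng(1)
n = 500
c = rng.normal(size=n)                 # shared, non-face-specific signal
prc = c + 0.2 * rng.normal(size=n)     # toy "PrC" time course
ffa = c + 0.2 * rng.normal(size=n)     # toy "FFA" time course

full = np.corrcoef(prc, ffa)[0, 1]     # inflated by the shared signal
part = partial_corr(prc, ffa, c[:, None])  # shared signal removed
```

Here the full correlation is high purely because of the shared confound, while the partial correlation falls to roughly zero, which is the logic behind controlling for non-face-specific occipitotemporal fluctuations.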


Temporal frequency tuning of cortical face-sensitive areas for individual face perception.

  • Francesco Gentile et al.
  • NeuroImage
  • 2014

In a highly dynamic visual environment the human brain needs to rapidly differentiate complex visual patterns, such as faces. Here, we defined the temporal frequency tuning of cortical face-sensitive areas for face discrimination. Six observers were tested with functional magnetic resonance imaging (fMRI) when the same or different faces were presented in blocks at 11 frequency rates (ranging from 1 to 12 Hz). We observed a larger fMRI response for different than same faces - the repetition suppression/adaptation effect - across all stimulation frequency rates. Most importantly, the magnitude of the repetition suppression effect showed a typical Gaussian-shaped tuning function, peaking on average at 6 Hz for all face-sensitive areas of the ventral occipito-temporal cortex, including the fusiform and occipital "face areas" (FFA and OFA), as well as the superior temporal sulcus. This effect was due both to a maximal response to different faces in a range of 3 to 6 Hz and to a sharp drop of the blood oxygen level dependent (BOLD) signal from 6 Hz onward when the same face was repeated during a block. These observations complement recent scalp EEG observations (Alonso-Prieto et al., 2013), indicating that the cortical face network can discriminate each individual face when these successive faces are presented every 160-170 ms. They also suggest that a relatively fast 6 Hz rate may be needed to isolate the contribution of high-level face perception processes during behavioral discrimination tasks. Finally, these findings carry important practical implications, allowing investigators to optimize the stimulation frequency rates for observing the largest repetition suppression effects to faces and other visual forms in the occipito-temporal cortex.
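The Gaussian-shaped tuning function described above amounts to fitting a Gaussian to repetition-suppression magnitude as a function of stimulation rate. A minimal sketch with made-up magnitudes (not the study's measurements) that peak near 6 Hz:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, mu, sigma, baseline):
    """Gaussian tuning curve over stimulation frequency f."""
    return baseline + amp * np.exp(-(f - mu) ** 2 / (2 * sigma ** 2))

# Hypothetical repetition-suppression magnitudes (a.u.) at the 11
# stimulation rates used in the study:
freqs = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12], dtype=float)
rs = np.array([0.1, 0.2, 0.35, 0.5, 0.62, 0.7, 0.6, 0.45, 0.3, 0.2, 0.1])

popt, _ = curve_fit(gaussian, freqs, rs, p0=[0.6, 6.0, 2.0, 0.1])
amp, mu, sigma, baseline = popt   # mu is the estimated peak frequency
```

With data of this shape, the fitted peak (mu) lands near 6 Hz, mirroring the reported optimum for isolating individual-face discrimination.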


Investigating holistic face processing within and outside of face-responsive brain regions.

  • Celia Foster et al.
  • NeuroImage
  • 2021

It has been shown that human faces are processed holistically (i.e. as indecomposable wholes, rather than by their component parts) and this holistic face processing is linked to brain activity in face-responsive brain regions. Although several brain regions outside of the face-responsive network are also sensitive to relational processing and perceptual grouping, whether these non-face-responsive regions contribute to holistic processing remains unclear. Here, we investigated holistic face processing in the composite face paradigm both within and outside of face-responsive brain regions. We recorded participants' brain activity using fMRI while they performed a composite face task. Behavioural results indicate that participants tend to judge the same top face halves as different when they are aligned with different bottom face halves but not when they are misaligned, demonstrating a composite face effect. Neuroimaging results revealed significant differences in responses to aligned and misaligned faces in the lateral occipital complex (LOC), and trends in the anterior part of the fusiform face area (FFA2) and transverse occipital sulcus (TOS), suggesting that these regions are sensitive to holistic versus part-based face processing. Furthermore, the retrosplenial cortex (RSC) and the parahippocampal place area (PPA) showed a pattern of neural activity consistent with a holistic representation of face identity, which also correlated with the strength of the behavioural composite face effect. These results suggest that neural activity in brain regions both within and outside of the face-responsive network contributes to the composite-face effect.


Crowdsourcing neuroscience: Inter-brain coupling during face-to-face interactions outside the laboratory.

  • Suzanne Dikker et al.
  • NeuroImage
  • 2021

When we feel connected or engaged during social behavior, are our brains in fact "in sync" in a formal, quantifiable sense? Most studies addressing this question use highly controlled tasks with homogenous subject pools. In an effort to take a more naturalistic approach, we collaborated with art institutions to crowdsource neuroscience data: Over the course of 5 years, we collected electroencephalogram (EEG) data from thousands of museum and festival visitors who volunteered to engage in a 10-min face-to-face interaction. Pairs of participants with various levels of familiarity sat inside the Mutual Wave Machine-an artistic neurofeedback installation that translates real-time correlations of each pair's EEG activity into light patterns. Because such inter-participant EEG correlations are prone to noise contamination, in subsequent offline analyses we computed inter-brain coupling using Imaginary Coherence and Projected Power Correlations, two synchrony metrics that are largely immune to instantaneous, noise-driven correlations. When applying these methods to two subsets of recorded data with the most consistent protocols, we found that pairs' trait empathy, social closeness, engagement, and social behavior (joint action and eye contact) consistently predicted the extent to which their brain activity became synchronized, most prominently in low alpha (~7-10 Hz) and beta (~20-22 Hz) oscillations. These findings support an account where shared engagement and joint action drive coupled neural activity and behavior during dynamic, naturalistic social interactions. To our knowledge, this work constitutes a first demonstration that an interdisciplinary, real-world, crowdsourcing neuroscience approach may provide a promising method to collect large, rich datasets pertaining to real-life face-to-face interactions. Additionally, it is a demonstration of how the general public can participate and engage in the scientific process outside of the laboratory. 
Institutions such as museums and galleries, or any other organization where the public actively engages out of self-motivation, can help facilitate this type of citizen science research and support the collection of large datasets under scientifically controlled experimental conditions. To further enhance public interest in the out-of-the-lab experimental approach, the data and results of this study are disseminated through a website tailored to the general public (wp.nyu.edu/mutualwavemachine).
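Imaginary Coherence, one of the two synchrony metrics named above, keeps only the imaginary part of the normalized cross-spectrum; instantaneous (zero-lag) noise-driven correlations contribute only to the real part and so are suppressed. A minimal sketch using Welch-style spectral estimates; the signal parameters are illustrative, not the study's recordings:

```python
import numpy as np
from scipy.signal import csd, welch

def imaginary_coherence(x, y, fs, nperseg=256):
    """Magnitude of the imaginary part of the coherency between x and y."""
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg)    # cross-spectrum
    _, sxx = welch(x, fs=fs, nperseg=nperseg)     # auto-spectra
    _, syy = welch(y, fs=fs, nperseg=nperseg)
    coherency = sxy / np.sqrt(sxx * syy)          # normalized cross-spectrum
    return f, np.abs(coherency.imag)

# Two noisy signals coupled at 10 Hz with a quarter-cycle phase lag:
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) + rng.normal(0, 1, t.size)

f, icoh = imaginary_coherence(x, y, fs)
```

Because the coupling here is lagged, the imaginary coherence is large near 10 Hz and near zero elsewhere; a purely zero-lag artifact (e.g. shared line noise) would instead vanish from this metric.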


Distinct neural-behavioral correspondence within face processing and attention networks for the composite face effect.

  • Changming Chen et al.
  • NeuroImage
  • 2022

The composite face effect (CFE) is recognized as a hallmark of holistic face processing, but our knowledge remains sparse about its cognitive and neural loci. Using functional magnetic resonance imaging with an independent localizer and a complete composite face task, we investigated its neural-behavioral correspondence within face processing and attention networks. Complementing classical comparisons, we adopted a dimensional reduction approach to explore the core cognitive constructs of the behavioral CFE measurement. Our univariate analyses found an alignment effect in regions associated with both the extended face processing network and attention networks. Further representational similarity analyses based on Euclidean distances among all experimental conditions were used to identify cortical regions with reliable neural-behavioral correspondences. Multidimensional scaling and hierarchical clustering analyses of the neural-behavioral correspondence data revealed two principal components underlying the behavioral CFE, which fit best to the neural responses in the bilateral insula and medial frontal gyrus. These findings highlight the distinct neurocognitive contributions of both face processing and attentional networks to the behavioral CFE outcome, bridging the gap between face recognition and attentional control models.
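The distance-then-cluster step described above — computing condition-by-condition dissimilarities and hierarchically clustering them — can be sketched as follows. The data are synthetic, and the two-group structure is an illustrative stand-in for the kind of latent components such an analysis can recover:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical measurements for 8 experimental conditions (rows), built
# so that the conditions fall into two underlying groups of four:
rng = np.random.default_rng(4)
centers = rng.normal(size=(2, 20)) * 3
data = np.vstack([centers[i // 4] + rng.normal(size=20) for i in range(8)])

d = pdist(data, metric="euclidean")   # condition-by-condition distances
z = linkage(d, method="average")      # hierarchical clustering of conditions
labels = fcluster(z, t=2, criterion="maxclust")  # cut the tree at 2 clusters
```

The recovered labels separate the two built-in groups; on real data, inspecting which conditions cluster together is what licenses interpreting the underlying components.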


Face space representations of movement.

  • Nicholas Furl et al.
  • NeuroImage
  • 2020

The challenging computational problem of perceiving dynamic faces "in the wild" goes unresolved because most research focuses on easier questions about static photograph perception. This literature conceptualizes face representation as a dissimilarity-based "face space", with axes that describe the dimensions of static images. Some versions express positions in face space relative to a central tendency (norm). Are facial movements represented like this? We tested for representations that accord with an a priori hypothesized motion-based face space by experimentally manipulating faces' motion-based dissimilarity. Because we caricatured movements, we could test for representations of dissimilarity from a motion-based norm. Behaviorally, participants perceived these caricatured expressions as convincing and recognizable. Moreover, as expected, caricature enhanced perceived dissimilarity between facial expressions. Functional magnetic resonance imaging showed that occipitotemporal brain responses, including face-selective and motion-sensitive areas, reflect this face space. This evidence converged across methods including analysis of univariate mean responses (which additionally exhibited norm-based responses), repetition suppression and representational similarity analysis. This accumulated evidence for "representational geometry" shows how perception and visual brain responses to facial dynamics reflect representations of movement-based dissimilarity spaces, including explicit computation of distance from a norm movement.
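Movement caricature as used above can be expressed as scaling a trajectory's deviation from a norm (average) trajectory. A toy sketch; the trajectories are random stand-ins for real facial-motion data:

```python
import numpy as np

def caricature(traj, norm_traj, k):
    """Exaggerate (k > 1) or attenuate (k < 1) a movement trajectory's
    deviation from a norm (average) trajectory."""
    return norm_traj + k * (traj - norm_traj)

# Toy "motion trajectories": frames x landmark coordinates.
rng = np.random.default_rng(2)
norm_traj = rng.normal(size=(30, 10))              # hypothetical norm movement
expr = norm_traj + rng.normal(0, 0.5, (30, 10))    # one expressive movement

car = caricature(expr, norm_traj, 1.5)             # 150% caricature
d_orig = np.linalg.norm(expr - norm_traj)          # distance from the norm
d_car = np.linalg.norm(car - norm_traj)            # scaled by exactly k
```

Caricaturing multiplies the distance from the norm by exactly k, which is what makes it a clean experimental manipulation of motion-based dissimilarity.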


Interaction between the electrical stimulation of a face-selective area and the perception of face stimuli.

  • Sang Chul Chong et al.
  • NeuroImage
  • 2013

We electrically stimulated the face-selective area in epileptic patients while they were performing a face-categorization task. Face categorization was disrupted by electrical stimulation but was restored by increasing the visual signal. More importantly, the face-categorization interference caused by electrical stimulation was confined to face-selective electrodes, and the amount of interference was positively correlated with the sensitivity of the face-selective electrodes. These results strongly support the hypothesis that the face-selective area has a direct causal link to face perception.


Electrophysiological correlates of masked face priming.

  • R N Henson et al.
  • NeuroImage
  • 2008

Using a sandwich-masked priming paradigm with faces, we report two ERP effects that appear to reflect different levels of subliminal face processing. These two ERP repetition effects dissociate in their onset, scalp topography, and sensitivity to face familiarity. The "early" effect occurred between 100 and 150 ms, was maximally negative-going over lateral temporoparietal channels, and was found for both familiar and unfamiliar faces. The "late" effect occurred between 300 and 500 ms, was maximally positive-going over centroparietal channels, and was found only for familiar faces. The early effect resembled our previous fMRI data from the same paradigm; the late effect resembled the behavioural priming found, in the form of faster reaction times to make fame judgments about primed relative to unprimed familiar faces. None of the ERP or behavioural effects appeared explicable by a measure of participants' ability to see the primes. The ERP and behavioural effects showed some sensitivity to whether the same or a different photograph of a face was repeated, but could remain reliable across different photographs, and did not appear attributable to a low-level measure of pixelwise overlap between prime and probe photograph. The functional significance of these ERP effects is discussed in relation to unconscious perception and face processing.


The 'Narcissus Effect': Top-down alpha-beta band modulation of face-related brain areas during self-face processing.

  • Elisabet Alzueta et al.
  • NeuroImage
  • 2020

Self-related information, such as one's own face, is prioritized by our cognitive system. Whilst recent theoretical developments suggest that this is achieved by an interplay between bottom-up and top-down attentional mechanisms, their underlying neural dynamics are still poorly understood. Furthermore, it is still a matter of discussion whether these attentional mechanisms are truly self-specific or instead driven by face familiarity. To address these questions, we used EEG to record the brain activity of twenty-five healthy participants whilst identifying their own face, a friend's face and a stranger's face. Time-frequency analysis revealed a greater sustained power decrease in the alpha and beta frequency bands for the self-face, which emerged at late latencies and was maintained even when the face was no longer present. Critically, source analysis showed that this activity was generated in key brain regions for self-face recognition, such as the fusiform gyrus. As in the Myth of Narcissus, our results indicate that one's own face might have the potential to hijack attention. We suggest that this effect is specific to the self and driven by a top-down attentional control mechanism, which might facilitate further processing of personally relevant events.


Spatio-temporal dynamics of face perception.

  • I Muukkonen et al.
  • NeuroImage
  • 2020

The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by the models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating representational dissimilarity matrices (RDMs) derived from multiple pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from primary visual cortex (V1), and later time windows (starting around 190 ms) to match data from lateral occipital, fusiform face complex, and temporal-parietal-occipital junction (TPOJ). According to model comparisons, the EEG classification results were based more on low-level visual features than expression intensities or categories. In fMRI, the model comparisons revealed change along the processing hierarchy, from low-level visual feature coding in V1 to coding of intensity of expressions in the right TPOJ. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
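The RSA step described above reduces to correlating two RDMs, one per modality. A toy sketch using correlation-distance RDMs (the study built its RDMs from pairwise classification of neural responses, so the distance measure here is a simplifying assumption); the response patterns are synthetic:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix as a condensed vector:
    one correlation distance per pair of conditions."""
    return pdist(patterns, metric="correlation")

# Hypothetical response patterns for 4 expression conditions
# (neutral, happy, fearful, angry) in two noisy "measurements"
# sharing the same underlying representational structure:
rng = np.random.default_rng(3)
base = rng.normal(size=(4, 50))                   # shared structure
eeg_like = base + 0.2 * rng.normal(size=(4, 50))  # e.g. one EEG time window
fmri_like = base + 0.2 * rng.normal(size=(4, 50)) # e.g. one fMRI region

# Rank-correlate the two RDMs (6 condition pairs):
rho, _ = spearmanr(rdm(eeg_like), rdm(fmri_like))
```

A positive RDM correlation indicates that the two measurements order the condition pairs similarly, which is the criterion used to match EEG time windows to fMRI regions.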


Early visual exposure primes future cross-modal specialization of the fusiform face area in tactile face processing in the blind.

  • Rui Dai et al.
  • NeuroImage
  • 2022

The fusiform face area (FFA) is a core cortical region for face information processing. Evidence suggests that its sensitivity to faces is largely innate and tuned by visual experience. However, how experience in different time windows shapes the plasticity of the FFA remains unclear. In this study, we investigated the role of visual experience at different time points of an individual's early development in the cross-modal face specialization of the FFA. Participants (n = 74) were classified into five groups: congenitally blind, early blind, late blind, low vision, and sighted control. Functional magnetic resonance imaging data were acquired while the participants haptically processed carved faces and other objects. Our results showed a robust and highly consistent face-selective activation in the FFA region in the early blind participants, invariant to size and level of abstraction of the face stimuli. The cross-modal face activation in the FFA was much less consistent in the other groups. These results suggest that early visual experience primes cross-modal specialization of the FFA: even in early blind participants who had lacked visual experience for more than 14 years, the FFA can engage in cross-modal processing of face information.


A face is more than just the eyes, nose, and mouth: fMRI evidence that face-selective cortex represents external features.

  • Frederik S Kamps et al.
  • NeuroImage
  • 2019

What is a face? Intuition, along with abundant behavioral and neural evidence, indicates that internal features (e.g., eyes, nose, mouth) are critical for face recognition, yet some behavioral and neural findings suggest that external features (e.g., hair, head outline, neck and shoulders) may likewise be processed as a face. Here we directly test this hypothesis by investigating how external (and internal) features are represented in the brain. Using fMRI, we found highly selective responses to external features (relative to objects and scenes) within the face processing system in particular, rivaling that observed for internal features. We then further asked how external and internal features are represented in regions of the cortical face processing system, and found a similar division of labor for both kinds of features, with the occipital face area and posterior superior temporal sulcus representing the parts of both internal and external features, and the fusiform face area representing the coherent arrangement of both internal and external features. Taken together, these results provide strong neural evidence that a "face" is composed of both internal and external features.


Modulation of the face- and body-selective visual regions by the motion and emotion of point-light face and body stimuli.

  • Anthony P Atkinson et al.
  • NeuroImage
  • 2012

Neural regions selective for facial or bodily form also respond to facial or bodily motion in highly form-degraded point-light displays. Yet it is unknown whether these face-selective and body-selective regions are sensitive to human motion regardless of stimulus type (faces and bodies) or to the specific motion-related cues characteristic of their proprietary stimulus categories. Using fMRI, we show that facial and bodily motions activate selectively those populations of neurons that code for the static structure of faces and bodies. Bodily (vs. facial) motion activated body-selective EBA bilaterally and right but not left FBA, irrespective of whether observers judged the emotion or color-change in point-light angry, happy and neutral stimuli. Facial (vs. bodily) motion activated face-selective right and left FFA, but only during emotion judgments for right FFA. Moreover, the strength of responses to point-light bodies vs. faces positively correlated with voxelwise selectivity for static bodies but not faces, whereas the strength of responses to point-light faces positively correlated with voxelwise selectivity for static faces but not bodies. Emotional content carried by point-light form-from-motion cues was sufficient to enhance the activity of several regions, including bilateral EBA and right FFA and FBA. However, although the strength of emotional modulation in right and left EBA by point-light body movements was related to the degree of voxelwise selectivity to static bodies but not static faces, there was no evidence that emotional modulation in fusiform cortex occurred in a similarly stimulus category-selective manner. This latter finding strongly constrains the claim that emotionally expressive movements modulate precisely those neuronal populations that code for the viewed stimulus category.


Covert face recognition without the fusiform-temporal pathways.

  • Mitchell Valdés-Sosa et al.
  • NeuroImage
  • 2011

Patients with prosopagnosia are unable to recognize faces consciously, but when tested indirectly they can reveal residual identification abilities. The neural circuitry underlying this covert recognition is still unknown. One candidate for this function is the partial survival of a pathway linking the fusiform face area (FFA) and anterior-inferior temporal (AIT) cortex, which has been shown to be essential for conscious face identification. Here we performed functional magnetic resonance imaging and diffusion tensor imaging in FE, a patient with severe prosopagnosia, with the goal of identifying the neural substrates of his robust covert face recognition. FE presented massive bilateral lesions in the fusiform gyri that eliminated both FFAs and also disrupted the fibers within the inferior longitudinal fasciculi that link the visual areas with the AITs and medial temporal lobes. Therefore, participation of the fusiform-temporal pathway in his covert recognition was precluded. However, face-selective activations were found bilaterally in his occipital gyri and in his extended face system (posterior cingulate and orbitofrontal areas), the latter with larger responses for previously known faces than for faces of strangers. In the right hemisphere, these surviving face-selective areas were connected via a partially preserved inferior fronto-occipital fasciculus. This suggests an alternative occipito-frontal pathway, absent from current models of face processing, that could explain the patient's covert recognition while also playing a role in unconscious processing during normal cognition.


Neural architecture underlying classification of face perception paradigms.

  • Angela R Laird et al.
  • NeuroImage
  • 2015

We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks.


Human face preference in gamma-frequency EEG activity.

  • Elana Zion-Golumbic et al.
  • NeuroImage
  • 2008

Previous studies demonstrated that induced EEG activity in the gamma band (iGBA) plays an important role in object recognition and is modulated by stimulus familiarity and its compatibility with pre-existent representations. In the present study we investigated the modulation of iGBA by the degree of familiarity and perceptual expertise that observers have with stimuli from different categories. Specifically, we compared iGBA in response to human faces versus stimuli which subjects are not expert with (ape faces, human hands, buildings and watches). iGBA elicited by human faces was higher and peaked earlier than that elicited by all other categories, which did not differ significantly from each other. These findings can be accounted for by two characteristics of perceptual expertise. One is the activation of a richer, stronger and, therefore, more easily accessible mental representation of human faces. The second is the more detailed perceptual processing necessary for within-category distinctions, which is the hallmark of perceptual expertise. In addition, the sensitivity of iGBA to human but not ape faces was contrasted with the face-sensitive N170-effect, which was similar for human and ape faces. In concert with previous studies, this dissociation suggests a multi-level neuronal model of face recognition, manifested by these two electrophysiological measures, discussed in this paper.


Intracranial markers of conscious face perception in humans.

  • Fabiano Baroni et al.
  • NeuroImage
  • 2017

Investigations of the neural basis of consciousness have greatly benefited from protocols that involve the presentation of stimuli at perceptual threshold, enabling the assessment of the patterns of brain activity that correlate with conscious perception, independently of any changes in sensory input. However, the comparison between perceived and unperceived trials would be expected to reveal not only the core neural substrate of a particular conscious perception, but also aspects of brain activity that facilitate, hinder or tend to follow conscious perception. We take a step towards the resolution of these confounds by combining an analysis of neural responses observed during the presentation of faces partially masked by Continuous Flash Suppression, and those responses observed during the unmasked presentation of faces and other images in the same subjects. We employed multidimensional classifiers to decode physical properties of stimuli or perceptual states from spectrotemporal representations of electrocorticographic signals (1071 channels in 5 subjects). Neural activity in certain face responsive areas located in both the fusiform gyrus and in the lateral-temporal/inferior-parietal cortex discriminated seen vs. unseen faces in the masked paradigm and upright faces vs. other categories in the unmasked paradigm. However, only the former discriminated upright vs. inverted faces in the unmasked paradigm. Our results suggest a prominent role for the fusiform gyrus in the configural perception of faces, and possibly other objects that are holistically processed. More generally, we advocate comparative analysis of neural recordings obtained during different, but related, experimental protocols as a promising direction towards elucidating the functional specificities of the patterns of neural activation that accompany our conscious experiences.


Species sensitivity of early face and eye processing.

  • Roxane J Itier et al.
  • NeuroImage
  • 2011

Humans are better at recognizing human faces than faces of other species. However, it is unclear whether this species sensitivity can be seen at early perceptual stages of face processing and whether it involves species sensitivity for important facial features like the eyes. These questions were addressed by comparing the modulations of the N170 ERP component to faces, eyes and eyeless faces of humans, apes, cats and dogs, presented upright and inverted. Although all faces and isolated eyes yielded larger responses than the control object category (houses), the N170 was shorter and smaller to human than animal faces and larger to human than animal eyes. Most importantly, while the classic inversion effect was found for human faces, animal faces yielded no inversion effect or an opposite inversion effect, as seen for objects, suggesting that a different neural process is involved for human faces compared to faces of other species. Thus, in addition to its general face and eye categorical sensitivity, the N170 appears particularly sensitive to the human species for both faces and eyes. The results are discussed in the context of a recent model of the N170 response involving face and eye sensitive neurons (Itier et al., 2007) in which the eyes play a central role in face perception. The data support the intuitive idea that eyes are what make animal head fronts look face-like and that proficiency for the human species involves visual expertise for the human eyes.


EEG evidence of face-specific visual self-representation.

  • Makoto Miyakoshi et al.
  • NeuroImage
  • 2010

Cognitive science has regarded an individual's face as a form of representative stimuli to engage self-representation. The domain-generality of self-representation has been assumed in several reports, but was recently refuted in a functional magnetic resonance imaging study (Sugiura et al., 2008). The general validity of this study's criticism should be tested by other measures to compensate for the limitation of the time resolution of the blood-oxygen-level-dependent (BOLD) signal. In this article, we report an EEG study on the domain-generality of visual self-representation. Domain-general self-representation was operationally defined as the self-relevance common to one's own Face and Cup; three levels of familiarity, Self, Familiar, and Unfamiliar, were prepared for each. There was another condition, Visual Field, that manipulated visual hemifield during stimulus presentation, but it was collapsed because it produced no interaction with stimulus familiarity. Our results confirmed comparable phase resetting in both domains in response to familiarity manipulation, which occurred within the medial frontal area during 270-390 ms poststimulus and in the theta band. However, self-specific dissociation was observed only for Face. The results here support the conclusion that visual self-representation is domain-specific and that the oscillatory responses observed suggest evidence of face-specific visual self-representation. Results also revealed an inter-trial phase coherency decrease specifically for Self-Face within the right fusiform area during 170-290 ms poststimulus and in the alpha and theta band, suggesting reduced functional demand for Self-Face represented by sharpened networks.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against a different view of the data. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to get additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually combine terms with AND and OR to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require that they appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any search you perform here for quick access later.

  6. Query Expansion

    We recognize your search term and include synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
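The search tips in item 4 above can be combined in a single query. A few hypothetical examples using only the operators described there:

```
"fusiform face area" fMRI      (match the phrase exactly, plus another term)
face AND EEG                   (require both terms)
face OR expression             (match either term)
Cerebellum -CA1                (exclude results containing CA1)
+RRID face perception          (require RRID to appear in the data)
```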

Publications Per Year (chart of publication counts by year)