
This service exclusively searches literature that cites resources. Please be aware that the set of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


On page 1, showing papers 1-12 of 12.

Multifractal Functional Connectivity Analysis of Electroencephalogram Reveals Reorganization of Brain Networks in a Visual Pattern Recognition Paradigm.

  • Orestis Stylianou‎ et al.
  • Frontiers in human neuroscience‎
  • 2021‎

The human brain consists of anatomically distant neuronal assemblies that are interconnected via a myriad of synapses. This anatomical network provides the neurophysiological wiring framework for functional connectivity (FC), which is essential for higher-order brain functions. While several studies have explored the scale-specific FC, the scale-free (i.e., multifractal) aspect of brain connectivity remains largely neglected. Here we examined brain reorganization during a visual pattern recognition paradigm, using bivariate focus-based multifractal (BFMF) analysis. For this study, 58 young, healthy volunteers were recruited. Before the task, 3 min of resting EEG was recorded in each of the eyes-closed (EC) and eyes-open (EO) states. The subsequent part of the measurement protocol consisted of 30 visual pattern recognition trials of 3 difficulty levels graded as Easy, Medium, and Hard. Multifractal FC was estimated with BFMF analysis of preprocessed EEG signals, yielding two generalized Hurst exponent-based multifractal connectivity endpoint parameters, H(2) and ΔH15: the former indicates the long-term cross-correlation between two brain regions, while the latter captures the degree of multifractality of their functional coupling. Accordingly, H(2) and ΔH15 networks were constructed for every participant and state, and they were characterized by their weighted local and global node degrees. Then, we investigated the between- and within-state variability of multifractal FC, as well as the relationship between global node degree and task performance captured in average success rate and reaction time. Multifractal FC increased when visual pattern recognition was administered, with no differences regarding difficulty level. The observed regional heterogeneity was greater for ΔH15 networks than for H(2) networks. These results show that reorganization of scale-free coupled dynamics takes place during visual pattern recognition independent of difficulty level. Additionally, the observed regional variability illustrates that multifractal FC is region-specific during both rest and task. Our findings indicate that investigating multifractal FC under various conditions - such as mental workload in healthy and potentially in diseased populations - is a promising direction for future research.
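
The abstract describes constructing H(2) and ΔH15 connectivity networks and characterizing them by weighted local and global node degrees. Below is a minimal sketch of that characterization step only, assuming a symmetric pairwise connectivity matrix has already been estimated (e.g., from BFMF analysis); the variable names are illustrative, and "global degree" is taken here as the mean of the local degrees, which may differ from the paper's exact definition.

```python
import numpy as np

def weighted_node_degrees(conn):
    """Weighted local node degrees and a global (mean) degree for one network.

    conn: (n_regions, n_regions) symmetric connectivity matrix, e.g. pairwise
    H(2) or ΔH15 estimates; the diagonal (self-coupling) is ignored.
    """
    conn = np.asarray(conn, dtype=float)
    off_diag = conn - np.diag(np.diag(conn))   # zero out the diagonal
    local_degree = off_diag.sum(axis=1)        # weighted degree of each node
    global_degree = local_degree.mean()        # single network-level summary
    return local_degree, global_degree

# Illustrative use with a random symmetric matrix standing in for one participant/state
rng = np.random.default_rng(0)
a = rng.random((19, 19))
local, global_deg = weighted_node_degrees((a + a.T) / 2)
```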


Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition.

  • Margot Popp‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while studies of feature-specific conceptual category differences in verbs have mainly focused on body-part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.


Progressive Thinning of Visual Motion Area in Lower Limb Amputees.

  • Guangyao Jiang‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

Accumulating evidence has indicated that amputation or deafferentation of a limb induces functional or structural reorganization in the visual areas. However, the extent of the visual areas involved after lower limb amputation remains uncertain. In this investigation, we studied 48 adult patients with unilateral lower limb amputation and 48 matched healthy controls using T1-weighted magnetic resonance imaging. Template-based regions of interest analysis was implemented to detect the changes of cortical thickness in the specific visual areas. Compared with normal controls, amputees exhibited significantly lower thickness in the V5/middle temporal (V5/MT+) visual area, as well as a trend of cortical thinning in the V3d. There was no significant difference in the other visual areas between the two groups. In addition, no significant difference of cortical thickness was found between patients with amputation at different levels. Across all amputees, correlation analyses revealed that the cortical thickness of the V5/MT+ was negatively correlated with the time since amputation. In conclusion, our findings indicate that the amputation of a unilateral lower limb could induce changes in the motion-related visual cortex and provide an update on the plasticity of the human brain after limb injury.


Multi-voxel pattern analysis (MVPA) reveals abnormal fMRI activity in both the "core" and "extended" face network in congenital prosopagnosia.

  • Davide Rivolta‎ et al.
  • Frontiers in human neuroscience‎
  • 2014‎

The ability to identify faces is mediated by a network of cortical and subcortical brain regions in humans. It is still a matter of debate which regions represent the functional substrate of congenital prosopagnosia (CP), a condition characterized by a lifelong impairment in face recognition and affecting around 2.5% of the general population. Here, we used functional Magnetic Resonance Imaging (fMRI) to measure neural responses to faces, objects, bodies, and body-parts in a group of seven CPs and ten healthy control participants. Using multi-voxel pattern analysis (MVPA) of the fMRI data, we demonstrate that neural activity within the "core" (i.e., occipital face area and fusiform face area) and "extended" (i.e., anterior temporal cortex) face regions in CPs showed reduced discriminability between faces and objects. Reduced differentiation between faces and objects in CP was also seen in the right parahippocampal cortex. In contrast, discriminability between faces and bodies/body-parts and between objects and bodies/body-parts across the ventral visual system was typical in CPs. In addition to the MVPA analysis, we also ran a traditional mass-univariate analysis, which failed to show any group differences in face and object discriminability. In sum, these findings demonstrate (i) face-object representation impairments in CP that encompass both the "core" and "extended" face regions, and (ii) the superior power of MVPA in detecting group differences.
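
The MVPA result above rests on asking how well condition labels (face vs. object) can be decoded from the multi-voxel response pattern of a region. A minimal sketch of that general logic is shown below, using a cross-validated linear classifier from scikit-learn on placeholder data; this is a generic illustration, not the authors' exact pipeline, which may use a different classifier or a correlation-based discriminability measure.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_voxels) response patterns from one ROI (e.g., FFA);
# y: condition labels, 0 = face, 1 = object. Placeholder data below.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))
y = np.repeat([0, 1], 40)

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = cross_val_score(clf, X, y, cv=5)   # cross-validated decoding accuracy
print(accuracy.mean())                        # chance = 0.5; values near chance imply low discriminability
```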


Neural Processing of Repeated Search Targets Depends Upon the Stimuli: Real World Stimuli Engage Semantic Processing and Recognition Memory.

  • Trafton Drew‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Recent evidence has suggested that visual working memory (VWM) plays an important role in representing the target prior to initiating a visual search. The more familiar we are with the search target, the more refined the representation of the target (or "target template") becomes. This sharpening of the target template is thought to underlie the reduced response time (RT) and increased accuracy associated with repeatedly searching for the same target. Perhaps target representations transition from limited-capacity VWM to Long-Term Memory (LTM) as targets repeat. In prior work, the amplitude of an event-related potential (ERP) component associated with VWM representation decreased with target repetition, broadly supporting this notion. However, previous research has focused on artificial stimuli (Landolt Cs) that are far removed from search targets in the real world. The current study extends this work by directly comparing target representations for artificial stimuli and common object images. We found that VWM representation follows the same pattern for real and artificial stimuli. However, the initial selection of real-world objects follows a very different pattern from that of more typical artificial stimuli. Further, the morphology of nonlateralized waveforms was substantially different for the two stimulus categories. This suggests that the two types of stimuli were processed in fundamentally different ways. We conclude that object type strongly influences how we deploy attentional and mnemonic resources prior to search. Early attentional selection of familiar objects may facilitate additional LTM processes that lead to behavioral benefits not seen with more simplistic stimuli.


Mapping the "What" and "Where" Visual Cortices and Their Atrophy in Alzheimer's Disease: Combined Activation Likelihood Estimation with Voxel-Based Morphometry.

  • Yanjia Deng‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing "what" and "where" visual perceptions, and it investigates their atrophy in AD and mild cognitive impairment (MCI) patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in "what" and "where" visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three "where" visual cortices in the late MCI group and extensive atrophy of HLV cortices (25 regions in both "what" and "where" visual cortices) in the AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated with the deterioration of overall cognitive status and with the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses.


Independent Low-Rank Matrix Analysis-Based Automatic Artifact Reduction Technique Applied to Three BCI Paradigms.

  • Suguru Kanoga‎ et al.
  • Frontiers in human neuroscience‎
  • 2020‎

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) can potentially enable people to non-invasively and directly communicate with others using brain activities. Artifacts generated from body activities (e.g., eyeblinks and teeth clenches) often contaminate EEGs and make EEG-based classification/identification hard. Although independent component analysis (ICA) is the gold-standard technique for attenuating the effects of such contamination, the estimated independent components are still mixed with artifactual and neuronal information because ICA relies only on the independence assumption. The same problem occurs when using independent vector analysis (IVA), an extended ICA method. To solve this problem, we designed an independent low-rank matrix analysis (ILRMA)-based automatic artifact reduction technique that clearly models sources from observations under the independence assumption and a low-rank nature in the frequency domain. For automatic artifact reduction, we combined the signal separation technique with an independent component classifier for EEGs named ICLabel. To assess the comparative efficiency of the proposed method, the discriminabilities of artifact-reduced EEGs using ICA, IVA, and ILRMA were determined using an open-access EEG dataset named OpenBMI, which contains EEG data obtained through three BCI paradigms [motor-imagery (MI), event-related potential (ERP), and steady-state visual evoked potential (SSVEP)]. BCI performances were obtained using these three paradigms after applying artifact reduction techniques, and the results suggested that our proposed method has the potential to achieve higher discriminability than ICA and IVA for BCIs. In addition, artifact reduction using the ILRMA approach clearly improved (by over 70%) the averaged BCI performances using artifact-reduced data sufficiently for most needs of the BCI community. The extension of ICA families to supervised separation that preserves discriminative ability would further improve the usability of BCIs for real-life environments in which artifacts frequently contaminate EEGs.
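
The ILRMA- and ICLabel-based pipeline described above is not reproduced here, but the artifact-reduction pattern shared by ICA-family methods is simple: decompose the multichannel EEG into components, suppress the components flagged as artifactual, and reconstruct the signal. Below is a minimal sketch of that generic pattern using scikit-learn's FastICA, with placeholder data and hand-picked (purely illustrative) artifact indices standing in for an automatic component classifier such as ICLabel.

```python
import numpy as np
from sklearn.decomposition import FastICA

# eeg: (n_samples, n_channels) multichannel recording; placeholder noise here.
rng = np.random.default_rng(2)
eeg = rng.normal(size=(5000, 16))

ica = FastICA(n_components=16, random_state=0)
sources = ica.fit_transform(eeg)          # (n_samples, n_components) estimated sources

artifact_components = [0, 3]              # illustrative indices; in practice chosen by a
sources[:, artifact_components] = 0.0     # component classifier (e.g., ICLabel), not by hand

eeg_clean = ica.inverse_transform(sources)   # artifact-attenuated EEG, back in channel space
```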


Building an EEG-fMRI Multi-Modal Brain Graph: A Concurrent EEG-fMRI Study.

  • Qingbao Yu‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

The topological architecture of brain connectivity has been well-characterized by graph theory based analysis. However, previous studies have primarily built brain graphs based on a single modality of brain imaging data. Here we develop a framework to construct multi-modal brain graphs using concurrent EEG-fMRI data which are simultaneously collected during eyes open (EO) and eyes closed (EC) resting states. FMRI data are decomposed into independent components with associated time courses by group independent component analysis (ICA). EEG time series are segmented, and then spectral power time courses are computed and averaged within 5 frequency bands (delta; theta; alpha; beta; low gamma). EEG-fMRI brain graphs, with EEG electrodes and fMRI brain components serving as nodes, are built by computing correlations within and between fMRI ICA time courses and EEG spectral power time courses. Dynamic EEG-fMRI graphs are built using a sliding window method, versus static ones treating the entire time course as stationary. At the global level, static graph measures and properties of dynamic graph measures differ across frequency bands and mainly show higher values in eyes closed than in eyes open. Nodal-level graph measures of a few brain components also show higher values during eyes closed in specific frequency bands. Overall, these findings incorporate fMRI spatial localization and EEG frequency information which could not be obtained by examining only one modality. This work provides a new approach to examine EEG-fMRI associations within a graph theoretic framework with potential application to many topics.
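
The graph construction described above boils down to correlating node time courses, either over the full recording (static graph) or within sliding windows (dynamic graphs). A minimal NumPy sketch of that step is shown below; the window length, step size, and node counts are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def correlation_graphs(ts, win=60, step=10):
    """Static and sliding-window correlation graphs from node time courses.

    ts: (n_timepoints, n_nodes) matrix whose columns are the node signals,
        e.g. EEG band-power time courses plus fMRI ICA component time courses.
    """
    static_graph = np.corrcoef(ts.T)                 # full time course treated as stationary
    dynamic_graphs = [
        np.corrcoef(ts[start:start + win].T)         # one graph per window
        for start in range(0, ts.shape[0] - win + 1, step)
    ]
    return static_graph, dynamic_graphs

rng = np.random.default_rng(3)
ts = rng.normal(size=(300, 70))          # e.g. 64 electrodes + 6 fMRI components (illustrative)
static_graph, dynamic_graphs = correlation_graphs(ts)
```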


A cross-linguistic evaluation of script-specific effects on fMRI lateralization in late second language readers.

  • Maki S Koyama‎ et al.
  • Frontiers in human neuroscience‎
  • 2014‎

Behavioral and neuroimaging studies have provided evidence that reading is strongly left lateralized, and the degree of this pattern of functional lateralization can be indicative of reading competence. However, it remains unclear whether functional lateralization differs between the first (L1) and second (L2) languages in bilingual L2 readers. This question is particularly important when the particular script, or orthography, learned by the L2 readers is markedly different from their L1 script. In this study, we quantified functional lateralization in brain regions involved in visual word recognition for participants' L1 and L2 scripts, with a particular focus on the effects of L1-L2 script differences in the visual complexity and orthographic depth of the script. Two different groups of late L2 learners participated in an fMRI experiment using a visual one-back matching task: L1 readers of Japanese who learnt to read alphabetic English and L1 readers of English who learnt to read both Japanese syllabic Kana and logographic Kanji. The results showed weaker leftward lateralization in the posterior lateral occipital complex (pLOC) for logographic Kanji compared with syllabic and alphabetic scripts in both L1 and L2 readers of Kanji. When both L1 and L2 scripts were non-logographic, where symbols are mapped onto sounds, functional lateralization did not significantly differ between L1 and L2 scripts in any region, in any group. Our findings indicate that weaker leftward lateralization for logographic reading reflects greater requirement of the right hemisphere for processing visually complex logographic Kanji symbols, irrespective of whether Kanji is the readers' L1 or L2, rather than characterizing additional cognitive efforts of L2 readers. Finally, brain-behavior analysis revealed that functional lateralization for L2 visual word processing predicted L2 reading competency.
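
Functional lateralization of the kind quantified above is often summarized with a laterality index computed from homologous left- and right-hemisphere activation. The simple formula below is a common convention, not necessarily the exact measure used in this study, and the inputs are illustrative.

```python
def laterality_index(left, right):
    """Laterality index in [-1, 1]: positive = leftward, negative = rightward.

    left, right: activation summaries for homologous ROIs, e.g. active voxel
    counts or mean beta values in left vs. right pLOC (illustrative inputs).
    """
    return (left - right) / (left + right)

print(laterality_index(420.0, 180.0))   # 0.4, i.e. moderately left-lateralized
```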


Transcranial Magnetic Stimulation to the Occipital Place Area Biases Gaze During Scene Viewing.

  • George L Malcolm‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

We can understand viewed scenes and extract task-relevant information within a few hundred milliseconds. This process is generally supported by three cortical regions that show selectivity for scene images: parahippocampal place area (PPA), medial place area (MPA) and occipital place area (OPA). Prior studies have focused on the visual information each region is responsive to, usually within the context of recognition or navigation. Here, we move beyond these tasks to investigate gaze allocation during scene viewing. Eye movements rely on a scene's visual representation to direct saccades, and thus foveal vision. In particular, we focus on the contribution of OPA, which: (i) is located in occipito-parietal cortex, likely feeding information into parts of the dorsal pathway critical for eye movements; and (ii) contains strong retinotopic representations of the contralateral visual field. Participants viewed scene images for 1034 ms while their eye movements were recorded. On half of the trials, a 500 ms train of five transcranial magnetic stimulation (TMS) pulses was applied to the participant's cortex, starting at scene onset. TMS was applied to the right hemisphere over either OPA or the occipital face area (OFA), which also exhibits a contralateral visual field bias but shows selectivity for face stimuli. Participants generally made an overall left-to-right, top-to-bottom pattern of eye movements across all conditions. When TMS was applied to OPA, there was an increased saccade latency for eye movements toward the contralateral relative to the ipsilateral visual field after the final TMS pulse (400 ms). Additionally, TMS to the OPA biased fixation positions away from the contralateral side of the scene compared to the control condition, while the OFA group showed no such effect. There was no effect on horizontal saccade amplitudes. These combined results suggest that OPA might serve to represent local scene information that can then be utilized by visuomotor control networks to guide gaze allocation in natural scenes.


Egocentric Navigation Abilities Predict Episodic Memory Performance.

  • Giorgia Committeri‎ et al.
  • Frontiers in human neuroscience‎
  • 2020‎

The medial temporal lobe supports both navigation and declarative memory. On this basis, a theory of phylogenetic continuity has been proposed according to which episodic and semantic memories have evolved from egocentric (e.g., path integration) and allocentric (e.g., map-based) navigation in the physical world, respectively. Here, we explored the behavioral significance of this neurophysiological model by investigating the relationship between the performance of healthy individuals on a path integration and an episodic memory task. We investigated the path integration performance through a proprioceptive Triangle Completion Task and assessed episodic memory through a picture recognition task. We evaluated the specificity of the association between performance in these two tasks by including in the study design a verbal semantic memory task. We also controlled for the effect of attention and working memory and tested the robustness of the results by including alternative versions of the path integration and semantic memory tasks. We found a significant positive correlation between performance on the path integration and the episodic, but not semantic, memory tasks. This pattern of correlation was not explained by general cognitive abilities and also persisted when considering a visual path integration task and a non-verbal semantic memory task. Importantly, a cross-validation analysis showed that participants' egocentric navigation abilities reliably predicted episodic memory performance. Altogether, our findings support the hypothesis of a phylogenetic continuity between egocentric navigation and episodic memory and pave the way for future research on the potential causal role of egocentric navigation on multiple forms of episodic memory.


Predicting Performances on Processing and Memorizing East Asian Faces from Brain Activities in Face-Selective Regions: A Neurocomputational Approach.

  • Gary C-W Shyi‎ et al.
  • Frontiers in human neuroscience‎
  • 2020‎

For more than two decades, a network of face-selective brain regions has been identified as the core system for face processing, including the occipital face area (OFA), fusiform face area (FFA), and posterior region of the superior temporal sulcus (pSTS). Moreover, recent studies have suggested that the ventral route of face processing and memory should end at the anterior temporal lobes (i.e., vATLs), which may play an important role bridging face perception and face memory. It is not entirely clear, however, to what extent neural activities in these face-selective regions can effectively predict behavioral performance on tasks that are frequently used to investigate face processing, or on a face memory test that requires recognition beyond variation in pose and lighting, especially when non-Caucasian East Asian faces are involved. To address these questions, we first identified the core face network during a functional scan by asking participants to perform a one-back task while viewing either static images or dynamic videos. Dynamic localizers were effective in identifying regions of interest (ROIs) in the core face-processing system. We then correlated the brain activities of core ROIs with performances on face-processing tasks (component, configural, and composite) and a face memory test (the Taiwanese Face Memory Test, TFMT) and found evidence for limited predictability. We next adopted a multi-voxel pattern analysis (MVPA) approach to further explore the predictability of face-selective brain regions on TFMT performance and found evidence suggesting that a basic visual processing area such as the calcarine cortex and an area for structural face processing such as OFA may play an even greater role in memorizing faces. Implications regarding how differences in processing demands between behavioral and neuroimaging tasks and cultural specificity in face-processing and memory strategies among participants may have contributed to the findings reported here are discussed.
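
Predicting a behavioral score such as TFMT performance from ROI activity patterns, as described above, is commonly done with cross-validated regression. The sketch below shows that generic approach with scikit-learn on placeholder data; the ridge model, feature dimensions, and subject count are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# X: (n_subjects, n_features) ROI activity patterns (e.g., calcarine or OFA voxels);
# y: (n_subjects,) behavioral scores such as TFMT accuracy. Placeholder data below.
rng = np.random.default_rng(4)
X = rng.normal(size=(40, 120))
y = rng.normal(size=40)

predicted = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)   # out-of-sample predictions
r, p = pearsonr(predicted, y)          # how well predicted scores track observed behavior
print(r, p)
```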


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words (e.g., cortex AND EEG)
    3. You can add "-" to a term to exclude results containing that term (e.g., Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here you can save any searches you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year (chart showing the number of publications per year).