Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1-20 of 305.

Automatic processing of abstract musical tonality.

  • Inyong Choi‎ et al.
  • Frontiers in human neuroscience‎
  • 2014‎

Music perception builds on expectancy in harmony, melody, and rhythm. Neural responses to violations of such expectations are observed in event-related potentials (ERPs) measured using electroencephalography. Most previous ERP studies demonstrating sensitivity to musical violations used stimuli that were temporally regular and musically structured, with less-frequent deviant events that differed from a specific expectation in some feature such as pitch, harmony, or rhythm. Here, we asked whether expectancies about the Western musical scale are strong enough to elicit ERP deviance components. Specifically, we explored whether pitches inconsistent with an established scale context elicit deviant components even though equally rare pitches that fit into the established context do not, and even when their timing is unpredictable. We used Markov chains to create temporally irregular pseudo-random sequences of notes chosen from one of two diatonic scales. The Markov pitch-transition probabilities resulted in sequences that favored notes within the scale, but that lacked clear melodic, harmonic, or rhythmic structure. At random positions, the sequence contained probe tones that were either within the established scale or out of key. Our subjects ignored the note sequences, watching a self-selected silent movie with subtitles. Compared to the in-key probes, the out-of-key probes elicited a significantly larger P2 ERP component. Results show that random note sequences establish expectations of the "first-order" statistical property of musical key, even in listeners not actively monitoring the sequences.
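The Markov-chain stimulus generation described in this abstract can be sketched in a few lines of Python. Everything below is an illustrative assumption, not the study's actual design: the 8:1 weighting toward in-key notes and the no-immediate-repeat rule are invented so that the transition genuinely depends on the current note while still favoring the scale.

```python
import random

# Hypothetical diatonic scale (C-major pitch classes 0-11); the study's real
# pitch-transition probabilities are not published here, so these weights
# (8 for in-key, 1 for out-of-key) are purely illustrative.
IN_KEY = [0, 2, 4, 5, 7, 9, 11]          # C D E F G A B
ALL_PITCH_CLASSES = list(range(12))

def next_note(current, in_key_weight=8, out_key_weight=1):
    """First-order Markov step: weight candidates by key membership,
    and forbid repeating the current note so the step depends on it."""
    weights = [0 if p == current
               else (in_key_weight if p in IN_KEY else out_key_weight)
               for p in ALL_PITCH_CLASSES]
    return random.choices(ALL_PITCH_CLASSES, weights=weights, k=1)[0]

def generate_sequence(length, seed=None):
    random.seed(seed)
    seq = [random.choice(IN_KEY)]
    for _ in range(length - 1):
        seq.append(next_note(seq[-1]))
    return seq

seq = generate_sequence(200, seed=1)
in_key_fraction = sum(p in IN_KEY for p in seq) / len(seq)
```

With an 8:1 weighting, roughly nine out of ten notes fall within the scale, which mirrors the abstract's description of sequences that favor in-scale notes while lacking melodic or rhythmic structure.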


Automatic processing of unattended object features by functional connectivity.

  • Katja M Mayer‎ et al.
  • Frontiers in human neuroscience‎
  • 2013‎

Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to shape, color, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioral level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localized several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction (PPI) analyses revealed that several functionally-localized areas showed significant positive interactions with areas in occipito-temporal and frontal areas that depended on block type. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.


The Neural Signatures of Processing Semantic End Values in Automatic Number Comparisons.

  • Michal Pinhas‎ et al.
  • Frontiers in human neuroscience‎
  • 2015‎

The brain activity associated with processing numerical end values has received limited research attention. The present study explored the neural correlates associated with processing semantic end values under conditions of automatic number processing. Event-related potentials (ERPs) were recorded while participants performed the numerical Stroop task, in which they were asked to compare the physical size of pairs of numbers while ignoring their numerical values. The smallest end value in the set, a task-irrelevant factor, was manipulated between participant groups. We focused on the processing of the lower end values of 0 and 1 because these numbers were found to be automatically tagged as the "smallest." Behavioral results showed that the size congruity effect was modulated by the presence of the smallest end value in the pair. ERP data revealed a spatially extended centro-parieto-occipital P3 that was enhanced for congruent versus incongruent trials. Importantly, over centro-parietal sites, the P3 congruity effect (congruent minus incongruent) was larger for pairs containing the smallest end value than for pairs containing non-smallest values. These differences in the congruency effect were localized to the precuneus. The presence of an end value within the pair also modulated P3 latency. Our results provide the first neural evidence for the encoding of numerical end values. They further demonstrate that the use of end values as anchors is a primary aspect of processing symbolic numerical information.


ERPs Differentially Reflect Automatic and Deliberate Processing of the Functional Manipulability of Objects.

  • Christopher R Madan‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

It is known that the functional properties of an object can interact with perceptual, cognitive, and motor processes. Previously we have found that a between-subjects manipulation of judgment instructions resulted in different manipulability-related memory biases in an incidental memory test. To better understand this effect we recorded electroencephalography (EEG) while participants made judgments about images of objects that were either high or low in functional manipulability (e.g., hammer vs. ladder). Using a between-subjects design, participants judged whether they had seen the object recently (Personal Experience), or could manipulate the object using their hand (Functionality). We focused on the P300 and slow-wave event-related potentials (ERPs) as reflections of attentional allocation. In both groups, we observed higher P300 and slow wave amplitudes for high-manipulability objects at electrodes Pz and C3. As P300 is thought to reflect bottom-up attentional processes, this may suggest that the processing of high-manipulability objects recruited more attentional resources. Additionally, the P300 effect was greater in the Functionality group. A more complex pattern was observed at electrode C3 during slow wave: processing the high-manipulability objects in the Functionality instruction evoked a more positive slow wave than in the other three conditions, likely related to motor simulation processes. These data provide neural evidence that effects of manipulability on stimulus processing are further mediated by automatic vs. deliberate motor-related processing.


Automatic Analysis of EEGs Using Big Data and Hybrid Deep Learning Architectures.

  • Meysam Golmohammadi‎ et al.
  • Frontiers in human neuroscience‎
  • 2019‎

Brain monitoring combined with automatic analysis of EEGs provides a clinical decision support tool that can reduce time to diagnosis and assist clinicians in real-time monitoring applications (e.g., neurological intensive care units). Clinicians have indicated that a sensitivity of 95% with a false alarm rate below 5% is the minimum requirement for clinical acceptance. In this study, a high-performance automated EEG analysis system based on principles of machine learning and big data is proposed. This hybrid architecture integrates hidden Markov models (HMMs) for sequential decoding of EEG events with deep learning-based post-processing that incorporates temporal and spatial context. These algorithms are trained and evaluated using the Temple University Hospital EEG Corpus, the largest publicly available corpus of clinical EEG recordings in the world. The system automatically processes EEG records and classifies three patterns of clinical interest in brain activity that might be useful in diagnosing brain disorders: (1) spike and/or sharp waves, (2) generalized periodic epileptiform discharges, and (3) periodic lateralized epileptiform discharges. It also classifies three patterns used to model the background EEG activity: (1) eye movement, (2) artifacts, and (3) background. Our approach delivers a sensitivity above 90% while maintaining a false alarm rate below 5%. This low false alarm rate is critical for any spike detection application.
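The HMM sequential-decoding stage mentioned in this abstract can be illustrated with a toy Viterbi decoder. This is a hypothetical two-state sketch (background vs. spike, with EEG amplitude discretized to 0 = low, 1 = high); all probabilities are invented for illustration and are not the models trained on the TUH corpus.

```python
import math

# Hypothetical two-state HMM over discretized EEG amplitudes.
states = ["background", "spike"]
log_init  = {"background": math.log(0.9), "spike": math.log(0.1)}
log_trans = {"background": {"background": math.log(0.9), "spike": math.log(0.1)},
             "spike":      {"background": math.log(0.2), "spike": math.log(0.8)}}
log_emit  = {"background": {0: math.log(0.9), 1: math.log(0.1)},
             "spike":      {0: math.log(0.1), 1: math.log(0.9)}}

def viterbi(observations):
    """Most likely state sequence for a series of discrete observations."""
    v = [{s: log_init[s] + log_emit[s][observations[0]] for s in states}]
    back = []
    for obs in observations[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] + log_trans[p][s])
            col[s] = v[-1][prev] + log_trans[prev][s] + log_emit[s][obs]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    # Trace back the best path from the most likely final state.
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# A run of high-amplitude samples in the middle is decoded as a spike event.
labels = viterbi([0, 0, 1, 1, 1, 0, 0])
# labels == ["background", "background", "spike", "spike", "spike",
#            "background", "background"]
```

The deep learning post-processing in the actual system would then refine such sequential labels using temporal and spatial context across channels.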


Automatic Detection of Orientation Contrast Occurs at Early but Not Earliest Stages of Visual Cortical Processing in Humans.

  • Yanfen Zhen‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Orientation contrast is formed when some elements orient differently from their surroundings. Although orientation contrast can be processed in the absence of top-down attention, the underlying neural mechanism for this automatic processing in humans is controversial. In particular, whether automatic detection of orientation contrast occurs at the initial feedforward stage in the primary visual cortex (i.e., V1) remains unclear. Here, we used event-related potentials (ERPs) to examine the automatic processing of orientation contrast in humans. In three experiments, participants completed a task at fixation while orientation contrasts were presented in the periphery, either in the upper visual field (UVF) or the lower visual field (LVF). All experiments showed significant positive potentials evoked by orientation contrasts over occipital areas within 100 ms after stimulus onset. These contrast effects occurred 10-20 ms later than the C1 components evoked by identically located abrupt-onset stimuli, which index the initial feedforward activity in V1. Compared with those in the UVF, orientation contrasts in the LVF evoked earlier and stronger activity, probably reflecting an LVF advantage in the processing of orientation contrast. Even when orientation contrasts were rendered almost invisible by backward masking (in Experiment 2), the early contrast effect in the LVF was not disrupted. These findings imply that automatic processing of orientation contrast can occur at early visual cortical processing stages slightly later than the initial feedforward processing in human V1; such automatic processing may involve either recurrent processing in V1 or feedforward processing in early extrastriate visual cortex.
Highlights

  • We examined the earliest automatic processing of orientation contrast in humans with ERPs.
  • Significant orientation contrast effects started within 100 ms in early visual areas.
  • The earliest orientation contrast effect occurred later than the C1 evoked by abrupt onset stimuli.
  • The earliest orientation contrast effect was independent of top-down attention and awareness.
  • Automatic detection of orientation contrast arises slightly after the initial feedforward processing in V1.


Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

  • Panteleimon Chriskos‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming, and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood, SL, and Relative Wavelet Entropy, RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed. These are based on bivariate features, which provide a functional overview of the brain network, in contrast to most proposed methods, which rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved by training classifiers on the presented features, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains accuracy rates above 90% against ground truth from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
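The bivariate, connectivity-based feature idea can be illustrated with a much simpler stand-in: a per-epoch channel-correlation matrix flattened into one feature vector. Note the paper's actual features are Synchronization Likelihood and Relative Wavelet Entropy, not plain Pearson correlation; the toy epoch below is synthetic.

```python
import math
import random

random.seed(3)

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_features(epoch):
    """Flatten the upper triangle of the channel-by-channel correlation
    matrix into a single bivariate feature vector for the epoch."""
    n_ch = len(epoch)
    return [pearson(epoch[i], epoch[j])
            for i in range(n_ch) for j in range(i + 1, n_ch)]

# Toy epoch: 4 channels x 100 samples; channels 0 and 1 share a common drive,
# so their pairwise feature should be high while the others stay near zero.
common = [random.gauss(0, 1) for _ in range(100)]
epoch = [[c + random.gauss(0, 0.3) for c in common],
         [c + random.gauss(0, 0.3) for c in common],
         [random.gauss(0, 1) for _ in range(100)],
         [random.gauss(0, 1) for _ in range(100)]]

features = connectivity_features(epoch)   # 6 pairwise values for 4 channels
```

A classifier trained on such vectors, one per epoch, is the shape of the pipeline the abstract describes, with SL/RWE replacing correlation as the pairwise measure.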


Sensory Prediction of Limb Movement Is Critical for Automatic Online Control.

  • Anne-Emmanuelle Priot‎ et al.
  • Frontiers in human neuroscience‎
  • 2020‎

Fast, online control of movement is an essential component of human motor skills, as it allows automatic correction of inaccurate planning. The present study explores the role of two types of concurrent signals in error correction: predicted visual reafferences coming from an internal representation of the hand, and actual visual feedback from the hand. While the role of sensory feedback in these corrections is well-established, much less is known about sensory prediction. The relative contributions of these two types of signals remain a subject of debate, as they are naturally interconnected. We address the issue in a study that compares online correction of an artificially induced, undetected planning error. Two conditions are tested, which only differ with respect to the accuracy of predicted visual reafferences. In the first, "Prism" experiment, a planning error is introduced by prisms that laterally displace the seen hand prior to hand movement onset. The prism-induced conflict between visual and proprioceptive inputs of the hand also generates an erroneous prediction of visual reafferences of the moving hand. In the second, "Jump" experiment, a planning error is introduced by a jump in the target position, during the orienting saccade, prior to hand movement onset. In the latter condition, predicted reafferences of the hand remained intact. In both experiments, after hand movement onset, the hand was either visible or hidden, which enabled us to manipulate the presence (or absence) of visual feedback during movement execution. The Prism experiment highlighted late and reduced correction of the planning error, even when natural visual feedback of the moving hand was available. In the Jump experiment, early and automatic corrections of the planning error were observed, even in the absence of visual feedback from the moving hand. Therefore, when predicted reafferences were accurate (the Jump experiment), visual feedback was processed rapidly and automatically. 
When they were erroneous (the Prism experiment), the same visual feedback was less efficient, and required voluntary, and late, control. Our study clearly demonstrates that in natural environments, reliable prediction is critical in the preprocessing of visual feedback, for fast and accurate movement.


Hybrid ICA-Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals.

  • Malik M Naeem Mannan‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record electrical activity of the brain. However, EEG signals are difficult to analyze because of contamination by ocular artifacts, which potentially leads to misleading conclusions. It is also well established that ocular artifacts reduce the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before analyzing EEG signals for applications such as BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression, and higher-order statistics is proposed to identify and eliminate artifactual activity from EEG data. We used simulated, experimental, and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removing ocular activity from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between the reconstructed and original EEG.
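The regression component of such a hybrid can be shown in isolation: estimate the propagation coefficient of the EOG channel into an EEG channel by least squares, then subtract the scaled EOG. This is a minimal single-channel sketch on synthetic data, not the paper's full ICA-regression pipeline, and the propagation factor of 0.7 is an invented example value.

```python
import random

random.seed(0)

# Synthetic data: a "neural" signal plus a scaled ocular (EOG) artifact.
n = 500
eog = [random.gauss(0.0, 1.0) for _ in range(n)]        # recorded EOG channel
neural = [random.gauss(0.0, 0.2) for _ in range(n)]     # true brain activity
true_b = 0.7                                            # assumed propagation factor
eeg = [neural[i] + true_b * eog[i] for i in range(n)]   # contaminated EEG

def regress_out(eeg, eog):
    """Least-squares estimate of the EOG-to-EEG propagation coefficient,
    then subtraction of the scaled EOG from the EEG channel."""
    mx = sum(eog) / len(eog)
    my = sum(eeg) / len(eeg)
    cov = sum((x - mx) * (y - my) for x, y in zip(eog, eeg))
    var = sum((x - mx) ** 2 for x in eog)
    b = cov / var
    return b, [y - b * x for x, y in zip(eog, eeg)]

b_hat, cleaned = regress_out(eeg, eog)
```

In the hybrid framework, ICA first isolates candidate ocular components (flagged via higher-order statistics), and regression of this kind removes their contribution rather than discarding whole components, which is what preserves the underlying neuronal signal.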


Automatic Removal of Physiological Artifacts in EEG: The Optimized Fingerprint Method for Sports Science Applications.

  • David B Stone‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Data contamination due to physiological artifacts such as those generated by eyeblinks, eye movements, and muscle activity continues to be a central concern in the acquisition and analysis of electroencephalographic (EEG) data. This issue is further compounded in EEG sports science applications, where the presence of artifacts is notoriously difficult to control because the behaviors that generate these interferences are often the behaviors under investigation. Therefore, there is a need to develop effective and efficient methods to identify physiological artifacts in EEG recordings during sports applications so that they can be isolated from cerebral activity related to the activities of interest. We have developed an EEG artifact detection model, the Fingerprint Method, which identifies different spatial, temporal, spectral, and statistical features indicative of physiological artifacts and uses these features to automatically classify artifactual independent components in EEG based on a machine learning approach. Here, we optimized our method using artifact-rich training data and a procedure to determine which features were best suited to identify eyeblinks, eye movements, and muscle artifacts. We then applied our model to an experimental dataset collected during endurance cycling. Results reveal that unique sets of features are suitable for the detection of distinct types of artifacts and that the Optimized Fingerprint Method was able to correctly identify over 90% of the artifactual components with physiological origin present in the experimental data. These results represent a significant advancement in the search for effective means to address artifact contamination in EEG sports science applications.


Female Advantage in Automatic Change Detection of Facial Expressions During a Happy-Neutral Context: An ERP Study.

  • Qi Li‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Sex differences in conscious emotional processing represent a well-known phenomenon. The present event-related potential (ERP) study examined sex differences in the automatic change detection of facial expressions, as indexed by the visual mismatch negativity (vMMN). As paid volunteers, 19 females and 19 males were presented peripherally with a passive emotional oddball sequence in a happy-neutral context and a fearful-neutral context while they performed a visual detection task in the center of the visual field. Both females and males showed comparable accuracy rates and reaction times in the primary detection task. Females relative to males showed a larger P1 for all facial expressions, as well as a more negative N170 and a less positive P2 for deviants vs. standards. During the early stage (100-200 ms), females displayed more negative vMMN responses to both happy and neutral faces than males over the occipito-temporal and fronto-central regions. During the late stage (250-350 ms), females relative to males exhibited more negative vMMN responses to both happy and neutral faces over the fronto-central and right occipito-temporal regions, but only more negative vMMN responses to happy faces over the left occipito-temporal region. In contrast, no sex differences were found for vMMN responses in the fearful-neutral context. These findings indicated a female advantage dynamically in the automatic neural processing of facial expressions during a happy-neutral context.


Localization of Impaired Kinesthetic Processing Post-stroke.

  • Jeffrey M Kenzie‎ et al.
  • Frontiers in human neuroscience‎
  • 2016‎

Kinesthesia is our sense of limb motion, and allows us to gauge the speed, direction, and amplitude of our movements. Over half of stroke survivors have significant impairments in kinesthesia, which leads to greatly reduced recovery and function in everyday activities. Despite the high reported incidence of kinesthetic deficits after stroke, very little is known about how damage beyond just primary somatosensory areas affects kinesthesia. Stroke provides an ideal model to examine structure-function relationships specific to kinesthetic processing, by comparing lesion location with behavioral impairment. To examine this relationship, we performed voxel-based lesion-symptom mapping and statistical region of interest analyses on a large sample of sub-acute stroke subjects (N = 142) and compared kinesthetic performance with stroke lesion location. Subjects with first unilateral, ischemic stroke underwent neuroimaging and a comprehensive robotic kinesthetic assessment (~9 days post-stroke). The robotic exoskeleton measured subjects' ability to perform a kinesthetic mirror-matching task of the upper limbs without vision. The robot moved the stroke-affected arm and subjects' mirror-matched the movement with the unaffected arm. We found that lesions both within and outside primary somatosensory cortex were associated with significant kinesthetic impairments. Further, sub-components of kinesthesia were associated with different lesion locations. Impairments in speed perception were primarily associated with lesions to the right post-central and supramarginal gyri whereas impairments in amplitude of movement perception were primarily associated with lesions in the right pre-central gyrus, anterior insula, and superior temporal gyrus. Impairments in perception of movement direction were associated with lesions to bilateral post-central and supramarginal gyri, right superior temporal gyrus and parietal operculum. 
All measures of impairment shared a common association with damage to the right supramarginal gyrus. These results suggest that processing of kinesthetic information occurs beyond traditional sensorimotor areas. Additionally, this dissociation between kinesthetic sub-components may indicate specialized processing in these brain areas that form a larger distributed network.


Repetition enhancement and perceptual processing of visual word form.

  • Karine Lebreton‎ et al.
  • Frontiers in human neuroscience‎
  • 2012‎

The current study investigated the cerebral basis of word perceptual repetition priming with fMRI during a letter detection task that manipulated the familiarity of the perceptual word form and the number of repetitions. Some neuroimaging studies have reported increases, instead of decreases, in brain activations (called "repetition enhancement") associated with repetition priming of unfamiliar stimuli, which have been interpreted as the creation of new perceptual representations for unfamiliar items. According to this interpretation, several repetitions of unfamiliar items would be necessary for repetition priming to occur, a hypothesis not explicitly tested in prior studies. In the present study, using a letter detection task on briefly flashed words, we explored the effect of familiarity on the brain response for word visual perceptual priming using words in both usual (i.e., familiar) and unusual (i.e., unfamiliar) fonts, presented up to four times for stimuli in the unusual font. This allows potential changes in the brain responses for unfamiliar items to be assessed over several repetitions, i.e., a shift from repetition enhancement to suppression. Our results reveal significant increases in activity in bilateral occipital areas related to repetition of words in both the familiar and unfamiliar conditions. Our findings support the sharpening hypothesis, showing a lack of cerebral economy with repetition when the task requires the processing of all word features, whatever the familiarity of the material, and emphasize the influence of the nature of stimulus processing on its neuronal manifestation.


Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect.

  • Keith A Bush‎ et al.
  • Frontiers in human neuroscience‎
  • 2017‎

Recent evidence suggests that emotions have a distributed neural representation, which has significant implications for our understanding of the mechanisms underlying emotion regulation and dysregulation as well as the potential targets available for neuromodulation-based emotion therapeutics. This work adds to this evidence by testing the distribution of neural representations underlying the affective dimensions of valence and arousal using representational models that vary in both the degree and the nature of their distribution. We used multi-voxel pattern classification (MVPC) to identify whole-brain patterns of functional magnetic resonance imaging (fMRI)-derived neural activations that reliably predicted dimensional properties of affect (valence and arousal) for visual stimuli viewed by a normative sample (n = 32) of demographically diverse, healthy adults. Inter-subject leave-one-out cross-validation showed whole-brain MVPC significantly predicted (p < 0.001) binarized normative ratings of valence (positive vs. negative, 59% accuracy) and arousal (high vs. low, 56% accuracy). We also conducted group-level univariate general linear modeling (GLM) analyses to identify brain regions whose response significantly differed for the contrasts of positive versus negative valence or high versus low arousal. Multivoxel pattern classifiers using voxels drawn from all identified regions of interest (all-ROIs) exhibited mixed performance; arousal was predicted significantly better than chance but worse than the whole-brain classifier, whereas valence was not predicted significantly better than chance. Multivoxel classifiers derived using individual ROIs generally performed no better than chance. Although performance of the all-ROI classifier improved with larger ROIs (generated by relaxing the clustering threshold), performance was still poorer than the whole-brain classifier. 
These findings support a highly distributed model of neural processing for the affective dimensions of valence and arousal. Finally, joint error analyses of the MVPC hyperplanes encoding valence and arousal identified regions within the dimensional affect space where multivoxel classifiers exhibited the greatest difficulty encoding brain states: specifically, stimuli of moderate arousal and high or low valence. In conclusion, we highlight new directions for characterizing affective processing for mechanistic and therapeutic applications in affective neuroscience.
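The inter-subject leave-one-out protocol used for the MVPC analyses can be sketched with toy data. The study classified whole-brain fMRI activation patterns with multi-voxel pattern classifiers; here a simple nearest-centroid classifier on 5 synthetic "voxel" features stands in for that, purely to illustrate holding out one subject per fold.

```python
import random

random.seed(42)

# Toy stand-in for multi-voxel patterns: each "subject" contributes samples
# from two classes (e.g., positive vs. negative valence) whose feature means
# differ slightly. All values here are invented for illustration.
def make_subject(label_means, samples_per_class=10, n_features=5):
    data = []
    for label, mean in label_means.items():
        for _ in range(samples_per_class):
            data.append((label, [random.gauss(mean, 1.0)
                                 for _ in range(n_features)]))
    return data

subjects = [make_subject({"positive": 0.5, "negative": -0.5}) for _ in range(8)]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Nearest-centroid rule: pick the class with the closest mean pattern."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Inter-subject leave-one-out: train on all subjects but one, test on the rest.
correct = total = 0
for held_out in range(len(subjects)):
    train = [row for s, subj in enumerate(subjects) if s != held_out
             for row in subj]
    centroids = {lab: centroid([x for l, x in train if l == lab])
                 for lab in ("positive", "negative")}
    for label, x in subjects[held_out]:
        correct += classify(x, centroids) == label
        total += 1
accuracy = correct / total
```

Because every test sample comes from a subject never seen in training, above-chance accuracy reflects a representation that generalizes across individuals, which is the point of the cross-validation scheme the abstract describes.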


Age-related dissociation of sensory and decision-based auditory motion processing.

  • Alexandra A Ludwig‎ et al.
  • Frontiers in human neuroscience‎
  • 2012‎

Studies on the maturation of auditory motion processing in children have yielded inconsistent reports. The present study combines subjective and objective measurements to investigate how the auditory perceptual abilities of children change during development and whether these changes are paralleled by changes in the event-related brain potential (ERP). We employed the mismatch negativity (MMN) to determine maturational changes in the discrimination of interaural time differences (ITDs) that generate lateralized moving auditory percepts. MMNs were elicited in children, teenagers, and adults, using a small and a large ITD at stimulus offset with respect to each subject's discrimination threshold. In adults and teenagers, large deviants elicited prominent MMNs, whereas small deviants at the behavioral threshold elicited only a marginal or no MMN. In contrast, pronounced MMNs for both deviant sizes were found in children. Behaviorally, however, most of the children showed higher discrimination thresholds than teens and adults. Although automatic ITD detection is functional, active discrimination is still limited in children. The lack of MMN deviance dependency in children suggests that, unlike in teenagers and adults, neural signatures of automatic auditory motion processing do not mirror discrimination abilities. The study thus advances our understanding of children's central auditory development.


The Force of Numbers: Investigating Manual Signatures of Embodied Number Processing.

  • Alex Miklashevsky‎ et al.
  • Frontiers in human neuroscience‎
  • 2020‎

The study has two objectives: (1) to introduce grip force recording as a new technique for studying embodied numerical processing; and (2) to demonstrate how three competing accounts of numerical magnitude representation can be tested using this new technique: the Mental Number Line (MNL), A Theory of Magnitude (ATOM), and the Embodied Cognition (finger counting-based) account. While 26 healthy adults processed visually presented single digits in a go/no-go n-back paradigm, their passive holding forces for two small sensors were recorded in both hands. Spontaneous and unconscious grip force changes related to number magnitude occurred in the left hand as early as 100-140 ms after stimulus presentation and continued systematically. Our results support a two-step model of number processing in which an initial stage is related to the automatic activation of all stimulus properties, whereas a later stage consists of deeper conscious processing of the stimulus. This interpretation generalizes previous work with linguistic stimuli and elaborates the timeline of embodied cognition. We hope that the use of grip force recording will advance the field of numerical cognition research.


Time course of information processing in visual and haptic object classification.

  • Jasna Martinovic‎ et al.
  • Frontiers in human neuroscience‎
  • 2012‎

Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually serially accumulates information from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable from unfamiliar, unnamable objects both visually and haptically. Analyses focused on the evoked and total fronto-central theta-band (5-7 Hz; a marker of working memory) and the occipital upper alpha-band (10-12 Hz; a marker of perceptual processing) locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on common representations with vision but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.


Automatic Detection of Focal Cortical Dysplasia Type II in MRI: Is the Application of Surface-Based Morphometry and Machine Learning Promising?

  • Zohreh Ganji‎ et al.
  • Frontiers in human neuroscience‎
  • 2021‎

Focal cortical dysplasia (FCD) is a type of malformation of cortical development and one of the leading causes of drug-resistant epilepsy. Detecting lesions on structural MRI improves postoperative outcomes. Advances in quantitative algorithms have increased the identification of FCD lesions. However, because lesions vary greatly in size, shape, and location across patients, and because objective lesion diagnosis is time-consuming and depends on individual interpretation, sensitive approaches are required to address the challenge of lesion diagnosis. In this research, an FCD computer-aided diagnostic system that improves on existing methods is presented.
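The surface-based morphometry approach named in the title can be pictured as extracting per-vertex features from a cortical surface reconstruction and feeding them to a classifier. Below is a minimal, illustrative sketch on synthetic data; the feature set (cortical thickness, gray-white blurring, curvature), the feature values, and the simple logistic-regression classifier are assumptions for illustration, not the authors' pipeline.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def make_vertex(lesional):
    """Synthetic per-vertex morphometry features (illustrative values only)."""
    thickness = random.gauss(3.5 if lesional else 2.5, 0.3)   # cortical thickness (mm)
    blurring  = random.gauss(0.8 if lesional else 0.3, 0.15)  # gray-white blurring index
    curvature = random.gauss(0.0, 0.1)                        # mean curvature (uninformative)
    return [thickness, blurring, curvature], 1 if lesional else 0

data = [make_vertex(i % 4 == 0) for i in range(400)]  # ~25% "lesional" vertices

# Train a tiny logistic-regression classifier with stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(300):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y                                  # gradient of log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

correct = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
    for x, y in data
)
accuracy = correct / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

In an actual pipeline, the features would come from a surface reconstruction tool such as FreeSurfer, the classifier would be validated on held-out patients rather than training data, and per-vertex predictions would be clustered into candidate lesions.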


Parallel processing in the brain's visual form system: an fMRI study.

  • Yoshihito Shigihara‎ et al.
  • Frontiers in human neuroscience‎
  • 2014‎

We here extend and complement our earlier time-based, magneto-encephalographic (MEG) study of form processing by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhombuses) constituted from the same elements (lines). Our results show that all three categories of form activate the three visual areas with which we were principally concerned (V1-V3), with angles producing the strongest and rhombuses the weakest activity in each. The difference between the activity produced by angles and that produced by rhombuses was significant, the difference between lines and rhombuses showed a trend toward significance, and the difference between lines and angles was not significant. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy.


Children With Reading Difficulty Rely on Unimodal Neural Processing for Phonemic Awareness.

  • Melissa Randazzo‎ et al.
  • Frontiers in human neuroscience‎
  • 2019‎

Phonological awareness skills in children with reading difficulty (RD) may reflect impaired automatic integration of orthographic and phonological representations. However, little is known about the neural mechanisms underlying phonological awareness in children with RD. Eighteen children with RD, ages 9-13, participated in a functional magnetic resonance imaging (fMRI) study designed to assess the relationship of two constructs of phonological awareness, phoneme synthesis and phoneme analysis, with crossmodal rhyme judgment. Participants completed a rhyme judgment task presented in two modality conditions: unimodal (auditory-only) and crossmodal (audiovisual). Measures of phonological awareness were correlated with unimodal, but not crossmodal, lexical processing. Moreover, these relationships were found only in unisensory brain regions, and not in multisensory brain areas. The results of this study suggest that children with RD rely on unimodal representations and unisensory brain areas, and provide insight into the role of phonemic awareness in mapping between auditory and visual modalities during literacy acquisition.

