Searching across hundreds of databases


This service searches exclusively for literature that cites research resources. Please be aware that the searchable corpus is limited to documents containing RRIDs and does not include all open-access literature.
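For orientation, an RRID (Research Resource Identifier) is a persistent identifier cited in a paper's text. The formats below use real prefixes but hypothetical accession numbers, shown only as an illustration: an antibody might be cited as RRID:AB_1234567 and a software tool as RRID:SCR_123456. A paper appears in these results only if its full text contains at least one such identifier.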


Page 1: showing papers 1-20 of 29.

Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar.

  • Emma Holmes et al.
  • NeuroImage
  • 2021

When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar compared to unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar; effectively, this shows a cortical signal-to-noise ratio (SNR) enhancement for familiar voices. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex, and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.


Motor speech perception modulates the cortical language areas.

  • Julius Fridriksson et al.
  • NeuroImage
  • 2008

Traditionally, the left frontal and parietal lobes have been associated with language production while regions in the temporal lobe are seen as crucial for language comprehension. However, recent evidence suggests that the classical language areas constitute an integrated network where each area plays a crucial role both in speech production and perception. We used functional MRI to examine whether observing speech motor movements (without auditory speech) relative to non-speech motor movements preferentially activates the cortical speech areas. Furthermore, we tested whether the activation in these regions was modulated by task difficulty. This dissociates areas that are actively involved in speech perception from regions that show an obligatory activation in response to speech movements (e.g., areas that automatically activate in preparation for a motoric response). Specifically, we hypothesized that regions involved with decoding oral speech would show increasing activation with increasing difficulty. We found that speech movements preferentially activate the frontal and temporal language areas. In contrast, non-speech movements preferentially activate the parietal region. Degraded speech stimuli increased both frontal and parietal lobe activity but did not differentially excite the temporal region. These findings suggest that the frontal language area plays a role in visual speech perception and highlight the differential roles of the classical speech and language areas in processing others' motor speech movements.


Sensorimotor impairment of speech auditory feedback processing in aphasia.

  • Roozbeh Behroozmand et al.
  • NeuroImage
  • 2018

We investigated the brain network involved in speech sensorimotor processing by studying patients with post-stroke aphasia using an altered auditory feedback (AAF) paradigm. We combined lesion-symptom-mapping analysis and behavioral testing to examine the pervasiveness of speech sensorimotor deficits and their relationship with cortical damage. Sixteen participants with aphasia and sixteen neurologically intact individuals completed a speech task under AAF. The task involved producing speech vowel sounds under real-time pitch-shifted auditory feedback alteration. This task provided an objective measure of each individual's ability to compensate for mismatch (error) in speech auditory feedback. Results indicated that compensatory speech responses to AAF were significantly diminished in participants with aphasia compared with controls. We observed that, within the aphasic group, subjects with lower scores on the speech repetition task exhibited a greater degree of diminished responses. Lesion-symptom-mapping analysis revealed that the onset phase (50-150 ms) of diminished AAF responses was predicted by damage to auditory cortical regions within the superior and middle temporal gyrus, whereas the rising phase (150-250 ms) and the peak (250-350 ms) of diminished AAF responses were predicted by damage to the inferior frontal gyrus and supramarginal gyrus areas, respectively. These findings suggest that damage to the auditory, motor, and auditory-motor integration networks is associated with impaired sensorimotor function for speech error processing. We suggest that a sensorimotor integration network, as revealed by brain regions related to temporally specific components of AAF responses, is related to speech processing and specific aspects of speech impairment, notably repetition deficits, in individuals with aphasia.


Neural recruitment for the production of native and novel speech sounds.

  • Dana Moser et al.
  • NeuroImage
  • 2009

Two primary areas of damage have been implicated in apraxia of speech (AOS) based on the time post-stroke: (1) the left inferior frontal gyrus (IFG) in acute patients, and (2) the left anterior insula (aIns) in chronic patients. While AOS is widely characterized as a disorder of motor speech planning, little is known about the specific contributions of each of these regions to speech. The purpose of this study was to investigate cortical activation during speech production with a specific focus on the aIns and the IFG in normal adults. While undergoing sparse fMRI, 30 normal adults completed a 30-minute speech-repetition task consisting of three-syllable nonwords that contained either (a) English (native) syllables or (b) non-English (novel) syllables. When the novel syllable productions were compared to the native syllable productions, greater neural activation was observed in the aIns and IFG, particularly during the first 10 min of the task when novelty was greatest. Although activation in the aIns remained high throughout the task for novel productions, greater activation was clearly demonstrated when the initial 10 min was compared to the final 10 min of the task. These results suggest increased activity within an extensive neural network, including the aIns and IFG, when the motor speech system is taxed, such as during the production of novel speech. We speculate that the amount of left aIns recruitment during speech production may be related to the internal construction of the motor speech unit, such that greater novelty (or lesser automaticity) would place greater demands on this region. The role of the IFG as a storehouse and integrative processor for previously acquired routines is also discussed.


Perceptual warping exposes categorical representations for speech in human brainstem responses.

  • Jared A Carter et al.
  • NeuroImage
  • 2023

The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more/less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differentially depending on stimulus context. Critically, we further show neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically-identical speech stimuli. These findings were not observed in the stimulus acoustics nor in model FFR responses generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin to the effects. Our data reveal FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which might aid understanding by reducing ambiguity inherent to the speech signal.


Attention reinforces human corticofugal system to aid speech perception in noise.

  • Caitlin N Price et al.
  • NeuroImage
  • 2021

Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.


Pre- and post-target cortical processes predict speech-in-noise performance.

  • Subong Kim et al.
  • NeuroImage
  • 2021

Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN in ways that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance. Here, we elucidated several cortical functions involved during a SiN task and their contributions to individual variance using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in left supramarginal gyrus (SMG, BA40; the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, which we term the internal SNR. Listeners with better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target processing to attenuate the neural representation of background noise and post-target processing to extract information from speech sounds.


Differentiation of speech-induced artifacts from physiological high gamma activity in intracranial recordings.

  • Alan Bush et al.
  • NeuroImage
  • 2022

There is great interest in identifying the neurophysiological underpinnings of speech production. Deep brain stimulation (DBS) surgery is unique in that it allows intracranial recordings from both cortical and subcortical regions in patients who are awake and speaking. The quality of these recordings, however, may be affected to various degrees by mechanical forces resulting from speech itself. Here we describe the presence of speech-induced artifacts in local-field potential (LFP) recordings obtained from mapping electrodes, DBS leads, and cortical electrodes. In addition to expected physiological increases in high gamma (60-200 Hz) activity during speech production, time-frequency analysis in many channels revealed a narrowband gamma component that exhibited a pattern similar to that observed in the speech audio spectrogram. This component was present to different degrees in multiple types of neural recordings. We show that this component tracks the fundamental frequency of the participant's voice, correlates with the power spectrum of speech and has coherence with the produced speech audio. A vibration sensor attached to the stereotactic frame recorded speech-induced vibrations with the same pattern observed in the LFPs. No corresponding component was identified in any neural channel during the listening epoch of a syllable repetition task. These observations demonstrate how speech-induced vibrations can create artifacts in the primary frequency band of interest. Identifying and accounting for these artifacts is crucial for establishing the validity and reproducibility of speech-related data obtained from intracranial recordings during DBS surgery.


Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech.

  • Jonathan H Venezia et al.
  • NeuroImage
  • 2016

Sensory information is critical for movement control, both for defining the targets of actions and for providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal, suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.


Inner speech is accompanied by a temporally-precise and content-specific corollary discharge.

  • Bradley N Jack et al.
  • NeuroImage
  • 2019

When we move our articulatory organs to produce overt speech, the brain generates a corollary discharge that acts to suppress the neural and perceptual responses to our speech sounds. Recent research suggests that inner speech, the silent production of words in one's mind, is also accompanied by a corollary discharge. Here, we show that this corollary discharge contains information about the temporal and physical properties of inner speech. In two experiments, participants produced an inner phoneme at a precisely-defined moment in time. An audible phoneme was presented 300 ms before, concurrently with, or 300 ms after participants produced the inner phoneme. We found that producing the inner phoneme attenuated the N1 component of the event-related potential, an index of auditory cortex processing, but only when the inner and audible phonemes occurred concurrently and matched on content. If the audible phoneme was presented before or after the production of the inner phoneme, or if the inner phoneme did not match the content of the audible phoneme, there was no attenuation of the N1. These results suggest that inner speech is accompanied by a temporally-precise and content-specific corollary discharge. We conclude that these results support the notion of a functional equivalence between the neural processes that underlie the production of inner and overt speech, and may provide a platform for identifying inner speech abnormalities in disorders with which they have been putatively associated, such as schizophrenia.


Interaction matters: A perceived social partner alters the neural processing of human speech.

  • Katherine Rice et al.
  • NeuroImage
  • 2016

Mounting evidence suggests that social interaction changes how communicative behaviors (e.g., spoken language, gaze) are processed, but the precise neural bases by which social-interactive context may alter communication remain unknown. Various perspectives suggest that live interactions are more rewarding, more attention-grabbing, or require increased mentalizing (thinking about the thoughts of others). Dissociating between these possibilities is difficult because most extant neuroimaging paradigms examining social interaction have not directly compared live paradigms to conventional "offline" (or recorded) paradigms. We developed a novel fMRI paradigm to assess whether and how an interactive context changes the processing of speech matched in content and vocal characteristics. Participants listened to short vignettes, which contained no reference to people or mental states, believing that some vignettes were prerecorded and that others were presented over a real-time audio-feed by a live social partner. In actuality, all speech was prerecorded. Simply believing that speech was live increased activation in each participant's own mentalizing regions, defined using a functional localizer. Contrasting live to recorded speech did not reveal significant differences in attention or reward regions. Further, higher levels of autistic-like traits were associated with altered neural specialization for live interaction. These results suggest that humans engage in ongoing mentalizing about social partners, even when such mentalizing is not explicitly required, illustrating how social context shapes social cognition. Understanding communication in social context has important implications for typical and atypical social processing, especially for disorders like autism where social difficulties are more acute in live interaction.


Brain anatomy differences in childhood stuttering.

  • Soo-Eun Chang et al.
  • NeuroImage
  • 2008

Stuttering is a developmental speech disorder that occurs in 5% of children, with spontaneous remission in approximately 70% of cases. Previous imaging studies in adults with persistent stuttering found left white matter deficiencies and reversed right-left asymmetries compared to fluent controls. We hypothesized that similar differences might be present in children at risk of stuttering, indicating differences in brain development. Optimized voxel-based morphometry compared gray matter volume (GMV), and diffusion tensor imaging measured fractional anisotropy (FA) in white matter tracts, in three groups: children with persistent stuttering, children recovered from stuttering, and fluent peers. Both the persistent stuttering and recovered groups had reduced GMV relative to normal in speech-relevant regions: the left inferior frontal gyrus and bilateral temporal regions. Reduced FA was found in the left white matter tracts underlying the motor regions for face and larynx in the persistent stuttering group. Contrary to previous findings in adults who stutter, no increases were found in the right hemisphere speech regions of stuttering or recovered children, and no differences in right-left asymmetries. Instead, a risk for childhood stuttering was associated with deficiencies in left gray matter volume, while reduced white matter integrity in the left hemisphere speech system was associated with persistent stuttering. Anatomical increases in right hemisphere structures previously found in adults who stutter may have resulted from a lifetime of stuttering. These findings point to the importance of considering the role of neuroplasticity during development when studying persistent forms of developmental disorders in adults.


Abnormal time course of low beta modulation in non-fluent preschool children: A magnetoencephalographic study of rhythm tracking.

  • Andrew C Etchell et al.
  • NeuroImage
  • 2016

Stuttering is a disorder of speech affecting millions of people around the world. Whilst the exact aetiology of stuttering remains unknown, it has been hypothesised that it is a disorder of the neural mechanisms that support speech timing. In this article, we used magnetoencephalography (MEG) to examine activity from auditory regions of the brain in stuttering and non-stuttering children aged 3-9 years. For typically developing children, we found that MEG oscillations in the beta band responded to rhythmic sounds with a peak near the time of stimulus onset. In contrast, stuttering children showed an opposite phase of the beta band envelope, with a trough of activity at stimulus onset. These results suggest that stuttering may result from abnormalities in predictive brain responses which are reflected in abnormal entrainment of the beta band envelope to rhythmic sounds.


Reading, hearing, and the planum temporale.

  • Bradley R Buchsbaum et al.
  • NeuroImage
  • 2005

Many neuroimaging studies of single-word reading have been carried out over the last 15 years, and a consensus as to the brain regions relevant to this task has emerged. Surprisingly, the planum temporale (PT) does not appear among the catalog of consistently active regions in these investigations. Recently, however, several studies have offered evidence suggesting that the left posteromedial PT plays a role in both speech production and speech perception. It is not clear, then, why so many neuroimaging studies of single-word reading (a task requiring speech production) have tended not to find evidence of PT involvement. In the present work, we employed a high-powered rapid event-related fMRI paradigm involving both single pseudoword reading and single pseudoword listening to assess activity related to reading and speech perception in the PT as a function of the degree of spatial smoothing applied to the functional images. We show that the speech area of the PT [Sylvian-parietal-temporal (Spt)] is best identified when only a moderate (5 mm) amount of spatial smoothing is applied to the data before statistical analysis. Moreover, increasing the smoothing window to 10 mm obliterates activation in the PT, suggesting that the failure to find PT activation in past studies may relate to this factor.


Strategies for longitudinal neuroimaging studies of overt language production.

  • Jed A Meltzer et al.
  • NeuroImage
  • 2009

Longitudinal fMRI studies of language production are of interest for evaluating recovery from post-stroke aphasia, but numerous methodological issues remain unresolved, particularly regarding strategies for evaluating single subjects at multiple timepoints. To address these issues, we studied overt picture naming in eleven healthy subjects, scanned four times each at one-month intervals. To evaluate the natural variability present across repeated sessions, repeated scans were directly contrasted in a unified statistical framework on a per-voxel basis. The effect of stimulus familiarity was evaluated using explicitly overtrained pictures, novel pictures, and untrained pictures that were repeated across sessions. For untrained pictures, we found that activation declined across multiple sessions, equally for both novel and repeated stimuli. Thus, no repetition priming for individual stimuli at one-month intervals was found, but rather a general effect of task habituation was present. Using a set of overtrained pictures identical in each session, no decline was found, but activation was minimized and produced less consistent patterns across participants, as measured by intra-class correlation coefficients. Subtraction of a baseline task, in which subjects produced a stereotyped utterance to scrambled pictures, resulted in specific activations in the left inferior frontal gyrus and other language areas for untrained items, while overlearned stimuli relative to pseudo pictures activated only the fusiform gyrus and supplementary motor area. These findings indicate that longitudinal fMRI is an effective means of detecting changes in neural activation magnitude over time, as long as the effect of task habituation is taken into account.


Functional differentiation in the language network revealed by lesion-symptom mapping.

  • William Matchin et al.
  • NeuroImage
  • 2022

Theories of language organization in the brain commonly posit that different regions underlie distinct linguistic mechanisms. However, such theories have been criticized on the grounds that many neuroimaging studies of language processing find similar effects across regions. Moreover, condition-by-region interaction effects, which provide the strongest evidence of functional differentiation between regions, have rarely been offered in support of these theories. Here we address this by using lesion-symptom mapping in three large, partially-overlapping groups of aphasia patients with left hemisphere brain damage due to stroke (N = 121, N = 92, N = 218). We identified multiple measure-by-region interaction effects, associating damage to the posterior middle temporal gyrus with syntactic comprehension deficits, damage to posterior inferior frontal gyrus with expressive agrammatism, and damage to inferior angular gyrus with semantic category word fluency deficits. Our results are inconsistent with recent hypotheses that regions of the language network are undifferentiated with respect to high-level linguistic processing.


Neural correlates of impaired vocal feedback control in post-stroke aphasia.

  • Roozbeh Behroozmand et al.
  • NeuroImage
  • 2022

We used left-hemisphere stroke as a model to examine how damage to sensorimotor brain networks impairs vocal auditory feedback processing and control. Individuals with post-stroke aphasia and matched neurotypical control subjects vocalized speech vowel sounds and listened to the playback of their self-produced vocalizations under normal (NAF) and pitch-shifted altered auditory feedback (AAF) while their brain activity was recorded using electroencephalography (EEG) signals. Event-related potentials (ERPs) were utilized as a neural index to probe the effect of vocal production on auditory feedback processing with high temporal resolution, while lesion data in the stroke group was used to determine how brain abnormality accounted for the impairment of such mechanisms. Results revealed that ERP activity was aberrantly modulated during vocalization vs. listening in aphasia, and this effect was accompanied by the reduced magnitude of compensatory vocal responses to pitch-shift alterations in the auditory feedback compared with control subjects. Lesion-mapping revealed that the aberrant pattern of ERP modulation in response to NAF was accounted for by damage to sensorimotor networks within the left-hemisphere inferior frontal, precentral, inferior parietal, and superior temporal cortices. For responses to AAF, neural deficits were predicted by damage to a distinguishable network within the inferior frontal and parietal cortices. These findings define the left-hemisphere sensorimotor networks implicated in auditory feedback processing, error detection, and vocal motor control. Our results provide translational synergy to inform the theoretical models of sensorimotor integration while having clinical applications for diagnosis and treatment of communication disabilities in individuals with stroke and other neurological conditions.


Action planning and predictive coding when speaking.

  • Jun Wang et al.
  • NeuroImage
  • 2014

Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300 ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100 ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100 ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps may be filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders.


Plasticity in auditory categorization is supported by differential engagement of the auditory-linguistic network.

  • Gavin M Bidelman et al.
  • NeuroImage
  • 2019

To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally-relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network, but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation in the neurobiological mechanisms supporting categorization between groups. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed that nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC directed to IFG, presumably because sensory coding is insufficient to construct categories in less experienced listeners. Our findings establish that auditory experience modulates specific engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.


Functional organization of human auditory cortex: investigation of response latencies through direct recordings.

  • Kirill V Nourski et al.
  • NeuroImage
  • 2014

The model for functional organization of human auditory cortex is in part based on findings in non-human primates, where the auditory cortex is hierarchically delineated into core, belt and parabelt fields. This model envisions that core cortex directly projects to belt, but not to parabelt, whereas belt regions are a major source of direct input for auditory parabelt. In humans, the posteromedial portion of Heschl's gyrus (HG) represents core auditory cortex, whereas the anterolateral portion of HG and the posterolateral superior temporal gyrus (PLST) are generally interpreted as belt and parabelt, respectively. In this scheme, response latencies can be hypothesized to progress in serial fashion from posteromedial to anterolateral HG to PLST. We examined this hypothesis by comparing response latencies to multiple stimuli, measured across these regions using simultaneous intracranial recordings in neurosurgical patients. Stimuli were 100 Hz click trains and the speech syllable /da/. Response latencies were determined by examining event-related band power in the high gamma frequency range. The earliest responses in auditory cortex occurred in posteromedial HG. Responses elicited from sites in anterolateral HG were neither earlier in latency than those from sites on PLST, nor more robust. Anterolateral HG and PLST exhibited some preference for speech syllable stimuli compared to click trains. These findings are not supportive of a strict serial model envisioning principal flow of information along HG to PLST. In contrast, the data suggest that a portion of PLST may represent a relatively early stage in the auditory cortical hierarchy.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab through which to execute your search. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (example queries follow the list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how the search combines words
    3. You can add "-" to a term to exclude results containing it (e.g., Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
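
    For example, the operators above can be combined in a single query. The queries below are illustrative sketches only; the search terms are arbitrary examples, not drawn from the site's documentation:

        "auditory cortex" +fMRI -stuttering
        "speech perception" AND aphasia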
  5. Save Your Search

    From here you can save any searches you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org, you can add data records to your Collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year [interactive chart: publication Count by Year]