
Brainstem-cortical functional connectivity for speech is differentially challenged by noise and reverberation.

Hearing research | 2018

Everyday speech perception is challenged by external acoustic interferences that hinder verbal communication. Here, we directly compared how different levels of the auditory system (brainstem vs. cortex) code speech and how their neural representations are affected by two acoustic stressors: noise and reverberation. We recorded multichannel (64 ch) brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) simultaneously in normal hearing individuals to speech sounds presented in mild and moderate levels of noise and reverberation. We matched signal-to-noise and direct-to-reverberant ratios to equate the severity between classes of interference. Electrode recordings were parsed into source waveforms to assess the relative contribution of region-specific brain areas [i.e., brainstem (BS), primary auditory cortex (A1), inferior frontal gyrus (IFG)]. Results showed that reverberation was less detrimental to (and in some cases facilitated) the neural encoding of speech compared to additive noise. Inter-regional correlations revealed associations between BS and A1 responses, suggesting subcortical speech representations influence higher auditory-cortical areas. Functional connectivity analyses further showed that directed signaling toward A1 in both feedforward cortico-collicular (BS→A1) and feedback cortico-cortical (IFG→A1) pathways was a strong predictor of degraded speech perception and differentiated "good" vs. "poor" perceivers. Our findings demonstrate a functional interplay within the brain's speech network that depends on the form and severity of acoustic interference. We infer that in addition to the quality of neural representations within individual brain regions, listeners' success at the "cocktail party" is modulated based on how information is transferred among subcortical and cortical hubs of the auditory-linguistic network.
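The abstract notes that signal-to-noise ratios (SNR) and direct-to-reverberant ratios (DRR) were matched to equate interference severity. Both are power ratios expressed in dB, which can be sketched as follows; the signals, scale factors, and function names below are illustrative stand-ins, not values from the study:

```python
import numpy as np

def power_ratio_db(numerator, denominator):
    """Ratio of mean signal powers, expressed in decibels."""
    p_num = np.mean(np.square(numerator))
    p_den = np.mean(np.square(denominator))
    return 10.0 * np.log10(p_num / p_den)

rng = np.random.default_rng(0)

# SNR: power of the (clean) target signal vs. power of the additive noise.
# A pure tone stands in for a speech token here.
fs = 16000
speech = np.sin(2 * np.pi * 150 * np.arange(fs) / fs)
noise = rng.normal(scale=0.1, size=speech.size)
snr_db = power_ratio_db(speech, noise)  # ~17 dB for these toy signals

# DRR: energy of the direct path vs. energy of the reverberant tail,
# computed from a (toy) room impulse response with exponential decay.
tail = 0.1 * rng.normal(size=999) * np.exp(-np.arange(999) / 200)
h = np.concatenate([[1.0], tail])
drr_db = 10.0 * np.log10(np.sum(h[:1] ** 2) / np.sum(h[1:] ** 2))
```

Equating severity across the two interference classes then amounts to choosing noise gains and impulse responses so that the resulting SNR and DRR values (in dB) are comparable.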

PubMed ID: 29871826

Research resources used in this publication

None found

Additional research tools detected in this publication

Antibodies used in this publication

None found

Associated grants

  • Agency: NIDCD NIH HHS, United States
    Id: R01 DC016267

Publication data is provided by the National Library of Medicine® and PubMed®. Data is retrieved from PubMed® on a weekly schedule. For terms and conditions, see the National Library of Medicine Terms and Conditions.

This is a list of tools and resources that we have found mentioned in this publication.


Low Resolution Electromagnetic Tomography (tool)

RRID:SCR_007077

Software package for functional imaging of the human brain. It computes the three-dimensional distribution of electric neuronal activity from non-invasive measurements of scalp electric potential differences, with high temporal resolution in the millisecond range; the resulting non-invasive intracranial time series can be used to study functional dynamic connectivity. The current software version includes two improved variants of the original method: standardized LORETA (sLORETA) and exact LORETA (eLORETA). The new methods are characterized by exact localization when tested with point sources. Because these methods are multivariate tomographies that solve the inverse EEG problem, and because they are linear, they produce a low-spatial-resolution image for any distribution of activity. This property is not shared by naive one-at-a-time single-dipole techniques.
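The linear inverse mapping described above can be illustrated with a regularized minimum-norm solution plus an sLORETA-style standardization of each source estimate by its resolution variance. This is a minimal sketch under toy dimensions and a random lead field, not the actual LORETA software:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 100

# Toy lead field: maps source current densities to scalp potentials.
L = rng.normal(size=(n_sensors, n_sources))

# Regularized minimum-norm inverse operator: T = L^T (L L^T + alpha*I)^-1
alpha = 1e-2
gram = L @ L.T + alpha * np.eye(n_sensors)
T = L.T @ np.linalg.solve(gram, np.eye(n_sensors))

# Resolution matrix S = T L; its diagonal gives the variance used by
# sLORETA-style standardization of each source estimate.
S = T @ L

phi = rng.normal(size=n_sensors)        # one time sample of scalp potentials
j_hat = T @ phi                         # minimum-norm current estimate
j_std = j_hat / np.sqrt(np.diag(S))     # standardized (sLORETA-like) estimate
```

Because the operator `T` is linear, any scalp topography maps to a smooth, low-spatial-resolution source image, which is the property the description above contrasts with single-dipole fitting.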
