
Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

Frontiers in Human Neuroscience | 2017

The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

PubMed ID: 28439236

Research resources used in this publication

None found

Antibodies used in this publication

None found

Associated grants

None

Publication data is provided by the National Library of Medicine® and PubMed®. Data is retrieved from PubMed® on a weekly schedule. For terms and conditions, see the National Library of Medicine Terms and Conditions.

This is a list of tools and resources that we have found mentioned in this publication.


Analysis of Functional NeuroImages (tool)

RRID:SCR_005927

Set of (mostly) C programs that run on X11+Unix-based platforms (Linux, Mac OS X, Solaris, etc.) for processing, analyzing, and displaying functional MRI (FMRI) data defined over 3D volumes and over 2D cortical surface meshes. AFNI is freely distributed as source code plus some precompiled binaries.


NeuroSynth (tool)

RRID:SCR_006798

Platform for large-scale, automated synthesis of functional magnetic resonance imaging (fMRI) data extracted from published articles. It's a website wrapped around a set of open-source Python and JavaScript packages. NeuroSynth lets you run crude but useful analyses of fMRI data on a very large scale. You can:

* Interactively visualize the results of over 3,000 term-based meta-analyses
* Select specific locations in the human brain and view associated terms
* Browse through the nearly 10,000 studies in the database

Their ultimate goal is to enable dynamic real-time analysis, so that you'll be able to select foci, tables, or entire studies for analysis and run a full-blown meta-analysis without leaving your browser. You'll also be able to do things like upload entirely new images and obtain probabilistic estimates of the cognitive states most likely to be associated with the image.


Cogent 2000 (tool)

RRID:SCR_015672

MATLAB toolbox for presenting stimuli and recording responses with precise timing. It also provides utilities for manipulating sound, keyboard, mouse, joystick, serial port, parallel port, subject responses, and physiological monitoring hardware.
