Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

The Journal of Neuroscience: the official journal of the Society for Neuroscience | 2017

Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS.

Significance Statement

Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices.

PubMed ID: 28179553 (RIS citation download available)
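The RIS download exports the citation in the RIS bibliographic format. A minimal sketch of what such a record could look like, built only from the metadata shown on this page; the choice of the AN tag for the PubMed ID is an assumption, and a real export may include additional fields:

```python
# Sketch of a minimal RIS citation record built from this page's metadata.
# Tag choices beyond TY/TI/JO/PY (e.g. AN for the PubMed ID) are assumptions;
# real RIS exports may use different or additional tags.
fields = [
    ("TY", "JOUR"),  # reference type: journal article
    ("TI", "Mouth and Voice: A Relationship between Visual and Auditory "
           "Preference in the Human Superior Temporal Sulcus."),
    ("JO", "The Journal of Neuroscience"),
    ("PY", "2017"),
    ("AN", "28179553"),  # PubMed ID (tag choice is an assumption)
]

def to_ris(fields):
    """Serialize (tag, value) pairs into RIS text, ending with the ER tag."""
    lines = [f"{tag}  - {value}" for tag, value in fields]
    lines.append("ER  - ")  # end-of-record marker required by RIS
    return "\n".join(lines)

print(to_ris(fields))
```

Each RIS line is a two-character tag, two spaces, a hyphen, a space, then the value; the record is terminated by an `ER` line.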

Additional research tools detected in this publication

None found

Antibodies used in this publication

None found

Associated grants

  • Agency: NIDCD NIH HHS, United States
    Id: F30 DC014911
  • Agency: NINDS NIH HHS, United States
    Id: R01 NS065395

Publication data is provided by the National Library of Medicine ® and PubMed ®. Data is retrieved from PubMed ® on a weekly schedule. For terms and conditions see the National Library of Medicine Terms and Conditions.
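The weekly retrieval described above can be done through NCBI's public E-utilities API. A minimal sketch of building an ESummary request for this paper's PubMed ID; only the URL is constructed here (a real pipeline would fetch it and parse the returned JSON), and rate limits apply unless an API key is supplied:

```python
from urllib.parse import urlencode

# Sketch: constructing an NCBI E-utilities ESummary request URL for a
# single PubMed ID. ESummary returns record metadata (title, journal,
# year, authors) without the full abstract; EFetch would return more.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esummary_url(pmid, retmode="json"):
    """Return the ESummary URL for one PubMed ID."""
    query = urlencode({"db": "pubmed", "id": pmid, "retmode": retmode})
    return f"{EUTILS}/esummary.fcgi?{query}"

print(esummary_url("28179553"))
```

The JSON response contains a `result` object keyed by PMID; batch requests pass a comma-separated `id` list.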

This is a list of tools and resources that we have found mentioned in this publication.
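Each tool below carries an RRID (Research Resource Identifier), which can be turned into a link to its registry entry via the SciCrunch resolver. A small sketch, assuming the standard resolver URL pattern of an RRID appended to the `scicrunch.org/resolver/` path:

```python
# Sketch: mapping the RRIDs listed on this page to SciCrunch resolver URLs.
# The resolver path pattern is assumed to accept a full "RRID:..." string.
RESOLVER = "https://scicrunch.org/resolver/"

def resolver_url(rrid):
    """Return the resolver URL for an identifier like 'RRID:SCR_005927'."""
    if not rrid.startswith("RRID:"):
        raise ValueError(f"not an RRID: {rrid!r}")
    return RESOLVER + rrid

# The four software resources mentioned in this publication:
for rrid in ["RRID:SCR_005927",   # AFNI
             "RRID:SCR_001847",   # FreeSurfer
             "RRID:SCR_002881",   # Psychophysics Toolbox
             "RRID:SCR_001622"]:  # MATLAB
    print(resolver_url(rrid))
```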


Analysis of Functional NeuroImages (software resource)

RRID:SCR_005927

Set of (mostly) C programs that run on X11 + Unix-based platforms (Linux, Mac OS X, Solaris, etc.) for processing, analyzing, and displaying functional MRI (fMRI) data defined over 3D volumes and over 2D cortical surface meshes. AFNI is freely distributed as source code plus some precompiled binaries.


FreeSurfer (software resource)

RRID:SCR_001847

Open source software suite for processing and analyzing human brain MRI images. Used for reconstruction of the brain cortical surface from structural MRI data, and overlay of functional MRI data onto the reconstructed surface. Contains an automatic structural imaging stream for processing cross-sectional and longitudinal data. Provides anatomical analysis tools, including: representation of the cortical surface between white and gray matter, representation of the pial surface, segmentation of white matter from the rest of the brain, skull stripping, B1 bias field correction, nonlinear registration of an individual's cortical surface with a stereotaxic atlas, labeling of regions of the cortical surface, statistical analysis of group morphometry differences, and labeling of subcortical brain structures. Operating system: Linux, macOS.


Psychophysics Toolbox (software resource)

RRID:SCR_002881

A free set of MATLAB and GNU/Octave functions for vision research. It makes it easy to synthesize and show accurately controlled visual and auditory stimuli and to interact with the observer.


MATLAB (software resource)

RRID:SCR_001622

Multi-paradigm numerical computing environment and fourth-generation programming language developed by MathWorks. Allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, Fortran, and Python. Used to explore and visualize ideas and collaborate across disciplines including signal and image processing, communications, control systems, and computational finance.
