Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–20 of 276.

Periodicity Pitch Perception.

  • Frank Klefenz‎ et al.
  • Frontiers in neuroscience‎
  • 2020‎

This study presents a computational model to reproduce the biological dynamics of "listening to music." A biologically plausible model of periodicity pitch detection is proposed and simulated. Periodicity pitch is computed across a range of the auditory spectrum. Periodicity pitch is detected from subsets of activated auditory nerve fibers (ANFs). These activate connected model octopus cells, which trigger model neurons detecting onsets and offsets; thence model interval-tuned neurons are innervated at the right interval times; and finally, a set of common interval-detecting neurons indicate pitch. Octopus cells rhythmically spike with the pitch periodicity of the sound. Batteries of interval-tuned neurons stopwatch-like measure the inter-spike intervals of the octopus cells by coding interval durations as first spike latencies (FSLs). The FSL-triggered spikes synchronously coincide through a monolayer spiking neural network at the corresponding receiver pitch neurons.
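
As a rough illustration of the interval-based readout described above (a toy sketch, not the authors' model), the snippet below estimates a pitch period from the inter-spike intervals of a simulated, periodically firing unit. The firing rate, jitter, and readout rule are all assumptions made for the example.

```python
import numpy as np

# Toy illustration (not the published model): an "octopus-cell-like" unit fires
# once per stimulus period; pitch is read out from its inter-spike intervals.
rng = np.random.default_rng(0)

f0 = 220.0                      # hypothetical pitch (Hz)
period = 1.0 / f0               # true inter-spike interval (s)
n_spikes = 50
jitter_sd = 0.0002              # 0.2 ms of spike-time jitter

# Simulated spike times: one spike per period, with small timing jitter.
spike_times = np.arange(n_spikes) * period + rng.normal(0.0, jitter_sd, n_spikes)
spike_times.sort()

# "Interval-tuned" readout: measure inter-spike intervals and take their median,
# a crude stand-in for a bank of neurons tuned to different interval durations.
isis = np.diff(spike_times)
estimated_period = np.median(isis)
estimated_pitch = 1.0 / estimated_period

print(f"true pitch: {f0:.1f} Hz, estimated pitch: {estimated_pitch:.1f} Hz")
```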


Neural patterns reveal single-trial information on absolute pitch and relative pitch perception.

  • Simon Leipold‎ et al.
  • NeuroImage‎
  • 2019‎

Pitch is a fundamental attribute of sounds and yet is not perceived equally by all humans. Absolute pitch (AP) musicians perceive, recognize, and name pitches in absolute terms, whereas relative pitch (RP) musicians, representing the large majority of musicians, perceive pitches in relation to other pitches. In this study, we used electroencephalography (EEG) to investigate the neural representations underlying tone listening and tone labeling in a large sample of musicians (n = 105). Participants performed a pitch processing task with a listening and a labeling condition during EEG acquisition. Using a brain-decoding framework, we tested a prediction derived from both theoretical and empirical accounts of AP, namely that the representational similarity of listening and labeling is higher in AP musicians than in RP musicians. Consistent with the prediction, time-resolved single-trial EEG decoding revealed a higher representational similarity in AP musicians during late stages of pitch perception. Time-frequency-resolved EEG decoding further showed that the higher representational similarity was present in oscillations in the theta and beta frequency bands. Supplemental univariate analyses were less sensitive in detecting subtle group differences in the frequency domain. Taken together, the results suggest differences between AP and RP musicians in late pitch processing stages associated with cognition, rather than in early processing stages associated with perception.
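
One common way to operationalize "representational similarity between listening and labeling" is cross-condition decoding at each time point: a classifier trained on trials from one condition is tested on trials from the other. The sketch below uses synthetic data and scikit-learn; it is a generic illustration of that idea, not the authors' pipeline, and every dimension and parameter is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50   # synthetic EEG dimensions

# Synthetic single-trial data for two conditions (listening / labeling) and two
# stimulus classes; a small shared class-related signal is injected late in the trial.
y = rng.integers(0, 2, n_trials)
signal = rng.normal(size=n_channels)

def make_condition():
    X = rng.normal(size=(n_trials, n_channels, n_times))
    X[y == 1, :, 20:] += 0.5 * signal[:, None]  # class effect in later time points
    return X

X_listen, X_label = make_condition(), make_condition()

# Cross-condition decoding: train on listening trials, test on labeling trials,
# separately for each time point.
similarity = np.zeros(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X_listen[:, :, t], y)
    similarity[t] = clf.score(X_label[:, :, t], y)

print("peak cross-condition accuracy:", similarity.max().round(2))
```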


Musicianship Influences Language Effect on Musical Pitch Perception.

  • William Choi‎
  • Frontiers in psychology‎
  • 2021‎

Given its practical implications, the effect of musicianship on language learning has been vastly researched. Interestingly, growing evidence also suggests that language experience can facilitate music perception. However, the precise nature of this facilitation is not fully understood. To address this research gap, I investigated the interactive effect of language and musicianship on musical pitch and rhythmic perception. Cantonese and English listeners, each divided into musician and non-musician groups, completed the Musical Ear Test and the Raven's 2 Progressive Matrices. Essentially, an interactive effect of language and musicianship was found on musical pitch but not rhythmic perception. Consistent with previous studies, Cantonese language experience appeared to facilitate musical pitch perception. However, this facilitatory effect was only present among the non-musicians. Among the musicians, Cantonese language experience did not offer any perceptual advantage. The above findings reflect that musicianship influences the effect of language on musical pitch perception. Together with the previous findings, the new findings offer two theoretical implications for the OPERA hypothesis: bi-directionality, and the mechanisms through which language experience and musicianship interact in different domains.


Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

  • Nafise Erfanian Saeedi‎ et al.
  • PLoS computational biology‎
  • 2016‎

Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
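
For readers unfamiliar with spike-timing-dependent plasticity (STDP), the snippet below implements the standard exponential STDP window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and time constants are generic textbook values, not those of the model described in the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    """Weight change for a pre/post spike pair.

    dt = t_post - t_pre (seconds). Positive dt (pre before post) potentiates,
    negative dt (post before pre) depresses. Generic exponential STDP window.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# Example: a pre-spike 5 ms before a post-spike strengthens the synapse,
# while the reverse ordering weakens it.
print(stdp_dw(0.005), stdp_dw(-0.005))
```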


Perception of words and pitch patterns in song and speech.

  • Julia Merrill‎ et al.
  • Frontiers in psychology‎
  • 2012‎

This functional magnetic resonance imaging study examines shared and distinct cortical areas involved in the auditory perception of song and speech at the level of their underlying constituents: words and pitch patterns. Univariate and multivariate analyses were performed to isolate the neural correlates of the word- and pitch-based discrimination between song and speech, corrected for rhythmic differences in both. Therefore, six conditions, arranged in a subtractive hierarchy, were created: sung sentences including words, pitch and rhythm; hummed speech prosody and song melody containing only pitch patterns and rhythm; and as a control the pure musical or speech rhythm. Systematic contrasts between these balanced conditions following their hierarchical organization showed a great overlap between song and speech at all levels in the bilateral temporal lobe, but suggested a differential role of the inferior frontal gyrus (IFG) and intraparietal sulcus (IPS) in processing song and speech. While the left IFG coded for spoken words and showed predominance over the right IFG in prosodic pitch processing, an opposite lateralization was found for pitch in song. The IPS showed sensitivity to discrete pitch relations in song as opposed to the gliding pitch in speech. Finally, the superior temporal gyrus and premotor cortex coded for general differences between words and pitch patterns, irrespective of whether they were sung or spoken. Thus, song and speech share many features which are reflected in a fundamental similarity of brain areas involved in their perception. However, fine-grained acoustic differences on word and pitch level are reflected in the IPS and the lateralized activity of the IFG.


An auditory neural correlate suggests a mechanism underlying holistic pitch perception.

  • Daryl Wile‎ et al.
  • PloS one‎
  • 2007‎

Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.


Individual Differences in the Frequency-Following Response: Relation to Pitch Perception.

  • Emily B J Coffey‎ et al.
  • PloS one‎
  • 2016‎

The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system's representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
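
The "strength of the fundamental representation" in an FFR is often quantified as the spectral magnitude at F0, sometimes relative to neighboring frequency bins. The sketch below computes such a measure for a synthetic response; it is an assumption-laden stand-in, not the authors' exact metric, and the signal parameters are invented.

```python
import numpy as np

fs = 16000                      # sampling rate (Hz)
f0 = 100.0                      # fundamental of the (missing-fundamental) complex
t = np.arange(0, 0.2, 1 / fs)   # 200 ms synthetic "FFR"

# Synthetic response: weak energy at F0 plus harmonics and noise.
rng = np.random.default_rng(0)
ffr = (0.2 * np.sin(2 * np.pi * f0 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
       + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)
       + rng.normal(0, 0.3, t.size))

spectrum = np.abs(np.fft.rfft(ffr))
freqs = np.fft.rfftfreq(ffr.size, 1 / fs)

# F0 strength: magnitude at the F0 bin relative to nearby bins (a crude SNR).
f0_bin = np.argmin(np.abs(freqs - f0))
neighbors = np.r_[spectrum[f0_bin - 5:f0_bin - 1], spectrum[f0_bin + 2:f0_bin + 6]]
f0_strength_db = 20 * np.log10(spectrum[f0_bin] / neighbors.mean())

print(f"F0 strength relative to neighboring bins: {f0_strength_db:.1f} dB")
```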


A common computational principle for vibrotactile pitch perception in mouse and human.

  • Mario Prsa‎ et al.
  • Nature communications‎
  • 2021‎

We live surrounded by vibrations generated by moving objects. These oscillatory stimuli propagate through solid substrates, are sensed by mechanoreceptors in our body and give rise to perceptual attributes such as vibrotactile pitch (i.e. the perception of how high or low a vibration's frequency is). Here, we establish a mechanistic relationship between vibrotactile pitch perception and the physical properties of vibrations using behavioral tasks, in which vibratory stimuli were delivered to the human fingertip or the mouse forelimb. The resulting perceptual reports were analyzed with a model demonstrating that physically different combinations of vibration frequencies and amplitudes can produce equal pitch perception. We found that the perceptually indistinguishable but physically different stimuli follow a common computational principle in mouse and human. It dictates that vibrotactile pitch perception is shifted with increases in amplitude toward the frequency of highest vibrotactile sensitivity. These findings suggest the existence of a fundamental relationship between the seemingly unrelated concepts of spectral sensitivity and pitch perception.


Universal and Non-universal Features of Musical Pitch Perception Revealed by Singing.

  • Nori Jacoby‎ et al.
  • Current biology : CB‎
  • 2019‎

Musical pitch perception is argued to result from nonmusical biological constraints and thus to have similar characteristics across cultures, but its universality remains unclear. We probed pitch representations in residents of the Bolivian Amazon (the Tsimane', who live in relative isolation from Western culture) as well as US musicians and non-musicians. Participants sang back tone sequences presented in different frequency ranges. Sung responses of Amazonian and US participants approximately replicated heard intervals on a logarithmic scale, even for tones outside the singing range. Moreover, Amazonian and US reproductions both deteriorated for high-frequency tones even though they were fully audible. But whereas US participants tended to reproduce notes an integer number of octaves above or below the heard tones, Amazonians did not, ignoring the note "chroma" (C, D, etc.). Chroma matching in US participants was more pronounced in US musicians than non-musicians, was not affected by feedback, and was correlated with similarity-based measures of octave equivalence as well as the ability to match the absolute f0 of a stimulus in the singing range. The results suggest the cross-cultural presence of logarithmic scales for pitch, and biological constraints on the limits of pitch, but indicate that octave equivalence may be culturally contingent, plausibly dependent on pitch representations that develop from experience with particular musical systems.
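
The key quantities here, interval reproduction on a logarithmic frequency scale and octave ("chroma") equivalence, reduce to simple arithmetic on frequency ratios. The snippet below shows the conversions; the example frequencies are arbitrary and not taken from the study.

```python
import numpy as np

def semitones(f_from, f_to):
    """Signed musical interval in semitones (logarithmic frequency scale)."""
    return 12 * np.log2(f_to / f_from)

heard = (440.0, 523.25)                    # A4 -> C5, roughly +3 semitones
sung = (220.0, 261.6)                      # reproduced one octave lower

heard_interval = semitones(*heard)
sung_interval = semitones(*sung)
print(f"interval error: {sung_interval - heard_interval:+.2f} semitones")

# Chroma (octave-equivalent) distance between the sung and heard starting notes:
offset = semitones(heard[0], sung[0])      # -12 st = exactly one octave below
chroma_error = (offset + 6) % 12 - 6       # fold into [-6, +6) semitones
print(f"octave-folded (chroma) error: {chroma_error:+.2f} semitones")
```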


Perception and Modeling of Affective Qualities of Musical Instrument Sounds across Pitch Registers.

  • Stephen McAdams‎ et al.
  • Frontiers in psychology‎
  • 2017‎

Composers often pick specific instruments to convey a given emotional tone in their music, partly due to their expressive possibilities, but also due to their timbres in specific registers and at given dynamic markings. Of interest to both music psychology and music informatics from a computational point of view is the relation between the acoustic properties that give rise to the timbre at a given pitch and the perceived emotional quality of the tone. Musician and nonmusician listeners were presented with 137 tones produced at a fixed dynamic marking (forte) playing tones at pitch class D# across each instrument's entire pitch range and with different playing techniques for standard orchestral instruments drawn from the brass, woodwind, string, and pitched percussion families. They rated each tone on six analogical-categorical scales in terms of emotional valence (positive/negative and pleasant/unpleasant), energy arousal (awake/tired), tension arousal (excited/calm), preference (like/dislike), and familiarity. Linear mixed models revealed interactive effects of musical training, instrument family, and pitch register, with non-linear relations between pitch register and several dependent variables. Twenty-three audio descriptors from the Timbre Toolbox were computed for each sound and analyzed in two ways: linear partial least squares regression (PLSR) and nonlinear artificial neural net modeling. These two analyses converged in terms of the importance of various spectral, temporal, and spectrotemporal audio descriptors in explaining the emotion ratings, but some differences also emerged. Different combinations of audio descriptors make major contributions to the three emotion dimensions, suggesting that they are carried by distinct acoustic properties. Valence is more positive with lower spectral slopes, a greater emergence of strong partials, and an amplitude envelope with a sharper attack and earlier decay. Higher tension arousal is carried by brighter sounds, more spectral variation and more gentle attacks. Greater energy arousal is associated with brighter sounds, with higher spectral centroids and slower decrease of the spectral slope, as well as with greater spectral emergence. The divergences between linear and nonlinear approaches are discussed.
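
Partial least squares regression, as used here to relate audio descriptors to emotion ratings, is available in scikit-learn. The sketch below fits a PLS model on synthetic descriptor/rating data purely to show the mechanics; the dimensions mirror the abstract (137 tones, 23 descriptors, 3 emotion dimensions) but the data and the planted linear relation are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 137 tones x 23 audio descriptors, 3 emotion dimensions
# (valence, tension arousal, energy arousal). Real descriptors would come from
# the Timbre Toolbox; here they are random numbers plus a planted relation.
X = rng.normal(size=(137, 23))
true_weights = rng.normal(size=(23, 3))
Y = X @ true_weights + rng.normal(scale=0.5, size=(137, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
print("held-out R^2 (averaged over emotion dimensions):",
      round(pls.score(X_te, Y_te), 2))
```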


Enhanced perception of pitch changes in speech and music in early blind adults.

  • Laureline Arnaud‎ et al.
  • Neuropsychologia‎
  • 2018‎

It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated if pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and if pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Whereas within the blind group, the discrimination thresholds were smaller for musical stimuli than speech stimuli, replicating previous findings in sighted participants, we did not find this effect in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds compared to speech sounds. This effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population.


MEG correlates of temporal regularity relevant to pitch perception in human auditory cortex.

  • Seung-Goo Kim‎ et al.
  • NeuroImage‎
  • 2022‎

We recorded neural responses in human participants to three types of pitch-evoking regular stimuli at rates below and above the lower limit of pitch using magnetoencephalography (MEG). These bandpass filtered (1-4 kHz) stimuli were harmonic complex tones (HC), click trains (CT), and regular interval noise (RIN). Trials consisted of noise-regular-noise (NRN) or regular-noise-regular (RNR) segments in which the repetition rate (or fundamental frequency F0) was either above (250 Hz) or below (20 Hz) the lower limit of pitch. Neural activation was estimated and compared at the sensor and source levels. The pitch-relevant regular stimuli (F0 = 250 Hz) were all associated with marked evoked responses at around 140 ms after noise-to-regular transitions at both sensor and source levels. In particular, greater evoked responses to pitch-relevant stimuli than pitch-irrelevant stimuli (F0 = 20 Hz) were localized along the Heschl's sulcus around 140 ms. The regularity-onset responses for RIN were much weaker than for the other types of regular stimuli (HC, CT). This effect was localized over planum temporale, planum polare, and lateral Heschl's gyrus. Importantly, the effect of pitch did not interact with the stimulus type. That is, we did not find evidence to support different responses for different types of regular stimuli from the spatiotemporal cluster of the pitch effect (∼140 ms). The current data demonstrate cortical sensitivity to temporal regularity relevant to pitch that is consistently present across different pitch-relevant stimuli in the Heschl's sulcus between Heschl's gyrus and planum temporale, both of which have been identified as a "pitch center" based on different modalities.


Parallel pitch processing in speech and melody: A study of the interference of musical melody on lexical pitch perception in speakers of Mandarin.

  • Makiko Sadakata‎ et al.
  • PloS one‎
  • 2020‎

Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. Recent work into the domain-specificity of pitch processing in both domains appears to suggest pitch processing to be governed by shared neural mechanisms. The current study aimed to explore the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language, and measuring behavioral response and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch; incongruent items evoked significantly stronger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitude and time-course of the N2b and N400 may suggest an interference of melodic pitch contours with both early and late stages of phonological and semantic processing.


Age-related changes to vestibular heave and pitch perception and associations with postural control.

  • Grace A Gabriel‎ et al.
  • Scientific reports‎
  • 2022‎

Falls are a common cause of injury in older adults (OAs), and age-related declines across the sensory systems are associated with increased falls risk. The vestibular system is particularly important for maintaining balance and supporting safe mobility, and aging has been associated with declines in vestibular end-organ functioning. However, few studies have examined potential age-related differences in vestibular perceptual sensitivities or their association with postural stability. Here we used an adaptive-staircase procedure to measure detection and discrimination thresholds in 19 healthy OAs and 18 healthy younger adults (YAs), by presenting participants with passive heave (linear up-and-down translations) and pitch (forward-backward tilt rotations) movements on a motion-platform in the dark. We also examined participants' postural stability under various standing-balance conditions. Associations among these postural measures and vestibular perceptual thresholds were further examined. Ultimately, OAs showed larger heave and pitch detection thresholds compared to YAs, and larger perceptual thresholds were associated with greater postural sway, but only in OAs. Overall, these results suggest that vestibular perceptual sensitivity declines with older age and that such declines are associated with poorer postural stability. Future studies could consider the potential applicability of these results in the development of screening tools for falls prevention in OAs.
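
An adaptive-staircase threshold procedure of the general kind described here can be sketched as a simple 2-down-1-up rule driven by a simulated observer. This is a generic illustration, not necessarily the exact staircase variant, step size, or psychometric function used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_observer(stim, threshold=1.0, slope=2.0):
    """Probability of a correct response rises with stimulus intensity."""
    p = 1 / (1 + np.exp(-slope * (stim - threshold)))
    return rng.random() < p

# 2-down-1-up staircase: intensity drops after two consecutive correct
# responses and rises after any incorrect response; converges near ~71% correct.
stim, step, correct_streak = 4.0, 0.25, 0
reversals, last_direction = [], None
while len(reversals) < 10:
    if simulated_observer(stim):
        correct_streak += 1
        if correct_streak == 2:
            correct_streak, direction = 0, "down"
            stim -= step
        else:
            continue
    else:
        correct_streak, direction = 0, "up"
        stim += step
    if last_direction and direction != last_direction:
        reversals.append(stim)
    last_direction = direction

print("threshold estimate (mean of reversals):", round(np.mean(reversals), 2))
```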


A new approach to measuring absolute pitch on a psychometric theory of isolated pitch perception: Is it disentangling specific groups or capturing a continuous ability?

  • Nayana Di Giuseppe Germano‎ et al.
  • PloS one‎
  • 2021‎

Absolute Pitch (AP) is commonly defined as a rare ability that allows an individual to identify any pitch by name. Most researchers use classificatory tests for AP, which track the number of isolated correct answers. However, each researcher chooses their own procedure for what should be considered correct or incorrect in measuring this ability. Consequently, it is impossible to evaluate comparatively whether the stimuli and criteria classify individuals in the same way. We thus adopted a psychometric perspective, approaching AP as a latent trait. Via the Latent Variable Model, we evaluated the consistency and validity of a measure to test for AP ability. A total of 783 undergraduate music students participated in the test. The test battery comprised 10 isolated pitches. All collected data were analyzed with two different rating criteria (perfect and imperfect) under three Latent Variable Model approaches: continuous (Item Response Theory with two and three parameters), categorical (Latent Class Analysis), and the Hybrid model. According to model fit information indices, the perfect approach (only exact pitch responses as correct) measurement model had a better fit under the trait (continuous) specification. This contradicts the usual assumption of a division between AP and non-AP possessors. Alternatively, the categorical solution for the two classes demonstrated the best solution for the imperfect approach (exact pitch responses and semitone deviations considered as correct).
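
The continuous (latent-trait) specification mentioned here is typically an item response model. A two-parameter logistic (2PL) item characteristic curve, for instance, gives the probability of naming a pitch correctly as a function of latent AP ability. Below is a minimal, generic version with made-up item parameters, not the authors' fitted model.

```python
import numpy as np

def p_correct_2pl(theta, discrimination, difficulty):
    """2PL item response function: P(correct | latent ability theta)."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

# Hypothetical item parameters for one of the 10 isolated pitches.
a, b = 1.8, 0.5                       # discrimination and difficulty
for theta in (-1.0, 0.0, 1.0, 2.0):   # latent AP ability (z-score-like scale)
    print(f"theta={theta:+.1f}: P(correct) = {p_correct_2pl(theta, a, b):.2f}")
```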


Effects of stimulus duration and vowel quality in cross-linguistic categorical perception of pitch directions.

  • Si Chen‎ et al.
  • PloS one‎
  • 2017‎

We investigated categorical perception of rising and falling pitch contours by tonal and non-tonal listeners. Specifically, we determined the minimum durations needed to perceive both contours (compared to those needed in production), how stimulus duration affects their perception, whether there is an intrinsic F0 effect, and how first language background, duration, direction of pitch, and vowel quality interact with each other. Continua of fundamental frequency on different vowels with 9 duration values were created for identification and discrimination tasks. Less time is generally needed to effectively perceive a pitch direction than to produce it. Overall, tonal listeners' perception is more categorical than that of non-tonal listeners. Stimulus duration plays a critical role for both groups, but tonal listeners showed a stronger duration effect, and may benefit more from the extra time in longer stimuli for context-coding, consistent with the multistore model of categorical perception. Within a certain range of semitones, tonal listeners also required shorter stimulus duration to perceive pitch direction changes than non-tonal listeners. Finally, vowel quality plays a limited role and only interacts with duration in perceiving falling pitch directions. These findings further our understanding of models of categorical perception, the relationship between speech perception and production, and the interaction between the perception of tones and vowel quality.


Deep neural network models reveal interplay of peripheral coding and stimulus statistics in pitch perception.

  • Mark R Saddler‎ et al.
  • Nature communications‎
  • 2021‎

Perception is thought to be shaped by the environments for which organisms are optimized. These influences are difficult to test in biological organisms but may be revealed by machine perceptual systems optimized under different conditions. We investigated environmental and physiological influences on pitch perception, whose properties are commonly linked to peripheral neural coding limits. We first trained artificial neural networks to estimate fundamental frequency from biologically faithful cochlear representations of natural sounds. The best-performing networks replicated many characteristics of human pitch judgments. To probe the origins of these characteristics, we then optimized networks given altered cochleae or sound statistics. Human-like behavior emerged only when cochleae had high temporal fidelity and when models were optimized for naturalistic sounds. The results suggest pitch perception is critically shaped by the constraints of natural environments in addition to those of the cochlea, illustrating the use of artificial neural networks to reveal underpinnings of behavior.
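
At a much smaller scale than the networks in the paper, the sketch below trains a generic feed-forward regressor (scikit-learn's MLPRegressor) to map a crude periodicity feature, the autocorrelation of a synthetic harmonic tone, onto its fundamental frequency. It only illustrates the supervised "estimate F0 from an input representation" setup; the cochlear model, architecture, stimuli, and training corpus in the study are entirely different, and every parameter here is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
fs, dur = 8000, 0.05
t = np.arange(0, dur, 1 / fs)

def harmonic_tone(f0, n_harm=8):
    """Synthetic complex tone with random harmonic amplitudes."""
    amps = rng.uniform(0.2, 1.0, n_harm)
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(amps))

def periodicity_feature(x, n_lags=200):
    """Normalized autocorrelation over the first n_lags lags (toy input)."""
    ac = np.correlate(x, x, mode="full")[x.size - 1:x.size - 1 + n_lags]
    return ac / ac[0]

f0s = rng.uniform(80, 400, 500)                       # training F0s (Hz)
X = np.array([periodicity_feature(harmonic_tone(f)) for f in f0s])
y = np.log2(f0s)                                      # learn log-F0 (octave scale)

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])

test_err_octaves = np.abs(net.predict(X[400:]) - y[400:])
print("median F0 error:", round(12 * np.median(test_err_octaves), 2), "semitones")
```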


The Effect of Phantom Stimulation and Pseudomonophasic Pulse Shapes on Pitch Perception by Cochlear Implant Listeners.

  • Wiebke Lamping‎ et al.
  • Journal of the Association for Research in Otolaryngology : JARO‎
  • 2020‎

It has been suggested that a specialized high-temporal-acuity brainstem pathway can be activated by stimulating more apically in the cochlea than is achieved by cochlear implants (CIs) when programmed with contemporary clinical settings. We performed multiple experiments to test the effect on pitch perception of phantom stimulation and asymmetric current pulses, both supposedly stimulating beyond the most apical electrode of a CI. The two stimulus types were generated using a bipolar electrode pair, composed of the most apical electrode of the array and a neighboring, more basal electrode. Experiment 1 used a pitch-ranking procedure where neural excitation was shifted apically or basally using so-called phantom stimulation. No benefit of apical phantom stimulation was found on the highest rate up to which pitch ranks increased (upper limit), nor on the slopes of the pitch-ranking function above 300 pulses per second (pps). Experiment 2 used the same procedure to study the effects of apical pseudomonophasic pulses, where the locus of excitation was manipulated by changing stimulus polarity. A benefit of apical stimulation was obtained for the slopes above 300 pps. Experiment 3 used an adaptive rate discrimination procedure and found a small but significant benefit of both types of apical stimulation. Overall, the results show some benefit for apical stimulation on temporal pitch processing at high pulse rates but reveal that the effect is smaller and more variable across listeners than suggested by previous research. The results also provide some indication that the benefit of apical stimulation may decline over time since implantation.


Diminished large-scale functional brain networks in absolute pitch during the perception of naturalistic music and audiobooks.

  • Christian Brauchli‎ et al.
  • NeuroImage‎
  • 2020‎

Previous studies have reported the effects of absolute pitch (AP) and musical proficiency on the functioning of specific brain regions or distinct subnetworks, but they provided an incomplete account of the effects of AP and musical proficiency on whole-brain networks. In this study, we used EEG to estimate source-space whole-brain functional connectivity in a large sample comprising AP musicians (n = 46), relative pitch (RP) musicians (n = 45), and Non-musicians (n = 34) during resting state, naturalistic music listening, and audiobook listening. First, we assessed the global network density of the participants' functional networks in these conditions. As revealed by cluster-based permutation testing, AP musicians showed a decreased mean degree compared to Non-musicians whereas RP musicians showed an intermediate mean degree not statistically different from Non-musicians or AP musicians. This effect was present during naturalistic music and audiobook listening, but, crucially, not during resting state. Second, we identified the subnetworks that drove group differences in global network density using the network-based statistic approach. We found that AP musicians showed decreased functional connectivity between major hubs of the default mode network during both music and audiobook listening compared to Non-musicians. Third, we assessed group differences in global network topology while controlling for network density. We did not find evidence for group differences in the clustering coefficient and characteristic path length. Taken together, we found first evidence of diminished whole-brain functional networks in AP musicians during the perception of naturalistic auditory stimuli. These differences might reflect a complex interplay between AP ability, musical proficiency, music processing, and auditory processing per se.
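
The "global network density" and "mean degree" compared across groups here are standard graph measures of a thresholded connectivity matrix. A generic computation is sketched below; the connectivity values and the binarization threshold are placeholders, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 64

# Placeholder functional connectivity matrix (symmetric, values in [0, 1]);
# in the study this would come from source-space EEG connectivity estimates.
fc = rng.uniform(0, 1, (n_regions, n_regions))
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0)

# Binarize at an arbitrary threshold and compute each node's degree.
adjacency = fc > 0.8
degree = adjacency.sum(axis=1)

mean_degree = degree.mean()
density = adjacency.sum() / (n_regions * (n_regions - 1))
print(f"mean degree = {mean_degree:.1f}, network density = {density:.2f}")
```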


Dichotic pitch activates pitch processing centre in Heschl's gyrus.

  • Sebastian Puschmann‎ et al.
  • NeuroImage‎
  • 2010‎

Although several neuroimaging studies have reported pitch-evoked activations at the lateral end of Heschl's gyrus, it is still under debate whether these findings truly represent activity in relation to the perception of pitch or merely stimulus-related features of pitch-evoking sounds. We investigated this issue in a functional magnetic resonance imaging (fMRI) experiment using pure tones in noise and dichotic pitch sequences, which either contained a melody or a fixed pitch. Dichotic pitch evokes a sensation of pitch only in binaural listening conditions, while the monaural signal cannot be distinguished from random noise. Our data show similar neural activations for both tones in noise and dichotic pitch, which are perceptually similar, but physically different. Pitch-related activation was found at the lateral end of Heschl's gyrus in both hemispheres, providing new evidence for a general involvement of this region in pitch processing. In line with prior studies, we found melody-related activation in Planum temporale and Planum polare, but not in primary auditory areas. These results support the view of a general representation of pitch in auditory cortex, irrespective of the physical attributes of the pitch-evoking sound.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against; each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (see the example query sketch after this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually combine terms with AND and OR to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any search you perform here for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs page to ask questions and see our tutorials. Click this button to view this tutorial again.
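
To make the query syntax from tip 4 above concrete, here is a small sketch that builds example query strings using those operators (quoted phrases, explicit AND/OR, "-" to exclude, "+" to require) and shows their URL-encoded form. The example queries are hypothetical, and no particular search endpoint is assumed.

```python
from urllib.parse import quote_plus

# Example query strings using the operators from tip 4 (hypothetical queries).
queries = [
    '"pitch perception"',                         # quotes: match the exact phrase
    'pitch AND perception',                       # explicit boolean AND
    '"pitch perception" OR "tone perception"',    # explicit boolean OR
    'pitch perception -cochlear',                 # "-" excludes a term
    '+RRID "pitch perception"',                   # "+" requires a term
]

for q in queries:
    # URL-encoded form, as it would appear in a query-string parameter.
    print(f"{q:45s} -> {quote_plus(q)}")
```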

Publications Per Year

[Chart: number of publications per year (Year vs. Count)]