Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–20 of 36.

Testing multi-scale processing in the auditory system.

  • Xiangbin Teng‎ et al.
  • Scientific reports‎
  • 2016‎

Natural sounds contain information on multiple timescales, so the auditory system must analyze and integrate acoustic information on those different scales to extract behaviorally relevant information. However, this multi-scale process in the auditory system is not widely investigated in the literature, and existing models of temporal integration are mainly built upon detection or recognition tasks on a single timescale. Here we use a paradigm requiring processing on relatively 'local' and 'global' scales and provide evidence suggesting that the auditory system extracts fine-detail acoustic information using short temporal windows and uses long temporal windows to abstract global acoustic patterns. Behavioral task performance that requires processing fine-detail information does not improve with longer stimulus length, contrary to predictions of previous temporal integration models such as the multiple-looks and the spectro-temporal excitation pattern model. Moreover, the perceptual construction of putatively 'unitary' auditory events requires more than hundreds of milliseconds. These findings support the hypothesis of a dual-scale processing likely implemented in the auditory cortex.


Mechanisms underlying selective neuronal tracking of attended speech at a "cocktail party".

  • Elana M Zion Golumbic‎ et al.
  • Neuron‎
  • 2013‎

The ability to focus on and understand one talker in a noisy social environment is a critical social-cognitive capacity, whose underlying neuronal mechanisms are unclear. We investigated the manner in which speech streams are represented in brain activity and the way that selective attention governs the brain's representation of speech using a "Cocktail Party" paradigm, coupled with direct recordings from the cortical surface in surgical epilepsy patients. We find that brain activity dynamically tracks speech streams using both low-frequency phase and high-frequency amplitude fluctuations and that optimal encoding likely combines the two. In and near low-level auditory cortices, attention "modulates" the representation by enhancing cortical tracking of attended speech streams, but ignored speech remains represented. In higher-order regions, the representation appears to become more "selective," in that there is no detectable tracking of ignored speech. This selectivity itself seems to sharpen as a sentence unfolds.


Endogenous cortical rhythms determine cerebral specialization for speech perception and production.

  • Anne-Lise Giraud‎ et al.
  • Neuron‎
  • 2007‎

Across multiple timescales, acoustic regularities of speech match rhythmic properties of both the auditory and motor systems. Syllabic rate corresponds to natural jaw-associated oscillatory rhythms, and phonemic length could reflect endogenous oscillatory auditory cortical properties. Hemispheric lateralization for speech could result from an asymmetry of cortical tuning, with left and right auditory areas differentially sensitive to spectro-temporal features of speech. Using simultaneous electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) recordings from humans, we show that spontaneous EEG power variations within the gamma range (phonemic rate) correlate best with left auditory cortical synaptic activity, while fluctuations within the theta range correlate best with that in the right. Power fluctuations in both ranges correlate with activity in the mouth premotor region, indicating coupling between temporal properties of speech perception and production. These data show that endogenous cortical rhythms provide temporal and spatial constraints on the neuronal mechanisms underlying speech perception and production.


Exercise, APOE, and working memory: MEG and behavioral evidence for benefit of exercise in epsilon4 carriers.

  • Sean P Deeny‎ et al.
  • Biological psychology‎
  • 2008‎

Performance on the Sternberg working memory task and MEG cortical response on a variation of the Sternberg task were examined in middle-aged carriers and non-carriers of the APOE epsilon4 allele. Physical activity was also assessed to examine whether exercise level modifies the relationship between APOE genotype and neurocognitive function. Regression revealed that high physical activity was associated with faster RT in the six- and eight-letter conditions of the Sternberg in epsilon4 carriers, but not in the non-carriers, after controlling for age, gender, and education (N=54). Furthermore, the MEG analysis revealed that sedentary epsilon4 carriers exhibited lower right temporal lobe activation on matching probe trials relative to high-active epsilon4 carriers, while physical activity did not distinguish non-carriers (N=23). The M170 peak was identified as a potential marker for pre-clinical decline, as epsilon4 carriers exhibited longer M170 latency and highly physically active participants exhibited greater M170 amplitude to matching probe trials.


Morning brain: real-world neural evidence that high school class times matter.

  • Suzanne Dikker‎ et al.
  • Social cognitive and affective neuroscience‎
  • 2020‎

Researchers, parents and educators consistently observe a stark mismatch between biologically preferred and socially imposed sleep-wake hours in adolescents, fueling debate about high school start times. We contribute neural evidence to this debate with electroencephalogram data collected from high school students during their regular morning, mid-morning and afternoon classes. Overall, student alpha power was lower when class content was taught via videos than through lectures. Students' resting state alpha brain activity decreased as the day progressed, consistent with adolescents being least attentive early in the morning. During the lessons, students showed consistently worse performance and higher alpha power for early morning classes than for mid-morning classes, while afternoon quiz scores and alpha levels varied. Together, our findings demonstrate that both class activity and class time are reflected in adolescents' brain states in a real-world setting, and corroborate educational research suggesting that mid-morning may be the best time to learn.


Modulation change detection in human auditory cortex: Evidence for asymmetric, non-linear edge detection.

  • Seung-Goo Kim‎ et al.
  • The European journal of neuroscience‎
  • 2020‎

Changes in modulation rate are important cues for parsing acoustic signals, such as speech. We parametrically controlled modulation rate via the correlation coefficient (r) of amplitude spectra across fixed frequency channels between adjacent time frames: broadband modulation spectra are biased toward slow modulation rates with increasing r, and vice versa. By concatenating segments with different r, acoustic changes of various directions (e.g., changes from low to high correlation coefficients, that is, random-to-correlated or vice versa) and sizes (e.g., changes from low to high or from medium to high correlation coefficients) can be obtained. Participants listened to sound blocks and detected changes in correlation while MEG was recorded. Evoked responses to changes in correlation demonstrated (a) an asymmetric representation of change direction: random-to-correlated changes produced a prominent evoked field around 180 ms, while correlated-to-random changes evoked an earlier response with peaks at around 70 and 120 ms, whose topographies resemble those of the canonical P50m and N100m responses, respectively, and (b) a highly non-linear representation of correlation structure, whereby even small changes involving segments with a high correlation coefficient were much more salient than relatively large changes that did not involve segments with high correlation coefficients. Induced responses revealed phase tracking in the delta and theta frequency bands for the high correlation stimuli. The results confirm a high sensitivity for low modulation rates in human auditory cortex, both in terms of their representation and their segregation from other modulation rates.


Categorical Rhythms Are Shared between Songbirds and Humans.

  • Tina C Roeske‎ et al.
  • Current biology : CB‎
  • 2020‎

Rhythm is a prominent feature of music. Of the infinite possible ways of organizing events in time, musical rhythms are almost always distributed categorically. Such categories can facilitate the transmission of culture, a feature that songbirds and humans share. We compared rhythms of live performances of music to rhythms of wild thrush nightingale and domestic zebra finch songs. In nightingales, but not in zebra finches, we found universal rhythm categories, with patterns that were surprisingly similar to those of music. Isochronous 1:1 rhythms were similarly common. Interestingly, a bias toward small ratios (around 1:2 to 1:3), which is highly abundant in music, was observed also in thrush nightingale songs. Within that range, however, there was no statistically significant bias toward exact integer ratios (1:2 or 1:3) in the birds. High-ratio rhythms were abundant in the nightingale song and are structurally similar to fusion rhythms (ornaments) in music. In both species, preferred rhythms remained invariant over extended ranges of tempos, indicating natural categories. The number of rhythm categories decreased at higher tempos, with a threshold above which rhythm became highly stereotyped. In thrush nightingales, this threshold occurred at a tempo twice as fast as in humans, indicating weaker structural constraints and a remarkable motor proficiency. Together, the results suggest that categorical rhythms reflect similar constraints on learning motor skills across species. The saliency of categorical rhythms across humans and thrush nightingales suggests that they promote, or emerge from, the cultural transmission of learned vocalizations. VIDEO ABSTRACT.


Decoding the Content of Auditory Sensory Memory Across Species.

  • Drew Cappotto‎ et al.
  • Cerebral cortex (New York, N.Y. : 1991)‎
  • 2021‎

In contrast to classical views of working memory (WM) maintenance, recent research investigating activity-silent neural states has demonstrated that persistent neural activity in sensory cortices is not necessary for active maintenance of information in WM. Previous studies in humans have measured putative memory representations indirectly, by decoding memory contents from neural activity evoked by a neutral impulse stimulus. However, it is unclear whether memory contents can also be decoded in different species and attentional conditions. Here, we employ a cross-species approach to test whether auditory memory contents can be decoded from electrophysiological signals recorded in different species. Awake human volunteers (N = 21) were exposed to auditory pure tone and noise burst stimuli during an auditory sensory memory task using electroencephalography. In a closely matching paradigm, anesthetized female rats (N = 5) were exposed to comparable stimuli while neural activity was recorded using electrocorticography from the auditory cortex. In both species, the acoustic frequency could be decoded from neural activity evoked by pure tones as well as neutral frozen noise burst stimuli. This finding demonstrates that memory contents can be decoded in different species and different states using homologous methods, suggesting that the mechanisms of sensory memory encoding are evolutionarily conserved across species.


Temporo-cerebellar connectivity underlies timing constraints in audition.

  • Anika Stockert‎ et al.
  • eLife‎
  • 2021‎

The flexible and efficient adaptation to dynamic, rapid changes in the auditory environment likely involves generating and updating of internal models. Such models arguably exploit connections between the neocortex and the cerebellum, supporting proactive adaptation. Here, we tested whether temporo-cerebellar disconnection is associated with the processing of sound at short timescales. First, we identified lesion-specific deficits for the encoding of short timescale spectro-temporal non-speech and speech properties in patients with left posterior temporal cortex stroke. Second, using lesion-guided probabilistic tractography in healthy participants, we revealed bidirectional temporo-cerebellar connectivity with cerebellar dentate nuclei and crura I/II. These findings support the view that the encoding and modeling of rapidly modulated auditory spectro-temporal properties can rely on a temporo-cerebellar interface. We discuss these findings in view of the conjecture that proactive adaptation to a dynamic environment via internal models is a generalizable principle.


Hierarchically nested networks optimize the analysis of audiovisual speech.

  • Nikos Chalas‎ et al.
  • iScience‎
  • 2023‎

In conversational settings, seeing the speaker's face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener's cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.


A perceptual glitch in serial perception generates temporal distortions.

  • Franklenin Sierra‎ et al.
  • Scientific reports‎
  • 2022‎

Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially organized events (standard S, with constant duration, and comparison C, with duration varying trial-by-trial) are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such a serial distortion effect only when the constant S is in the first position, but not if the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. We simulated our behavioral results with a Bayesian model and replicated the finding that participants disproportionately expand first-position dynamic (unpredictable) short events. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.


Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning.

  • M Florencia Assaneo‎ et al.
  • Nature neuroscience‎
  • 2019‎

We introduce a deceptively simple behavioral task that robustly identifies two qualitatively different groups within the general population. When presented with an isochronous train of random syllables, some listeners are compelled to align their own concurrent syllable production with the perceived rate, whereas others remain impervious to the external rhythm. Using both neurophysiological and structural imaging approaches, we show group differences with clear consequences for speech processing and language learning. When listening passively to speech, high synchronizers show increased brain-to-stimulus synchronization over frontal areas, and this localized pattern correlates with precise microstructural differences in the white matter pathways connecting frontal to auditory regions. Finally, the data expose a mechanism that underpins performance on an ecologically relevant word-learning task. We suggest that this task will help to better understand and characterize individual performance in speech processing and language learning.


Effects of Part- and Whole-Object Primes on Early MEG Responses to Mooney Faces and Houses.

  • Mara Steinberg Lowe‎ et al.
  • Frontiers in psychology‎
  • 2016‎

Results from neurophysiological experiments suggest that face recognition engages a sensitive mechanism that is reflected in increased amplitude and decreased latency of the MEG M170 response compared to non-face visual targets. Furthermore, whereas recognition of objects (e.g., houses) has been argued to be based on individual features (e.g., door, window), face recognition may depend more on holistic information. Here we analyzed priming effects of component and holistic primes on 20 participants' early MEG responses to two-tone (Mooney) images to determine whether face recognition in this context engages "featural" or "configural" processing. Although visually underspecified, the Mooney images in this study elicited M170 responses that replicate the typical face vs. house effect. However, we found a distinction between holistic vs. component primes that modulated this effect dependent upon compatibility (match) between the prime and target. The facilitatory effect of holistic faces and houses for Mooney faces and houses, respectively, suggests that both Mooney face and house recognition (both low spatial frequency stimuli) are based on holistic information.


Modulation Spectra Capture EEG Responses to Speech Signals and Drive Distinct Temporal Response Functions.

  • Xiangbin Teng‎ et al.
  • eNeuro‎
  • 2021‎

Speech signals have a unique shape of long-term modulation spectrum that is distinct from environmental noise, music, and non-speech vocalizations. Does the human auditory system adapt to the speech long-term modulation spectrum and efficiently extract critical information from speech signals? To answer this question, we tested whether neural responses to speech signals can be captured by specific modulation spectra of non-speech acoustic stimuli. We generated amplitude modulated (AM) noise with the speech modulation spectrum and 1/f modulation spectra of different exponents to imitate temporal dynamics of different natural sounds. We presented these AM stimuli and a 10-min piece of natural speech to 19 human participants undergoing electroencephalography (EEG) recording. We derived temporal response functions (TRFs) to the AM stimuli of different spectrum shapes and found distinct neural dynamics for each type of TRFs. We then used the TRFs of AM stimuli to predict neural responses to the speech signals, and found that (1) the TRFs of AM modulation spectra of exponents 1, 1.5, and 2 preferably captured EEG responses to speech signals in the δ band and (2) the θ neural band of speech neural responses can be captured by the AM stimuli of an exponent of 0.75. Our results suggest that the human auditory system shows specificity to the long-term modulation spectrum and is equipped with characteristic neural algorithms tailored to extract critical acoustic information from speech signals.


The anticipation of events in time.

  • Matthias Grabenhorst‎ et al.
  • Nature communications‎
  • 2019‎

Humans anticipate events signaled by sensory cues. It is commonly assumed that two uncertainty parameters modulate the brain's capacity to predict: the hazard rate (HR) of event probability and the uncertainty in time estimation, which increases with elapsed time. We investigate both assumptions by presenting event probability density functions (PDFs) in each of three sensory modalities. We show that perceptual systems use the reciprocal PDF and not the HR to model event probability density. We also demonstrate that temporal uncertainty does not necessarily grow with elapsed time but can also diminish, depending on the event PDF. Previous research identified neuronal activity related to event probability in multiple levels of the cortical hierarchy (sensory (V4), association (LIP), motor and other areas), proposing the HR as an elementary neuronal computation. Our results, consistent across vision, audition, and somatosensation, suggest that the neurobiological implementation of event anticipation is based on a different, simpler and more stable computation than HR: the reciprocal PDF of events in time.


Flexible control of vocal timing in Carollia perspicillata bats enables escape from acoustic interference.

  • Ava Kiai‎ et al.
  • Communications biology‎
  • 2023‎

In natural environments, background noise can degrade the integrity of acoustic signals, posing a problem for animals that rely on their vocalizations for communication and navigation. A simple behavioral strategy to combat acoustic interference would be to restrict call emissions to periods of low-amplitude or no noise. Using audio playback and computational tools for the automated detection of over 2.5 million vocalizations from groups of freely vocalizing bats, we show that bats (Carollia perspicillata) can dynamically adapt the timing of their calls to avoid acoustic jamming in both predictably and unpredictably patterned noise. This study demonstrates that bats spontaneously seek out temporal windows of opportunity for vocalizing in acoustically crowded environments, providing a mechanism for efficient echolocation and communication in cluttered acoustic landscapes.


Word-specific repetition effects revealed by MEG and the implications for lexical access.

  • Diogo Almeida‎ et al.
  • Brain and language‎
  • 2013‎

This magnetoencephalography (MEG) study investigated the early stages of lexical access in reading, with the goal of establishing when initial contact with lexical information takes place. We identified two candidate evoked responses that could reflect this processing stage: the occipitotemporal N170/M170 and the frontocentral P2. Using a repetition priming paradigm in which long and variable lags were used to reduce the predictability of each repetition, we found that (i) repetition of words, but not pseudowords, evoked a differential bilateral frontal response in the 150–250 ms window, and (ii) a differential repetition N400m effect was observed between words and pseudowords. We argue that this frontal response, an MEG correlate of the P2 identified in ERP studies, reflects early access to long-term memory representations, which we tentatively characterize as being modality-specific.


The tracking of speech envelope in the human cortex.

  • Jan Kubanek‎ et al.
  • PloS one‎
  • 2013‎

Humans are highly adept at processing speech. Recently, it has been shown that slow temporal information in speech (i.e., the envelope of speech) is critical for speech comprehension. Furthermore, it has been found that evoked electric potentials in human cortex are correlated with the speech envelope. However, it has been unclear whether this essential linguistic feature is encoded differentially in specific regions, or whether it is represented throughout the auditory system. To answer this question, we recorded neural data with high temporal resolution directly from the cortex while human subjects listened to a spoken story. We found that the gamma activity in human auditory cortex robustly tracks the speech envelope. The effect is so marked that it is observed during a single presentation of the spoken story to each subject. The effect is stronger in regions situated relatively early in the auditory pathway (belt areas) compared to other regions involved in speech processing, including the superior temporal gyrus (STG) and the posterior inferior frontal gyrus (Broca's region). To further distinguish whether speech envelope is encoded in the auditory system as a phonological (speech-related) feature, or instead as a more general acoustic feature, we also probed the auditory system with a melodic stimulus. We found that, of the regions considered, only the belt areas tracked the melody envelope, and only weakly. Together, our data provide the first direct electrophysiological evidence that the envelope of speech is robustly tracked in non-primary auditory cortex (belt areas in particular), and suggest that the considered higher-order regions (STG and Broca's region) partake in a more abstract linguistic analysis.


Neural dynamics of phoneme sequences reveal position-invariant code for content and order.

  • Laura Gwilliams‎ et al.
  • Nature communications‎
  • 2022‎

Speech consists of a continuously-varying acoustic signal. Yet human listeners experience it as sequences of discrete speech sounds, which are used to recognise discrete words. To examine how the human brain appropriately sequences the speech signal, we recorded two-hour magnetoencephalograms from 21 participants listening to short narratives. Our analyses show that the brain continuously encodes the three most recently heard speech sounds in parallel, and maintains this information long past its dissipation from the sensory input. Each speech sound representation evolves over time, jointly encoding both its phonetic features and the amount of time elapsed since onset. As a result, this dynamic neural pattern encodes both the relative order and phonetic content of the speech sequence. These representations are active earlier when phonemes are more predictable, and are sustained longer when lexical identity is uncertain. Our results show how phonetic sequences in natural speech are represented at the level of populations of neurons, providing insight into what intermediary representations exist between the sensory input and sub-lexical units. The flexibility in the dynamics of these representations paves the way for further understanding of how such sequences may be used to interface with higher order structure such as lexical identity.


Episodic sequence memory is supported by a theta-gamma phase code.

  • Andrew C Heusser‎ et al.
  • Nature neuroscience‎
  • 2016‎

The meaning we derive from our experiences is not a simple static extraction of the elements but is largely based on the order in which those elements occur. Models propose that sequence encoding is supported by interactions between high- and low-frequency oscillations, such that elements within an experience are represented by neural cell assemblies firing at higher frequencies (gamma) and sequential order is encoded by the specific timing of firing with respect to a lower frequency oscillation (theta). During episodic sequence memory formation in humans, we provide evidence that items in different sequence positions exhibit greater gamma power along distinct phases of a theta oscillation. Furthermore, this segregation is related to successful temporal order memory. Our results provide compelling evidence that memory for order, a core component of an episodic memory, capitalizes on the ubiquitous physiological mechanism of theta-gamma phase-amplitude coupling.



Publications Per Year (chart: paper count by publication year)