Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Search

Page 1: showing papers 1–3 of 3

Using auditory reaction time to measure loudness growth in rats.

  • Kelly Radziwon et al.
  • Hearing research
  • 2020

Previous studies have demonstrated that auditory reaction time (RT) is a reliable surrogate of loudness perception in humans. Reaction time-intensity (RT-I) functions faithfully recapitulate equal loudness contours in humans while being easier to obtain than equal loudness judgments, especially in animals. In humans, loudness estimation depends not only on sound intensity but also on a variety of other acoustic factors. Stimulus duration and bandwidth are known to impact loudness perception. In addition, the presence of background noise mimics loudness recruitment; loudness growth is rapid near threshold, but growth becomes normal at suprathreshold levels. Therefore, to evaluate whether RT-I functions are a reliable measure of loudness growth in rats, we obtained auditory RTs across a range of stimulus intensities, durations, and bandwidths, both in quiet and in the presence of background/masking noise. We found that reaction time patterns across stimulus parameters were repeatable over several months in rats and generally consistent with human loudness perceptual data. Our results provide important building blocks for future animal model studies of loudness perception and loudness perceptual disorders.
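
As a rough illustration of how a reaction time-intensity (RT-I) function can be characterized, the sketch below fits Piéron's law (RT = t0 + k·I^(−β)) to hypothetical reaction-time data across stimulus levels. The functional form, the data, and the parameter values are assumptions for illustration only, not the authors' analysis.

```python
# Minimal sketch (assumed, not the authors' method): fit Pieron's law,
# RT = t0 + k * I**(-beta), to hypothetical reaction-time data so the resulting
# RT-I function can be compared across stimulus conditions.
import numpy as np
from scipy.optimize import curve_fit

def pieron(level_db, t0, k, beta):
    """Pieron's law with the stimulus level converted from dB to a linear scale."""
    i_lin = 10 ** (level_db / 20.0)
    return t0 + k * i_lin ** (-beta)

# Hypothetical data: median RT (s) at each stimulus level (dB SL).
levels_db = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
median_rt = np.array([0.52, 0.41, 0.35, 0.31, 0.28, 0.26, 0.25, 0.24])

(t0, k, beta), _ = curve_fit(pieron, levels_db, median_rt, p0=[0.2, 0.5, 0.3])
print(f"asymptotic RT = {t0:.3f} s, gain = {k:.3f}, exponent = {beta:.3f}")
```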


Attribute capture underlying the precedence effect in rats.

  • Liangjie Chen et al.
  • Hearing research
  • 2021

In a reverberant environment, humans with normal hearing can perceptually fuse the soundwave from a source with its reflections off nearby surfaces into a single auditory image whose location appears to be around the source. This phenomenon is called the precedence effect, and it is based on the perceptual capture of the reflected (lagging) sounds' attributes by the direct wave from the source. Using the paradigm of attentional modulation of prepulse inhibition (PPI) of the startle reflex, with both prepulse-feature specificity and perceived-prepulse-location specificity, this study examined whether the perceptual attribute capture underlying the precedence effect occurs in rats. One broadband continuous noise was delivered by each of two spatially separated left and right loudspeakers with a 1-ms inter-loudspeaker delay, and a silent gap embedded in one of the two noises served as the prepulse stimulus. The results showed that, regardless of whether the gap was physically in the leading or the lagging noise and regardless of whether the leading noise was the left or the right one, fear conditioning the gap enhanced PPI only when the leading noise was delivered from the loudspeaker that had been the leading, not the lagging, loudspeaker during conditioning. Given the spatial specificity (left or right) of the attentional enhancement of PPI, this indicates that the perceived location of the conditioned gap was always on the leading side, even when the gap was physically on the lagging side. Thus, rats show the same perceptual attribute capture and thereby experience the auditory precedence effect as humans do.
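
To make the stimulus arrangement concrete, here is a minimal sketch of generating the leading/lagging noise pair described above: the same broadband noise routed to two channels with a 1-ms inter-channel delay and a silent gap embedded in one channel as the prepulse. The sample rate, durations, and gap timing are assumptions for illustration, not the authors' values.

```python
# Minimal sketch (illustrative assumptions): build a two-channel precedence-effect
# stimulus -- identical broadband noise on both loudspeakers, one channel delayed
# by 1 ms, with a silent gap embedded in one channel as the prepulse.
import numpy as np

fs = 48_000                      # sample rate (Hz), assumed
dur_s = 2.0                      # stimulus duration (s), assumed
delay_ms = 1.0                   # inter-loudspeaker delay from the abstract
gap_ms, gap_onset_s = 50.0, 1.0  # gap duration and onset, assumed

noise = np.random.default_rng(0).standard_normal(int(fs * dur_s))

delay_n = int(round(fs * delay_ms / 1000))
leading = noise.copy()
lagging = np.concatenate([np.zeros(delay_n), noise[:-delay_n]])  # delayed copy of the same noise

# Embed the silent-gap prepulse in the lagging channel (it could equally go in the leading one).
g0 = int(fs * gap_onset_s)
lagging[g0:g0 + int(fs * gap_ms / 1000)] = 0.0

stereo = np.stack([leading, lagging], axis=1)  # column 0: leading speaker, column 1: lagging speaker
```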


Low- and high-frequency cortical brain oscillations reflect dissociable mechanisms of concurrent speech segregation in noise.

  • Anusha Yellamsetty et al.
  • Hearing research
  • 2018

Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) in the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and behavior, and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs), presented in either clean or noise-degraded (+5 dB SNR) conditions. We found that behavioral identification was more accurate for vowel mixtures with larger pitch separations, but the F0 benefit interacted with noise. Time-frequency analysis decomposed the EEG into different spectrotemporal frequency bands. Low-frequency (θ, β) responses were elevated when speech did not contain pitch cues (0ST > 4ST) or was noisy, suggesting a correlate of increased listening effort and/or memory demands. In contrast, γ power increments were observed for changes in both pitch (0ST > 4ST) and SNR (clean > noise), suggesting that high-frequency bands carry information related to acoustic features and the quality of speech representations. Brain-behavior associations corroborated these effects; modulations in low-frequency rhythms predicted the speed of listeners' perceptual decisions, while higher bands predicted identification accuracy. Results are consistent with the notion that neural oscillations reflect both automatic (pre-perceptual) and controlled (post-perceptual) mechanisms of speech processing that are largely divisible into high- and low-frequency bands of human brain rhythms.
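
As a rough sketch of the kind of time-frequency decomposition described above, the snippet below band-pass filters a single EEG channel into θ, β, and γ bands and takes each band's Hilbert envelope as instantaneous power. The sampling rate, band edges, and filter choice are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch (assumed parameters, not the authors' pipeline): split one EEG
# channel into theta, beta, and gamma bands and compute band-power envelopes with
# a band-pass filter followed by the Hilbert transform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                                        # sampling rate (Hz), assumed
bands = {"theta": (4, 8), "beta": (13, 30), "gamma": (30, 80)}  # conventional band edges (Hz)

def band_power_envelope(eeg, lo, hi, fs):
    """Band-pass filter, then take the squared Hilbert envelope as instantaneous power."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

rng = np.random.default_rng(1)
eeg = rng.standard_normal(5 * fs)  # stand-in for one 5-second EEG epoch

for name, (lo, hi) in bands.items():
    env = band_power_envelope(eeg, lo, hi, fs)
    print(f"{name}: mean power = {env.mean():.3f}")
```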


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (a short query-string sketch follows this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND and OR between terms to change how we combine words
    3. You can add "-" to a term to make sure no results containing that term are returned (e.g., Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here you can save any searches you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
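
As a concrete illustration of the query syntax from the Searching tips above, the sketch below assembles a query string using a quoted phrase, an explicit AND, and the "+"/"-" operators. The URL and parameter name are hypothetical placeholders, not a documented SciCrunch API.

```python
# Minimal sketch (illustrative only): build a query string with the syntax from the
# Searching tips -- quotes for exact phrases, explicit AND/OR between terms, "-" to
# exclude a term, and "+" to require one. The URL below is a hypothetical placeholder,
# not a documented SciCrunch endpoint.
from urllib.parse import urlencode

query_parts = [
    '"loudness growth"',   # exact-phrase match
    "AND",                 # explicit boolean operator between terms
    "+rat",                # this term must appear in the data
    "-mouse",              # no result may contain this term
]
query = " ".join(query_parts)
print(query)  # "loudness growth" AND +rat -mouse

# Hypothetical search URL (placeholder path and parameter name).
print("https://scicrunch.org/search?" + urlencode({"q": query}))
```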

Publications Per Year (chart; axes: Year, Count)