Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1-20 of 1,847.

Adaptive auditory brightness perception.

  • Kai Siedenburg‎ et al.
  • Scientific reports‎
  • 2021‎

Perception adapts to the properties of prior stimulation, as illustrated by phenomena such as visual color constancy or speech context effects. In the auditory domain, little is known about adaptive processes when it comes to the attribute of auditory brightness. Here, we report an experiment that tests whether listeners adapt to spectral colorations imposed on naturalistic music and speech excerpts. Our results indicate consistent contrastive adaptation of auditory brightness judgments on a trial-by-trial basis. The pattern of results suggests that these effects tend to grow with an increase in the duration of the adaptor context but level off after around 8 trials of 2 s duration. A simple model of the response criterion yields a correlation of r = .97 with the measured data and corroborates the notion that brightness perception adapts on timescales that fall in the range of auditory short-term memory. Effects turn out to be similar for spectral filtering based on linear spectral filter slopes and filtering based on a measured transfer function from a commercially available hearing device. Overall, our findings demonstrate the adaptivity of auditory brightness perception under realistic acoustical conditions.
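
The abstract does not give the response-criterion model itself; as a purely illustrative sketch consistent with the description (trial-by-trial contrastive adaptation that builds up over roughly eight 2 s adaptor trials), one could let the judgment criterion track a leaky average of recent brightness. All symbols below are hypothetical rather than the authors' notation:

    c_t = (1 - \lambda)\, c_{t-1} + \lambda\, b_{t-1}, \qquad
    P(\text{``brighter''}) = \Phi\!\left(\frac{b_t - c_t}{\sigma}\right)

Here b_t is a brightness correlate of the current excerpt (e.g., its spectral centroid), c_t is the adapting criterion, and \lambda sets the adaptation timescale; a bright recent context raises c_t, so the following excerpt is judged darker, matching the contrastive pattern reported above.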


Decoding contextual influences on auditory perception from primary auditory cortex.

  • B Englitz‎ et al.
  • bioRxiv : the preprint server for biology‎
  • 2023‎

Perception can be highly dependent on stimulus context, but whether and how sensory areas encode the context remains uncertain. We used an ambiguous auditory stimulus - a tritone pair - to investigate the neural activity associated with a preceding contextual stimulus that strongly influenced the tritone pair's perception: either as an ascending or a descending step in pitch. We recorded single-unit responses from a population of auditory cortical cells in awake ferrets listening to the tritone pairs preceded by the contextual stimulus. We find that the responses adapt locally to the contextual stimulus, consistent with human MEG recordings from the auditory cortex under the same conditions. Decoding the population responses demonstrates that pitch-change selective cells can predict the context-sensitive percept of the tritone pairs well. Conversely, decoding the distances between the pitch representations predicts the opposite of the percept. The various percepts can be readily captured and explained by a neural model of cortical activity based on populations of adapting, pitch- and pitch-direction-selective cells, aligned with the neurophysiological responses. Together, these decoding and model results suggest that contextual influences on perception may well be already encoded at the level of the primary sensory cortices, reflecting basic neural response properties commonly found in these areas.


McGurk illusion recalibrates subsequent auditory perception.

  • Claudia S Lüttke‎ et al.
  • Scientific reports‎
  • 2016‎

Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input.


Targeted Cortical Manipulation of Auditory Perception.

  • Sebastian Ceballo‎ et al.
  • Neuron‎
  • 2019‎

Driving perception by direct activation of neural ensembles in cortex is a necessary step for achieving a causal understanding of the neural code for auditory perception and developing central sensory rehabilitation methods. Here, using optogenetic manipulations during an auditory discrimination task in mice, we show that auditory cortex can be short-circuited by coarser pathways for simple sound identification. Yet when the sensory decision becomes more complex, involving temporal integration of information, auditory cortex activity is required for sound discrimination and targeted activation of specific cortical ensembles changes perceptual decisions, as predicted by our readout of the cortical code. Hence, auditory cortex representations contribute to sound discriminations by refining decisions from parallel routes.


Auditory-visual integration during nonconscious perception.

  • April Shi Min Ching‎ et al.
  • Cortex; a journal devoted to the study of the nervous system and behavior‎
  • 2019‎

Our study proposes a test of a key assumption of the most prominent model of consciousness - the global workspace (GWS) model (e.g., Baars, 2002, 2005, 2007; Dehaene & Naccache, 2001; Mudrik, Faivre, & Koch, 2014). This assumption is that multimodal integration requires consciousness; however, few studies have explicitly tested if integration can occur between nonconscious information from different modalities. The proposed study examined whether a classic indicator of multimodal integration - the McGurk effect - can be elicited with subliminal auditory-visual speech stimuli. We used a masked speech priming paradigm developed by Kouider and Dupoux (2005) in conjunction with continuous flash suppression (CFS; Tsuchiya & Koch, 2005), a binocular rivalry technique for presenting video stimuli subliminally. Applying these techniques together, we carried out two experiments in which participants categorised auditory syllable targets which were preceded by subliminal auditory-visual (AV) speech primes. Subliminal AV primes were either illusion-inducing (McGurk) or illusion-neutral (Incongruent) combinations of speech stimuli. In Experiment 1, the categorisation of the syllable target ("pa") was facilitated by the same syllable prime when it was part of a McGurk combination (auditory "pa" and visual "ka") but not when part of an Incongruent combination (auditory "pa" and visual "wa"). This dependency on specific AV combinations indicated a nonconscious AV interaction. Experiment 2 presented a different syllable target ("ta") which matched the predicted illusory outcome of the McGurk combination - here, both the McGurk combination (auditory "pa" and visual "ka") and the Incongruent combination (auditory "ta" and visual "ka") failed to facilitate target categorisation. The combined results of both Experiments demonstrate a type of nonconscious multimodal interaction that is distinct from integration - it allows unimodal information that is compatible for integration (i.e., McGurk combinations) to persist and influence later processes, but does not actually combine and alter that information. As the GWS model does not account for non-integrative multimodal interactions, this places some pressure on such models of consciousness.


Dynamics of auditory cortical activity during behavioural engagement and auditory perception.

  • Ioana Carcea‎ et al.
  • Nature communications‎
  • 2017‎

Behavioural engagement can enhance sensory perception. However, the neuronal mechanisms by which behavioural states affect stimulus perception remain poorly understood. Here we record from single units in auditory cortex of rats performing a self-initiated go/no-go auditory task. Self-initiation transforms cortical tuning curves and bidirectionally modulates stimulus-evoked activity patterns and improves auditory detection and recognition. Trial self-initiation decreases the rate of spontaneous activity in the majority of recorded cells. Optogenetic disruption of cortical activity before and during tone presentation shows that these changes in evoked and spontaneous activity are important for sound perception. Thus, behavioural engagement can prepare cortical circuits for sensory processing by dynamically changing sound representation and by controlling the pattern of spontaneous activity.


EEG Responses to auditory figure-ground perception.

  • Xiaoxuan Guo‎ et al.
  • Hearing research‎
  • 2022‎

Speech-in-noise difficulty is commonly reported among hearing-impaired individuals. Recent work has established generic behavioural measures of sound segregation and grouping that are related to speech-in-noise processing but do not require language. In this study, we assessed potential clinical electroencephalographic (EEG) measures of central auditory grouping (stochastic figure-ground test) and speech-in-noise perception (speech-in-babble test) with and without relevant tasks. Auditory targets were presented within background noise (16 talker-babble or randomly generated pure-tones) in 50% of the trials and composed either a figure (pure-tone frequency chords repeating over time) or speech (English names), while the rest of the trials only had background noise. EEG was recorded while participants were presented with the target stimuli (figure or speech) under different attentional states (relevant task or visual-distractor task). EEG time-domain analysis demonstrated enhanced negative responses during detection of both types of auditory targets within the time window 150-350 ms but only figure detection produced significantly enhanced responses under the distracted condition. Further single-channel analysis showed that simple vertex-to-mastoid acquisition defines a very similar response to more complex arrays based on multiple channels. Evoked-potentials to the generic figure-ground task therefore represent a potential clinical measure of grouping relevant to real-world listening that can be assessed irrespective of language knowledge and expertise even without a relevant task.
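
As a rough illustration of the single-channel measure mentioned above (a vertex electrode re-referenced to the mastoid, averaged over trials, with a mean amplitude taken in the 150-350 ms window), the sketch below uses synthetic data and is not the authors' pipeline; the channel layout, sampling rate, and epoch length are placeholder assumptions.

    import numpy as np

    # Hypothetical epoch array: (n_trials, n_channels, n_samples).
    fs = 500                                   # sampling rate in Hz (assumed)
    t = np.arange(-0.2, 0.8, 1 / fs)           # epoch from -200 ms to +800 ms

    rng = np.random.default_rng(0)
    epochs = rng.normal(size=(100, 2, t.size)) # channel 0 = vertex (Cz), channel 1 = mastoid

    # Single bipolar channel: vertex referenced to the mastoid.
    cz_mastoid = epochs[:, 0, :] - epochs[:, 1, :]

    # Evoked response: average across trials, then mean amplitude in 150-350 ms.
    evoked = cz_mastoid.mean(axis=0)
    window = (t >= 0.150) & (t <= 0.350)
    mean_amp = evoked[window].mean()
    print(f"Mean amplitude 150-350 ms: {mean_amp:.3f} (arbitrary units)")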


Developmental organization of neural dynamics supporting auditory perception.

  • Kazuki Sakakura‎ et al.
  • NeuroImage‎
  • 2022‎

A prominent view of language acquisition involves learning to ignore irrelevant auditory signals through functional reorganization, enabling more efficient processing of relevant information. Yet, few studies have characterized the neural spatiotemporal dynamics supporting rapid detection and subsequent disregard of irrelevant auditory information, in the developing brain. To address this unknown, the present study modeled the developmental acquisition of cost-efficient neural dynamics for auditory processing, using intracranial electrocorticographic responses measured in individuals receiving standard-of-care treatment for drug-resistant, focal epilepsy. We also provided evidence demonstrating the maturation of an anterior-to-posterior functional division within the superior-temporal gyrus (STG), which is known to exist in the adult STG.


Natural ITD statistics predict human auditory spatial perception.

  • Rodrigo Pavão‎ et al.
  • eLife‎
  • 2020‎

A neural code adapted to the statistical structure of sensory cues may optimize perception. We investigated whether interaural time difference (ITD) statistics inherent in natural acoustic scenes are parameters determining spatial discriminability. The natural ITD rate of change across azimuth (ITDrc) and ITD variability over time (ITDv) were combined in a Fisher information statistic to assess the amount of azimuthal information conveyed by this sensory cue. We hypothesized that natural ITD statistics underlie the neural code for ITD and thus influence spatial perception. To test this hypothesis, sounds with invariant statistics were presented to measure human spatial discriminability and spatial novelty detection. Human auditory spatial perception showed correlation with natural ITD statistics, supporting our hypothesis. Further analysis showed that these results are consistent with classic models of ITD coding and can explain the ITD tuning distribution observed in the mammalian brainstem.
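
The abstract does not reproduce the exact statistic; a standard way to combine a cue's rate of change with its variability into Fisher information, assuming an approximately Gaussian ITD cue, would be

    \mathrm{FI}(\theta) \;\propto\; \left(\frac{\mathrm{ITDrc}(\theta)}{\mathrm{ITDv}(\theta)}\right)^{2}

so predicted spatial discriminability is highest at azimuths where ITD changes steeply with azimuth relative to its moment-to-moment variability. The formulation actually used in the paper may differ in detail.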


Inconsistent Effect of Arousal on Early Auditory Perception.

  • Anna C Bolders‎ et al.
  • Frontiers in psychology‎
  • 2017‎

Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked-auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions arousal and pleasure. Results of the two experiments were analyzed both in separate analyses and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with a lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal did have a different effect on the threshold in Experiment 2. Experiment 2 showed a trend for arousal in the opposite direction. These results show that the effect of arousal on auditory-masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
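
For readers unfamiliar with the threshold procedure mentioned above, the sketch below shows a generic transformed up-down staircase for a two-interval forced-choice detection task with a simulated observer. It is a minimal illustration, not the authors' implementation; the step size, down rule, and stopping criterion are placeholder assumptions.

    import random

    def run_staircase(threshold_true=0.2, start_level=1.0, step=0.1,
                      n_reversals=8, rule_down=2):
        """Simple 2-down/1-up staircase for a 2-interval forced-choice task.

        Illustrative sketch only; the parameters are placeholder assumptions,
        not those used in the cited study.
        """
        level = start_level          # current masked-signal level
        correct_streak = 0           # consecutive correct responses
        direction = None             # 'up' or 'down', used to detect reversals
        reversal_levels = []

        while len(reversal_levels) < n_reversals:
            # Simulated observer: detection improves as level exceeds threshold.
            p_correct = 0.5 + 0.5 * min(1.0, max(0.0, level / (2 * threshold_true)))
            correct = random.random() < p_correct

            if correct:
                correct_streak += 1
                if correct_streak >= rule_down:        # 2 correct -> make it harder
                    correct_streak = 0
                    if direction == 'up':
                        reversal_levels.append(level)  # reversal: up -> down
                    direction = 'down'
                    level = max(0.0, level - step)
            else:
                correct_streak = 0                     # 1 wrong -> make it easier
                if direction == 'down':
                    reversal_levels.append(level)      # reversal: down -> up
                direction = 'up'
                level += step

        # Threshold estimate: mean of the last few reversal levels.
        last = reversal_levels[-6:]
        return sum(last) / len(last)

    if __name__ == "__main__":
        print(f"Estimated threshold: {run_staircase():.3f}")

A 2-down/1-up rule of this kind converges on the level yielding roughly 70.7% correct responses, which is why staircases of this family are a common choice for masked detection thresholds.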


A unitary model of auditory frequency change perception.

  • Kai Siedenburg‎ et al.
  • PLoS computational biology‎
  • 2023‎

Changes in the frequency content of sounds over time are arguably the most basic form of information about the behavior of sound-emitting objects. In perceptual studies, such changes have mostly been investigated separately, as aspects of either pitch or timbre. Here, we propose a unitary account of "up" and "down" subjective judgments of frequency change, based on a model combining auditory correlates of acoustic cues in a sound-specific and listener-specific manner. To do so, we introduce a generalized version of so-called Shepard tones, allowing symmetric manipulations of spectral information on a fine scale, usually associated with pitch (spectral fine structure, SFS), and on a coarse scale, usually associated with timbre (spectral envelope, SE). In a series of behavioral experiments, listeners reported "up" or "down" shifts across pairs of generalized Shepard tones that differed in SFS, in SE, or in both. We observed the classic properties of Shepard tones for either SFS or SE shifts: subjective judgments followed the smallest log-frequency change direction, with cases of ambiguity and circularity. Interestingly, when both SFS and SE changes were applied concurrently (synergistically or antagonistically), we observed a trade-off between cues. Listeners were encouraged to report when they perceived "both" directions of change concurrently, but this rarely happened, suggesting a unitary percept. A computational model could accurately fit the behavioral data by combining different cues reflecting frequency changes after auditory filtering. The model revealed that cue weighting depended on the nature of the sound. When presented with harmonic sounds, listeners put more weight on SFS-related cues, whereas inharmonic sounds led to more weight on SE-related cues. Moreover, these stimulus-based factors were modulated by inter-individual differences, revealing variability across listeners in the detailed recipe for "up" and "down" judgments. We argue that frequency changes are tracked perceptually via the adaptive combination of a diverse set of cues, in a manner that is in fact similar to the derivation of other basic auditory dimensions such as spatial location.
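
The model's equations are not given in the abstract; a schematic form consistent with the description, using hypothetical weights and a probit-style decision stage, would be

    P(\text{``up''}) = \Phi\!\left(w_{\mathrm{SFS}}\,\Delta_{\mathrm{SFS}} + w_{\mathrm{SE}}\,\Delta_{\mathrm{SE}}\right)

where \Delta_{\mathrm{SFS}} and \Delta_{\mathrm{SE}} are the log-frequency shifts of the spectral fine structure and the spectral envelope after auditory filtering, and the weights vary with the stimulus (more weight on SFS for harmonic sounds, more on SE for inharmonic sounds) and with the individual listener.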


Phasic boosting of auditory perception by visual emotion.

  • Lenka Selinger‎ et al.
  • Biological psychology‎
  • 2013‎

Emotionally negative stimuli boost perceptual processes. There is little known, however, about the timing of this modulation. The present study aims at elucidating the phasic effects of emotional processing on auditory processing within subsequent time-windows of visual emotional processing in humans. We recorded the electroencephalogram (EEG) while participants responded to a discrimination task of faces with neutral or fearful expressions. A brief complex tone, which subjects were instructed to ignore, was presented concomitantly, but with different asynchronies relative to the image onset. Analyses of the N1 auditory event-related potential (ERP) revealed enhanced brain responses in the presence of fearful faces. Importantly, this effect occurred at picture-tone asynchronies of 100 and 150 ms, but not when these were displayed simultaneously, or at 50 ms or 200 ms asynchrony. These results confirm the existence of a fast-operating crossmodal effect of visual emotion on auditory processing, suggesting a phasic variation according to the time-course of emotional processing.


Odors Bias Time Perception in Visual and Auditory Modalities.

  • Zhenzhu Yue‎ et al.
  • Frontiers in psychology‎
  • 2016‎

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps).


A Neural Circuit for Auditory Dominance over Visual Perception.

  • You-Hyang Song‎ et al.
  • Neuron‎
  • 2017‎

When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration.


Quantifying the Impact of Auditory Deafferentation on Speech Perception.

  • Jiayue Liu‎ et al.
  • Trends in hearing‎
  • 2024‎

The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.


Inferior Auditory Time Perception in Children With Motor Difficulties.

  • Andrew Chang‎ et al.
  • Child development‎
  • 2021‎

Accurate time perception is crucial for hearing (speech, music) and action (walking, catching). Motor brain regions are recruited during auditory time perception. Therefore, the hypothesis was tested that children (age 6-7) at risk for developmental coordination disorder (rDCD), a neurodevelopmental disorder involving motor difficulties, would show nonmotor auditory time perception deficits. Psychophysical tasks confirmed that children with rDCD have poorer duration and rhythm perception than typically developing children (N = 47, d = 0.95-1.01). Electroencephalography showed delayed mismatch negativity or P3a event-related potential latency in response to duration or rhythm deviants, reflecting inefficient brain processing (N = 54, d = 0.71-0.95). These findings are among the first to characterize perceptual timing deficits in DCD, suggesting important theoretical and clinical implications.


Neural correlates of switching from auditory to speech perception.

  • Ghislaine Dehaene-Lambertz‎ et al.
  • NeuroImage‎
  • 2005‎

Many people exposed to sinewave analogues of speech first report hearing them as electronic glissando and, later, when they switch into a 'speech mode', hearing them as syllables. This perceptual switch modifies their discrimination abilities, enhancing perception of differences that cross phonemic boundaries while diminishing perception of differences within phonemic categories. Using high-density evoked potentials and fMRI in a discrimination paradigm, we studied the changes in brain activity that are related to this change in perception. With ERPs, we observed that phonemic coding is faster than acoustic coding: The electrophysiological mismatch response (MMR) occurred earlier for a phonemic change than for an equivalent acoustic change. The MMR topography was also more asymmetric for a phonemic change than for an acoustic change. In fMRI, activations were also significantly asymmetric, favoring the left hemisphere in both perception modes. Furthermore, switching to the speech mode significantly enhanced activation in the posterior parts of the left superior gyrus and sulcus relative to the non-speech mode. When responses to a change of stimulus were studied, a cluster of voxels in the supramarginal gyrus was activated significantly more by a phonemic change than by an acoustic change. These results demonstrate that phoneme perception in adults relies on a specific and highly efficient left-hemispheric network, which can be activated in top-down fashion when processing ambiguous speech/non-speech stimuli.


Hierarchical organization of speech perception in human auditory cortex.

  • Colin Humphries‎ et al.
  • Frontiers in neuroscience‎
  • 2014‎

Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to a series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.


Auditory perception of natural sound categories--an fMRI study.

  • M Sharda‎ et al.
  • Neuroscience‎
  • 2012‎

Despite an extremely rich and complex auditory environment, human beings categorize sounds effortlessly. While it is now well-known that this ability is a result of complex interaction of bottom-up processing of low-level acoustic features and top-down influences like evolutionary relevance, it is yet unclear how these processes drive categorization. The objective of the current study was to use functional neuroimaging to investigate the contribution of these two processes for category selectivity in the cortex. We used a set of ecologically valid sounds that belonged to three different categories: animal vocalizations, environmental sounds and human non-speech sounds, all matched on acoustic structure attributes like harmonic-to-noise ratio to minimize differences in bottom-up processing as well as matched for familiarity to rule out other top-down influences. Participants performed a loudness judgment task in the scanner and data were acquired using a sparse-temporal sampling paradigm. Our functional imaging results show that there is category selectivity in the cortex only for species-specific vocalizations and this is revealed in six clusters in the right and left STG/STS. Category selectivity was not observed for any other category of sounds. Our findings suggest a potential role of evolutionary relevance for cortical processing of sounds. While this seems to be an appealing proposition, further studies are required to explore the role of top-down mechanisms arising from such features to drive category selectivity in the brain.


Evaluation of auditory perception development in neonates by quantitative electroencephalography and auditory event-related potentials.

  • Qinfen Zhang‎ et al.
  • PloS one‎
  • 2017‎

The present study was performed to investigate neonatal auditory perception function by quantitative electroencephalography (QEEG) and auditory event-related potentials (aERPs) and identify the characteristics of auditory perception development in newborns.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or change to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org then you can log in from here to get additional features in FDI Lab - SciCrunch.org such as Collections, Saved Searches, and managing Resources.

  4. Searching

    Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching (example queries appear after this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here, you can save any searches you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
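
For illustration, here are a few hypothetical queries that combine the tips from the Searching step above:

    "auditory perception"        exact phrase match
    auditory AND cortex          both terms required via an explicit AND
    speech OR music              either term may match
    Cerebellum -CA1              exclude results containing CA1
    +RRID perception             require the term RRID to appear in the data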

[Publications Per Year chart: count of matching publications by year]