Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the searchable documents are limited to those containing RRIDs; this does not include all open-access literature.
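
As a rough illustration of what "literature that cites resources" means in practice, the sketch below scans text for RRID-style citations. It is a minimal example, not SciCrunch's implementation; the regular expression and the sample identifiers are illustrative assumptions.

```python
import re

# Hypothetical helper (not SciCrunch's code): find RRID citations in free text.
# RRIDs are cited as "RRID:<prefix>_<identifier>", e.g. RRID:AB_2298772 or RRID:SCR_003070.
RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+[_:][A-Za-z0-9_-]+)")

def find_rrids(text: str) -> list[str]:
    """Return the resource identifiers cited in a block of text."""
    return RRID_PATTERN.findall(text)

print(find_rrids("Antibodies were validated (RRID:AB_2298772) and analyzed in ImageJ (RRID:SCR_003070)."))
# -> ['AB_2298772', 'SCR_003070']
```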


Page 1: showing papers 1–20 of 3,350.

The over-estimation of distance for self-voice versus other-voice.

  • Wen Wen‎ et al.
  • Scientific reports‎
  • 2022‎

Self-related stimuli are important cues for people to recognize themselves in the external world and hold a special status in our perceptual system. Self-voice plays an important role in daily social communication and is also a frequent input for self-identification. Although many studies have been conducted on the acoustic features of self-voice, no research has ever examined the spatial aspect, although the spatial perception of voice is important for humans. This study proposes a novel perspective for studying self-voice. We investigated people's distance perception of their own voice when the voice was heard from an external position. Participants heard their own voice from one of four speakers located either 90 or 180 cm from their sitting position, either immediately after uttering a short vowel (i.e., active session) or hearing the replay of their own pronunciation (i.e., replay session). They were then asked to indicate which speaker they heard the voice from. Their voices were either pitch-shifted by ± 4 semitones (i.e., other-voice condition) or unaltered (i.e., self-voice condition). The results of spatial judgment showed that self-voice from the closer speakers was misattributed to that from the speakers further away at a significantly higher proportion than other-voice. This phenomenon was also observed when the participants remained silent and heard prerecorded voices. Additional structural equation modeling using participants' schizotypal scores showed that the effect of self-voice on distance perception was significantly associated with the score of delusional thoughts (Peters Delusion Inventory) and distorted body image (Perceptual Aberration Scale) in the active speaking session but not in the replay session. The findings of this study provide important insights for understanding how people process self-related stimuli when there is a small distortion and how this may be linked to the risk of psychosis.


A dataset of histograms of original and fake voice recordings (H-Voice).

  • Dora M Ballesteros‎ et al.
  • Data in brief‎
  • 2020‎

This paper presents H-Voice, a dataset of 6672 histograms of original and fake voice recordings obtained by the Imitation [1,2] and the Deep Voice [3] methods. The dataset is organized into six directories: Training_fake, Training_original, Validation_fake, Validation_original, External_test1, and External_test2. The training directories include 2088 histograms of fake voice recordings and 2020 histograms of original voice recordings. Each validation directory has 864 histograms obtained from fake voice recordings and original voice recordings. Finally, External_test1 has 760 histograms (380 from fake voice recordings obtained by the Imitation method and 380 from original voice recordings), and External_test2 has 76 histograms (72 from fake voice recordings obtained by the Deep Voice method and 4 from original voice recordings). With this dataset, the researchers can train, cross-validate and test classification models using machine learning techniques to identify fake voice recordings.
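
For orientation, here is a minimal sketch of how the split/label structure described above could be indexed for model training. It assumes the six directory names from the abstract and that each histogram is stored as one file; it is not the dataset authors' code.

```python
from pathlib import Path

# Minimal indexing sketch, not the dataset authors' code. The external test directories
# mix fake and original files, so their labels cannot be derived from folder names alone
# and are left out here.
LABELLED_DIRS = {
    "Training_fake": 1, "Training_original": 0,
    "Validation_fake": 1, "Validation_original": 0,
}

def index_h_voice(root: str) -> list[tuple[Path, int]]:
    """Return (histogram file, label) pairs, where 1 = fake voice and 0 = original voice."""
    pairs = []
    for dirname, label in LABELLED_DIRS.items():
        for path in sorted((Path(root) / dirname).glob("*")):
            if path.is_file():
                pairs.append((path, label))
    return pairs
```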


A visual analog scale for patient-reported voice outcomes: The VAS voice.

  • Matthew R Naunheim‎ et al.
  • Laryngoscope investigative otolaryngology‎
  • 2020‎

Although patient-reported outcome measures (PROMs) can be useful for assessing quality of life, they can be complex and cognitively burdensome. In this study, we prospectively evaluated a simple patient-reported voice assessment measure on a visual analog scale (VAS voice) and compared it with the Voice Handicap Index (VHI-10).


Neurocognitive dynamics of near-threshold voice signal detection and affective voice evaluation.

  • Huw Swanborough‎ et al.
  • Science advances‎
  • 2020‎

Communication and voice signal detection in noisy environments are universal tasks for many species. The fundamental problem of detecting voice signals in noise (VIN) is underinvestigated especially in its temporal dynamic properties. We investigated VIN as a dynamic signal-to-noise ratio (SNR) problem to determine the neurocognitive dynamics of subthreshold evidence accrual and near-threshold voice signal detection. Experiment 1 showed that dynamic VIN, including a varying SNR and subthreshold sensory evidence accrual, is superior to similar conditions with nondynamic SNRs or with acoustically matched sounds. Furthermore, voice signals with affective meaning have a detection advantage during VIN. Experiment 2 demonstrated that VIN is driven by an effective neural integration in an auditory cortical-limbic network at and beyond the near-threshold detection point, which is preceded by activity in subcortical auditory nuclei. This demonstrates the superior recognition advantage of communication signals in dynamic noise contexts, especially when carrying socio-affective meaning.


Active Ingredients of Voice Therapy for Muscle Tension Voice Disorders: A Retrospective Data Audit.

  • Catherine Madill‎ et al.
  • Journal of clinical medicine‎
  • 2021‎

Although voice therapy is the first-line treatment for muscle-tension voice disorders (MTVD), no clinical research has investigated the role of specific active ingredients. This study aimed to evaluate the efficacy of active ingredients in the treatment of MTVD. A retrospective review of a clinical voice database was conducted on 68 MTVD patients who were treated using the optimal phonation task (OPT) and sob voice quality (SVQ), as well as two different processes: task variation and negative practice (NP). Mixed-model analysis was performed on auditory-perceptual and acoustic data from voice recordings at baseline and after each technique. Active ingredients were evaluated using effect sizes. Significant overall treatment effects were observed for the treatment program. Effect sizes ranged from 0.34 (post-NP) to 0.387 (post-SVQ) for overall severity ratings. Effect sizes ranged from 0.237 (post-SVQ) to 0.445 (post-NP) for a smoothed cepstral peak prominence measure. The treatment effects did not depend on MTVD type (primary or secondary), treating clinician, or the number of sessions or days between sessions. Implementation of individual techniques that promote improved voice quality and processes that support learning resulted in improved habitual voice quality. Both voice techniques and processes can be considered active ingredients in voice therapy.


The neural changes in connectivity of the voice network during voice pitch perturbation.

  • Sabina G Flagmeier‎ et al.
  • Brain and language‎
  • 2014‎

Voice control is critical to communication. To date, studies have used behavioral, electrophysiological and functional data to investigate the neural correlates of voice control using perturbation tasks, but have yet to examine the interactions of these neural regions. The goal of this study was to use structural equation modeling of functional neuroimaging data to examine network properties of voice with and without perturbation. Results showed that the presence of a pitch shift, which was processed as an error in vocalization, altered connections between the right STG and left STG. Other regions that revealed differences in connectivity during error detection and correction included the bilateral inferior frontal gyri and the primary and premotor cortices. Results indicated that the STG plays a critical role in voice control, specifically during error detection and correction. Additionally, pitch perturbation elicits changes in the voice network that suggest the right hemisphere is critical to pitch modulation.


Neural mechanisms for voice recognition.

  • Attila Andics‎ et al.
  • NeuroImage‎
  • 2010‎

We investigated neural mechanisms that support voice recognition in a training paradigm with fMRI. The same listeners were trained on different weeks to categorize the mid-regions of voice-morph continua as an individual's voice. Stimuli implicitly defined a voice-acoustics space, and training explicitly defined a voice-identity space. The pre-defined centre of the voice category was shifted from the acoustic centre each week in opposite directions, so the same stimuli had different training histories on different tests. Cortical sensitivity to voice similarity appeared over different time-scales and at different representational stages. First, there were short-term adaptation effects: increasing acoustic similarity to the directly preceding stimulus led to haemodynamic response reduction in the middle/posterior STS and in right ventrolateral prefrontal regions. Second, there were longer-term effects: response reduction was found in the orbital/insular cortex for stimuli that were most versus least similar to the acoustic mean of all preceding stimuli, and, in the anterior temporal pole, the deep posterior STS and the amygdala, for stimuli that were most versus least similar to the trained voice-identity category mean. These findings are interpreted as effects of neural sharpening of long-term stored typical acoustic and category-internal values. The analyses also reveal anatomically separable voice representations: one in a voice-acoustics space and one in a voice-identity space. Voice-identity representations flexibly followed the trained identity shift, and listeners with a greater identity effect were more accurate at recognizing familiar voices. Voice recognition is thus supported by neural voice spaces that are organized around flexible 'mean voice' representations.


Effects of a conservative in-patient voice treatment on the voice-related self-concept.

  • Bernhard Lehnert‎ et al.
  • European archives of oto-rhino-laryngology : official journal of the European Federation of Oto-Rhino-Laryngological Societies (EUFOS) : affiliated with the German Society for Oto-Rhino-Laryngology - Head and Neck Surgery‎
  • 2021‎

Observational study to determine whether the voice-related self-concept, as measured via the Fragebogen zur Erfassung des Stimmlichen Selbstkonzepts (FESS; questionnaire for the assessment of the voice self-concept), can be improved through in-patient voice therapy.


Voice Stress Analysis: A New Framework for Voice and Effort in Human Performance.

  • Martine Van Puyvelde‎ et al.
  • Frontiers in psychology‎
  • 2018‎

People rely on speech for communication, both in personal and professional contexts, and often under different conditions of physical, cognitive and/or emotional load. Since vocalization is entirely integrated within both our central (CNS) and autonomic nervous system (ANS), a growing number of studies have examined the relationship between voice output and the impact of stress. In the current paper, we outline the different stages of voice output, i.e., breathing, phonation and resonance, in relation to a neurovisceral integrated perspective on stress and human performance. In reviewing the function of these three stages of voice output, we give an overview of the voice parameters encountered in studies on voice stress analysis (VSA) and review the impact of the different types of physiological, cognitive and/or emotional load. In the Discussion, with regard to physical load, we describe the competition between the ventilation required to speak and that required to meet the metabolic demand of exercised muscles. With regard to cognitive and emotional load, we present the "Model for Voice and Effort" (MoVE), which comprises the integration of ongoing top-down and bottom-up activity under different types of load and combined patterns of voice output. In the MoVE, it is proposed that fundamental frequency (F0) values as well as jitter give insight into bottom-up/arousal activity and the effort a subject is capable of generating, but that their range and variance are related to ongoing top-down processes and the amount of control a subject can maintain. Within the MoVE, a key role is given to the anterior cingulate cortex (ACC), which is known to be involved in both the equilibration between bottom-up arousal and top-down regulation and in vocal activity. Moreover, the connectivity between the ACC and the nervus vagus (NV) is underlined as an indication of the importance of respiration. Since respiration is the driving force of both stress and voice production, it is hypothesized to be the missing link in our understanding of the underlying mechanisms of the dynamic between speech and stress.


Multidimensional voice assessment after Lee Silverman Voice Therapy (LSVT®) in Parkinson's disease.

  • Maria Raffaella Marchese‎ et al.
  • Acta otorhinolaryngologica Italica : organo ufficiale della Societa italiana di otorinolaringologia e chirurgia cervico-facciale‎
  • 2022‎

To investigate the effectiveness of Lee Silverman Voice Treatment (LSVT®) in improving prosody in patients with Parkinson's disease over medium-term follow-up.


The Sound of Voice: Voice-Based Categorization of Speakers' Sexual Orientation within and across Languages.

  • Simone Sulpizio‎ et al.
  • PloS one‎
  • 2015‎

Empirical research had initially shown that English listeners are able to identify the speakers' sexual orientation based on voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language-dependency) and to non-native speakers (language-specificity), has been questioned recently. Consequently, we address these open issues in 5 experiments: First, we tested whether Italian and German listeners are able to correctly identify sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects the listeners' expectations of how gay voices sound rather than being an accurate detector of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency and language specificity.


Self-perception of voice, hearing, and general health in screening for voice changes in older women.

  • Maria Clara Rocha‎ et al.
  • CoDAS‎
  • 2024‎

To verify the association between sociodemographic factors, vocal behavior, morbidities, and self-perception of voice, hearing, and general health in older women with voice disorders.


The influence of voice familiarity and linguistic content on dogs' ability to follow human voice direction.

  • Livia Langner‎ et al.
  • Scientific reports‎
  • 2023‎

Domestic dogs are well-known for their ability to utilize human referential cues for problem solving, including following the direction of the human voice. This study investigated whether dogs can locate hidden food relying only on the direction of a human voice and whether familiarity with the speaker (owner/stranger) and the relevance of auditory signal features (ostensive addressing indicating the intent for communication to the receiver; linguistic content) affect performance. N = 35 dogs and their owners participated in four conditions in a two-way object choice task. Dogs were presented with referential auditory cues representing different combinations of three contextual parameters: (I) 'familiarity with the human informant' (owner vs. stranger), (II) the communicative function of the attention getter (ostensive addressing vs. non-ostensive cueing) and (III) the 'tone and content of the auditory cue' (high-pitched/potentially relevant vs. low-pitched/potentially irrelevant). Dogs also participated in a 'standard' pointing condition where a visual cue was provided. Significant differences were observed between conditions regarding correct choices and response latencies, suggesting that dogs' responses to auditory signals are influenced by the combination of content and intonation of the message and the identity of the speaker. Dogs made correct choices most frequently when context-relevant auditory information was provided by their owners and showed less success when auditory signals came from the experimenter. Correct choices in the 'Pointing' condition were similar to the experimenter auditory conditions, but less frequent compared to the owner condition with potentially relevant auditory information. This was paralleled by shorter response latencies in the owner condition compared to the experimenter conditions, although the two measures were not related. Subjects' performance in response to the owner- and experimenter-given auditory cues was interrelated, but unrelated to responses to pointing gestures, suggesting that dogs' ability to understand the referential nature of auditory cues and visual gestures partly arises from different socio-cognitive skills.


Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition.

  • Stefanie Schelinski‎ et al.
  • Social cognitive and affective neuroscience‎
  • 2016‎

The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: Passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG)-a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses.


Auditory traits of "own voice".

  • Marino Kimura‎ et al.
  • PloS one‎
  • 2018‎

People perceive their recorded voice differently from their actively spoken voice. The uncanny valley theory proposes that as an object approaches humanlike characteristics, there is an increase in the sense of familiarity; however, eventually a point is reached where the object becomes strangely similar and makes us feel uneasy. The feeling of discomfort experienced when people hear their recorded voice may correspond to the floor of the proposed uncanny valley. To overcome the feeling of eeriness of own-voice recordings, previous studies have suggested equalization of the recorded voice with various types of filters, such as step, bandpass, and low-pass, yet the effectiveness of these filters has not been evaluated. To address this, the aim of experiment 1 was to identify what type of voice recording was the most representative of one's own voice. The voice recordings were presented in five different conditions: unadjusted recorded voice, step filtered voice, bandpass filtered voice, low-pass filtered voice, and a voice for which the participants freely adjusted the parameters. We found large individual differences in the most representative own-voice filter. In order to consider roles of sense of agency, experiment 2 investigated if lip-synching would influence the rating of own voice. The result suggested lip-synching did not affect own voice ratings. In experiment 3, based on the assumption that the voices used in previous experiments corresponded to continuous representations of non-own voice to own voice, the existence of an uncanny valley was examined. Familiarity, eeriness, and the sense of own voice were rated. The result did not support the existence of an uncanny valley. Taken together, the experiments led us to the following conclusions: there is no general filter that can represent own voice for everyone, sense of agency has no effect on own voice rating, and the uncanny valley does not exist for own voice, specifically.
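
To make the filtering manipulation concrete, the sketch below applies a low-pass equalization of the kind compared in this study; the 2 kHz cutoff and 4th-order Butterworth design are illustrative assumptions, not the parameters used by the authors.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Illustrative low-pass equalization of a recorded voice (cutoff and order are assumptions).
def low_pass_voice(samples: np.ndarray, sample_rate: int,
                   cutoff_hz: float = 2000.0, order: int = 4) -> np.ndarray:
    """Apply a zero-phase Butterworth low-pass filter to a mono voice recording."""
    sos = butter(order, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, samples)
```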


Short Implicit Voice Training Affects Listening Effort During a Voice Cue Sensitivity Task With Vocoder-Degraded Speech.

  • Ada Biçer‎ et al.
  • Ear and hearing‎

Understanding speech in real life can be challenging and effortful, such as in multiple-talker listening conditions. Fundamental frequency (fo) and vocal-tract length (vtl) voice cues can help listeners segregate talkers, enhancing speech perception in adverse listening conditions. Previous research showed lower sensitivity to fo and vtl voice cues when the speech signal was degraded, such as in cochlear implant hearing and vocoder listening compared to normal hearing, likely contributing to difficulties in understanding speech in adverse listening conditions. Nevertheless, when multiple talkers are present, familiarity with a talker's voice, via training or exposure, could provide a speech intelligibility benefit. In this study, the objective was to assess how implicit short-term voice training could affect perceptual discrimination of voice cues (fo+vtl), measured in sensitivity and listening effort, with or without vocoder degradations.


Checking for voice disorders without clinical intervention: The Greek and global VHI thresholds for voice disordered patients.

  • Dionysios Tafiadis‎ et al.
  • Scientific reports‎
  • 2019‎

Voice disorders often remain undiagnosed. Many self-perceived questionnaires exist for various medical conditions. Here, we used the Greek Voice Handicap Index (VHI) to address the aforementioned problem. Everyone can fill in the VHI questionnaire and rate their symptoms easily. The innovative feature of this research is the global cut-off score calculated for the VHI. Therefore, the VHI is now capable of helping clinicians establish a more customizable treatment plan with the cut-off point identifying patients without normal phonation. For the purpose of finding the global cut-off point, a group of 180 participants was recruited in Greece (90 non-dysphonic participants and 90 with different types of dysphonia). The voice disordered group had higher VHI scores than those of the control group. In contrast to previous studies, we provided and validated for the first time the cut-off points for all VHI domains and, finally, a global cut-off point through ROC and precision-recall analysis in a voice disordered population. In practice, a score higher than the well-estimated global score indicates (without intervention) a possible voice disorder. Nevertheless, if the score is near the threshold, then the patient should definitely follow preventive measures.
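
As a sketch of how such a cut-off can be derived from ROC analysis, the example below picks the threshold that maximizes Youden's J. The scores are random placeholders standing in for VHI totals, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder scores: 90 controls and 90 voice-disordered participants (not study data).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(10, 5, 90), rng.normal(35, 10, 90)])
labels = np.concatenate([np.zeros(90), np.ones(90)])  # 0 = control, 1 = voice disordered

fpr, tpr, thresholds = roc_curve(labels, scores)
cutoff = thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = sensitivity + specificity - 1
print(f"Scores above {cutoff:.1f} would flag a possible voice disorder.")
```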


Sleep deprivation detected by voice analysis.

  • Etienne Thoret‎ et al.
  • PLoS computational biology‎
  • 2024‎

Sleep deprivation has an ever-increasing impact on individuals and societies. Yet, to date, there is no quick and objective test for sleep deprivation. Here, we used automated acoustic analyses of the voice to detect sleep deprivation. Building on current machine-learning approaches, we focused on interpretability by introducing two novel ideas: the use of a fully generic auditory representation as input feature space, combined with an interpretation technique based on reverse correlation. The auditory representation consisted of a spectro-temporal modulation analysis derived from neurophysiology. The interpretation method aimed to reveal the regions of the auditory representation that supported the classifiers' decisions. Results showed that generic auditory features could be used to detect sleep deprivation successfully, with an accuracy comparable to state-of-the-art speech features. Furthermore, the interpretation revealed two distinct effects of sleep deprivation on the voice: changes in slow temporal modulations related to prosody and changes in spectral features related to voice quality. Importantly, the relative balance of the two effects varied widely across individuals, even though the amount of sleep deprivation was controlled, thus confirming the need to characterize sleep deprivation at the individual level. Moreover, while the prosody factor correlated with subjective sleepiness reports, the voice quality factor did not, consistent with the presence of both explicit and implicit consequences of sleep deprivation. Overall, the findings show that individual effects of sleep deprivation may be observed in vocal biomarkers. Future investigations correlating such markers with objective physiological measures of sleep deprivation could enable "sleep stethoscopes" for the cost-effective diagnosis of the individual effects of sleep deprivation.
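
The general shape of this pipeline (acoustic features, a classifier, then inspection of what drives the decision) can be sketched as below. Note the substitutions: the paper uses a spectro-temporal modulation representation and a reverse-correlation interpretation, whereas this stand-in uses a time-averaged log-mel spectrum and reads a linear model's coefficients.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

# Generic stand-in for the described pipeline, not the authors' code.
def voice_features(wav_path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Summarize a recording as its time-averaged log-mel spectrum."""
    y, _ = librosa.load(wav_path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-9).mean(axis=1)

def train_detector(wav_paths: list[str], sleep_deprived: list[int]) -> LogisticRegression:
    """Fit a linear classifier; its coefficients show which frequency bands drive the decision."""
    X = np.stack([voice_features(p) for p in wav_paths])
    return LogisticRegression(max_iter=1000).fit(X, sleep_deprived)
```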


Human voice attractiveness processing: Electrophysiological evidence.

  • Hang Zhang‎ et al.
  • Biological psychology‎
  • 2020‎

Voice attractiveness plays a significant role in social interaction and mate choice. However, how listeners perceive attractive voices, and whether this process is mandatory, is poorly understood. The current study explores this question using event-related brain potentials. Participants listened to syllables spoken by male and female voices with high or low attractiveness while completing an implicit (voice-unrelated) tone detection task or explicitly judging voice attractiveness. In both tasks, attractive male voices elicited a larger N1 than unattractive voices. However, an effect of voice attractiveness on the late positive complex (LPC) was seen only in the explicit task, but it was present for both same- and opposite-sex voices. Taken together, voice attractiveness processing during early stages appears to be rapid, mandatory, and related to mate selection, whereas during later elaborated processing, voice attractiveness is strategic and aesthetics-based, requiring attentional resources.


Voice analysis results in individuals with Alzheimer's disease: How do age and cognitive status affect voice parameters?

  • Mümüne Merve Parlak‎ et al.
  • Brain and behavior‎
  • 2023‎

Reports of acoustic changes in the voice in individuals with Alzheimer's disease (AD) and the relationship of acoustic changes with age and cognitive status are still limited.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to get additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching (example queries follow this tutorial list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND and OR between terms to change how the words are combined
    3. You can add "-" to a term to exclude results containing it (e.g., Cerebellum -CA1)
    4. You can add "+" to a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform here for quick access later.

  6. Query Expansion

    We recognize your search term and include synonyms and inferred terms alongside it to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs page to ask questions and see our tutorials.
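
The example queries referenced under "Searching" above, using the syntax from tips 1-4; how the server parses them is up to SciCrunch, so treat these purely as illustrations.

```python
# Example query strings using the syntax from the "Searching" tips above.
example_queries = [
    '"voice disorder"',   # quotes match the phrase exactly
    'voice AND therapy',  # explicit AND/OR between terms
    'voice -singing',     # "-" excludes results containing a term
    'voice +RRID',        # "+" requires the term to appear in the data
]
for q in example_queries:
    print(q)
```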

Publications Per Year (interactive chart of paper counts per year; year and count values not captured here)