Searching across hundreds of databases

This service searches only literature that cites research resources. Please be aware that the searchable corpus is limited to documents containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1-14 of 14.

LaDIVA: A neurocomputational model providing laryngeal motor control for speech acquisition and production.

  • Hasini R Weerathunge et al.
  • PLoS Computational Biology
  • 2022

Many voice disorders are the result of intricate neural and/or biomechanical impairments that are poorly understood. The limited knowledge of their etiological and pathophysiological mechanisms hampers effective clinical management. Behavioral studies have been used concurrently with computational models to better understand typical and pathological laryngeal motor control. Thus far, however, a unified computational framework that quantitatively integrates physiologically relevant models of phonation with the neural control of speech has not been developed. Here, we introduce LaDIVA, a novel neurocomputational model with physiologically based laryngeal motor control. We combined the DIVA model (an established neural network model of speech motor control) with the extended body-cover model (a physics-based vocal fold model). The resulting integrated model, LaDIVA, was validated by comparing its model simulations with behavioral responses to perturbations of auditory vocal fundamental frequency (fo) feedback in adults with typical speech. LaDIVA demonstrated capability to simulate different modes of laryngeal motor control, ranging from short-term (i.e., reflexive) and long-term (i.e., adaptive) auditory feedback paradigms, to generating prosodic contours in speech. Simulations showed that LaDIVA's laryngeal motor control displays properties of motor equivalence, i.e., LaDIVA could robustly generate compensatory responses to reflexive vocal fo perturbations with varying initial laryngeal muscle activation levels leading to the same output. The model can also generate prosodic contours for studying laryngeal motor control in running speech. LaDIVA can expand the understanding of the physiology of human phonation to enable, for the first time, the investigation of causal effects of neural motor control in the fine structure of the vocal signal.
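
To make the reflexive control idea concrete, here is a minimal sketch of partial compensation for a sustained fo perturbation via delayed negative auditory feedback. This is not the LaDIVA or DIVA implementation; the loop rate, gain, delay, and perturbation size are illustrative assumptions.

```python
import numpy as np

# Toy reflexive fo compensation loop; all parameters are illustrative assumptions,
# not LaDIVA values.
fs = 100                                   # control-loop rate (Hz)
t = np.arange(0, 3, 1 / fs)
target_fo = 120.0                          # intended fundamental frequency (Hz)
perturb = np.where(t >= 0.5, 10.0, 0.0)    # +10 Hz shift applied to the heard fo after 0.5 s
gain = 0.3                                 # fraction of the perceived error corrected
delay = int(0.1 * fs)                      # 100 ms auditory feedback delay

produced = np.full(len(t), target_fo)
for i in range(len(t) - 1):
    j = i - delay
    heard = produced[j] + perturb[j] if j >= 0 else target_fo
    error = heard - target_fo                   # auditory error signal
    produced[i + 1] = target_fo - gain * error  # proportional corrective response

comp = target_fo - produced[-1]
print(f"steady-state compensation: {comp:.1f} Hz opposing a 10.0 Hz shift ({100 * comp / 10:.0f}%)")
```

With these assumed values the loop settles at partial (roughly 23%) compensation, qualitatively resembling the incomplete compensatory responses typically reported for reflexive fo perturbations.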


The FACTS model of speech motor control: Fusing state estimation and task-based control.

  • Benjamin Parrell et al.
  • PLoS Computational Biology
  • 2019

We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech or FACTS model. FACTS employs a hierarchical state feedback control architecture to control a simulated vocal tract and produce intelligible speech. The model includes higher-level control of speech tasks and lower-level control of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, after Task Dynamics. Both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This estimate is derived, based on efference copy of applied controls, from a forward model that predicts both the next vocal tract state as well as expected auditory and somatosensory feedback. A comparison between predicted feedback and actual feedback is then used to update the internal state prediction. FACTS is able to qualitatively replicate many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by the loss of somatosensory feedback, and responds appropriately to externally-imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback and shows for the first time how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.
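
As a rough illustration of the predict/correct state-estimation loop described above, the sketch below advances an internal estimate with an efference copy of the motor command and then corrects it with the sensory prediction error. It is a generic one-dimensional observer with assumed plant, gains, and noise levels, not the FACTS code.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-dimensional predict/correct observer; the plant, gains, and noise levels
# are assumed for illustration only.
n_steps, dt = 400, 0.005
a, b = -2.0, 1.0                # toy linear "vocal tract" dynamics: dx/dt = a*x + b*u
K = 0.4                         # fixed correction gain applied to the prediction error
x_true, x_hat, target = 0.0, 0.0, 1.0

for _ in range(n_steps):
    x_prev_hat = x_hat
    u = 20.0 * (target - x_hat)                                # controller acts on the estimate
    x_true += dt * (a * x_true + b * u) + rng.normal(0, 0.01)  # plant update with motor noise
    y = x_true + rng.normal(0, 0.05)                           # noisy sensory feedback
    x_pred = x_prev_hat + dt * (a * x_prev_hat + b * u)        # forward-model prediction (efference copy)
    x_hat = x_pred + K * (y - x_pred)                          # correct with the sensory prediction error

print(f"true state {x_true:.2f}, internal estimate {x_hat:.2f} (driven toward target {target})")
```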


Mechanisms of sensorimotor adaptation in a hierarchical state feedback control model of speech.

  • Kwang S Kim et al.
  • PLoS Computational Biology
  • 2023

Upon perceiving sensory errors during movements, the human sensorimotor system updates future movements to compensate for the errors, a phenomenon called sensorimotor adaptation. One component of this adaptation is thought to be driven by sensory prediction errors-discrepancies between predicted and actual sensory feedback. However, the mechanisms by which prediction errors drive adaptation remain unclear. Here, auditory prediction error-based mechanisms involved in speech auditory-motor adaptation were examined via the feedback aware control of tasks in speech (FACTS) model. Consistent with theoretical perspectives in both non-speech and speech motor control, the hierarchical architecture of FACTS relies on both the higher-level task (vocal tract constrictions) as well as lower-level articulatory state representations. Importantly, FACTS also computes sensory prediction errors as a part of its state feedback control mechanism, a well-established framework in the field of motor control. We explored potential adaptation mechanisms and found that adaptive behavior was present only when prediction errors updated the articulatory-to-task state transformation. In contrast, designs in which prediction errors updated forward sensory prediction models alone did not generate adaptation. Thus, FACTS demonstrated that 1) prediction errors can drive adaptation through task-level updates, and 2) adaptation is likely driven by updates to task-level control rather than (only) to forward predictive models. Additionally, simulating adaptation with FACTS generated a number of important hypotheses regarding previously reported phenomena such as identifying the source(s) of incomplete adaptation and driving factor(s) for changes in the second formant frequency during adaptation to the first formant perturbation. The proposed model design paves the way for a hierarchical state feedback control framework to be examined in the context of sensorimotor adaptation in both speech and non-speech effector systems.
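
For readers unfamiliar with trial-level adaptation modeling, the sketch below shows a standard single-rate state-space learning rule (retention plus error-driven update). It is a generic illustration of error-driven adaptation with assumed rates, not the FACTS update mechanism examined in the paper.

```python
import numpy as np

# Generic single-rate state-space model of trial-by-trial adaptation; the
# retention (A) and learning (B) rates are assumed, not FACTS parameters.
perturbation = np.r_[np.zeros(20), np.full(60, 100.0)]   # e.g., a +100 Hz formant shift
A, B = 0.98, 0.15
x = 0.0                           # current compensatory adjustment
history = []

for p in perturbation:
    error = p - x                 # sensory prediction error experienced on this trial
    x = A * x + B * error         # retain most of the state, learn from the error
    history.append(x)

print(f"adaptation after {len(perturbation)} trials: {history[-1]:.1f} of a 100.0 perturbation")
```

Note that the asymptote (about 88 here) falls short of full compensation, which is one simple way such models account for the incomplete adaptation mentioned above.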


Spectrally specific temporal analyses of spike-train responses to complex sounds: A unifying framework.

  • Satyabrata Parida et al.
  • PLoS Computational Biology
  • 2021

Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, which results because these response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity compared to previous metrics, e.g., the correlogram peak-height, (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., that depend on modulation filter banks), and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
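
The framework, as described, builds on peristimulus-time histograms (PSTHs) to opposite stimulus polarities. A common way to use such PSTHs, sketched below with synthetic Poisson spike trains and a crude half-wave-rectifying "transduction" stage (not the authors' code or real data), is to form their half-sum and half-difference, which emphasize the envelope and the temporal fine structure, respectively.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic inhomogeneous-Poisson spike trains to a modulated tone presented at
# two polarities; all stimulus and rate parameters are assumptions.
fs = 10000                       # PSTH sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
carrier, env_rate = 500.0, 20.0
envelope = 0.5 * (1 + np.sin(2 * np.pi * env_rate * t))

def psth(polarity, n_reps=200):
    # half-wave-rectified drive as a crude stand-in for hair-cell transduction
    drive = np.maximum(polarity * np.sin(2 * np.pi * carrier * t) * envelope, 0)
    counts = rng.poisson(100.0 * drive / fs, size=(n_reps, len(t)))
    return counts.mean(axis=0) * fs          # spikes/s

p_plus, p_minus = psth(+1), psth(-1)
sum_psth = 0.5 * (p_plus + p_minus)          # emphasizes the envelope
dif_psth = 0.5 * (p_plus - p_minus)          # emphasizes the temporal fine structure

spec = lambda x: np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print("sum PSTH spectral peak near", freqs[spec(sum_psth).argmax()], "Hz (envelope)")
print("dif PSTH spectral peak near", freqs[spec(dif_psth).argmax()], "Hz (fine structure)")
```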


Neurally Encoding Time for Olfactory Navigation.

  • In Jun Park et al.
  • PLoS Computational Biology
  • 2016

Accurately encoding time is one of the fundamental challenges faced by the nervous system in mediating behavior. We recently reported that some animals have a specialized population of rhythmically active neurons in their olfactory organs with the potential to peripherally encode temporal information about odor encounters. If these neurons do indeed encode the timing of odor arrivals, it should be possible to demonstrate that this capacity has some functional significance. Here we show how this sensory input can profoundly influence an animal's ability to locate the source of odor cues in realistic turbulent environments-a common task faced by species that rely on olfactory cues for navigation. Using detailed data from a turbulent plume created in the laboratory, we reconstruct the spatiotemporal behavior of a real odor field. We use recurrence theory to show that information about position relative to the source of the odor plume is embedded in the timing between odor pulses. Then, using a parameterized computational model, we show how an animal can use populations of rhythmically active neurons to capture and encode this temporal information in real time, and use it to efficiently navigate to an odor source. Our results demonstrate that the capacity to accurately encode temporal information about sensory cues may be crucial for efficient olfactory navigation. More generally, our results suggest a mechanism for extracting and encoding temporal information from the sensory environment that could have broad utility for neural information processing.


The what and where of adding channel noise to the Hodgkin-Huxley equations.

  • Joshua H Goldwyn et al.
  • PLoS Computational Biology
  • 2011

Conductance-based equations for electrically active cells form one of the most widely studied mathematical frameworks in computational biology. This framework, as expressed through a set of differential equations by Hodgkin and Huxley, synthesizes the impact of ionic currents on a cell's voltage--and the highly nonlinear impact of that voltage back on the currents themselves--into the rapid push and pull of the action potential. Later studies confirmed that these cellular dynamics are orchestrated by individual ion channels, whose conformational changes regulate the conductance of each ionic current. Thus, kinetic equations familiar from physical chemistry are the natural setting for describing conductances; for small-to-moderate numbers of channels, these will predict fluctuations in conductances and stochasticity in the resulting action potentials. At first glance, the kinetic equations provide a far more complex (and higher-dimensional) description than the original Hodgkin-Huxley equations or their counterparts. This has prompted more than a decade of efforts to capture channel fluctuations with noise terms added to the equations of Hodgkin-Huxley type. Many of these approaches, while intuitively appealing, produce quantitative errors when compared to kinetic equations; others, as only very recently demonstrated, are both accurate and relatively simple. We review what works, what doesn't, and why, seeking to build a bridge to well-established results for the deterministic equations of Hodgkin-Huxley type as well as to more modern models of ion channel dynamics. As such, we hope that this review will speed emerging studies of how channel noise modulates electrophysiological dynamics and function. We supply user-friendly MATLAB simulation code of these stochastic versions of the Hodgkin-Huxley equations on the ModelDB website (accession number 138950) and http://www.amath.washington.edu/~etsb/tutorials.html.
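
As a minimal illustration of why channel number matters, the sketch below simulates a toy two-state channel population with assumed opening and closing rates (not the Hodgkin-Huxley gating scheme or any of the specific approximation methods reviewed): channel counts are updated with binomial transitions, and the open-fraction fluctuations shrink roughly as one over the square root of N.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-state channel population with assumed opening/closing rates; binomial
# state transitions per time step approximate the underlying Markov kinetics.
alpha, beta = 0.5, 1.5        # per-ms opening and closing rates (assumed)
dt, n_steps = 0.01, 50000     # time step and number of steps (ms)

def open_fraction_trace(N):
    n_open = 0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        n_open += rng.binomial(N - n_open, alpha * dt) - rng.binomial(n_open, beta * dt)
        trace[i] = n_open / N
    return trace

for N in (100, 1000, 10000):
    x = open_fraction_trace(N)[n_steps // 2:]          # discard the initial transient
    print(f"N = {N:5d}: mean open fraction {x.mean():.3f}, fluctuation SD {x.std():.4f}")
print("deterministic (N -> infinity) open fraction:", alpha / (alpha + beta))
```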


Human discrimination and modeling of high-frequency complex tones shed light on the neural codes for pitch.

  • Daniel R Guest et al.
  • PLoS Computational Biology
  • 2022

Accurate pitch perception of harmonic complex tones is widely believed to rely on temporal fine structure information conveyed by the precise phase-locked responses of auditory-nerve fibers. However, accurate pitch perception remains possible even when spectrally resolved harmonics are presented at frequencies beyond the putative limits of neural phase locking, and it is unclear whether residual temporal information, or a coarser rate-place code, underlies this ability. We addressed this question by measuring human pitch discrimination at low and high frequencies for harmonic complex tones, presented either in isolation or in the presence of concurrent complex-tone maskers. We found that concurrent complex-tone maskers impaired performance at both low and high frequencies, although the impairment introduced by adding maskers at high frequencies relative to low frequencies differed between the tested masker types. We then combined simulated auditory-nerve responses to our stimuli with ideal-observer analysis to quantify the extent to which performance was limited by peripheral factors. We found that the worsening of both frequency discrimination and F0 discrimination at high frequencies could be well accounted for (in relative terms) by optimal decoding of all available information at the level of the auditory nerve. A Python package is provided to reproduce these results, and to simulate responses to acoustic stimuli from the three previously published models of the human auditory nerve used in our analyses.
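
The ideal-observer logic can be sketched with toy Gaussian rate-place tuning curves and Poisson spike counts (not the published auditory-nerve models or the accompanying Python package): the Cramér-Rao bound converts population Fisher information into a best-case frequency discrimination threshold.

```python
import numpy as np

# Toy rate-place population; tuning shapes, rates, and durations are assumptions.
cf = np.logspace(np.log10(200), np.log10(12000), 300)   # characteristic frequencies (Hz)
bw = 0.15 * cf                                           # tuning bandwidths (Hz)
dur = 0.1                                                # observation window (s)

def expected_counts(x):
    # Gaussian rate-place tuning, 200 spikes/s peak over a 1 spike/s floor
    return dur * (1.0 + 200.0 * np.exp(-0.5 * ((x - cf) / bw) ** 2))

def fisher_information(x, dx=1.0):
    f = expected_counts(x)
    fp = (expected_counts(x + dx) - expected_counts(x - dx)) / (2 * dx)  # numerical derivative
    return np.sum(fp ** 2 / f)               # Fisher information for Poisson counts

for probe in (500.0, 2000.0, 8000.0):
    thresh = 1.0 / np.sqrt(fisher_information(probe))    # Cramer-Rao bound (Hz)
    print(f"{probe:7.0f} Hz: rate-place ideal-observer threshold ~ {thresh:.2f} Hz")
```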


Optimality of sparse olfactory representations is not affected by network plasticity.

  • Collins Assisi et al.
  • PLoS Computational Biology
  • 2020

The neural representation of a stimulus is repeatedly transformed as it moves from the sensory periphery to deeper layers of the nervous system. Sparsening transformations are thought to increase the separation between similar representations, encode stimuli with great specificity, maximize storage capacity of associative memories, and provide an energy efficient instantiation of information in neural circuits. In the insect olfactory system, odors are initially represented in the periphery as a combinatorial code with relatively simple temporal dynamics. Subsequently, in the antennal lobe this representation is transformed into a dense and complex spatiotemporal activity pattern. Next, in the mushroom body Kenyon cells (KCs), the representation is dramatically sparsened. Finally, in mushroom body output neurons (MBONs), the representation takes on a new dense spatiotemporal format. Here, we develop a computational model to simulate this chain of olfactory processing from the receptor neurons to MBONs. We demonstrate that representations of similar odorants are maximally separated, measured by the distance between the corresponding MBON activity vectors, when KC responses are sparse. Sparseness is maintained across variations in odor concentration by adjusting the feedback inhibition that KCs receive from an inhibitory neuron, the Giant GABAergic neuron. Different odor concentrations require different strength and timing of feedback inhibition for optimal processing. Importantly, as observed in vivo, the KC-MBON synapse is highly plastic, and, therefore, changes in synaptic strength after learning can change the balance of excitation and inhibition, potentially leading to changes in the distance between MBON activity vectors of two odorants for the same level of KC population sparseness. Thus, what is an optimal degree of sparseness before odor learning, could be rendered sub-optimal post learning. Here, we show, however, that synaptic weight changes caused by spike timing dependent plasticity increase the distance between the odor representations from the perspective of MBONs. A level of sparseness that was optimal before learning remains optimal post-learning.
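
The sparseness-versus-separation idea can be probed with a toy random-projection model (random PN-to-KC and KC-to-MBON weights and a simple activation threshold, not the published network): raising the KC threshold makes the code sparser, and the script reports how the distance between the MBON readouts of two similar odors changes with that sparseness.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy olfactory cascade: PN inputs -> thresholded KC layer -> MBON readout.
# Weights, odor statistics, and thresholds are all assumptions.
n_pn, n_kc, n_mbon = 50, 2000, 20
W_kc = rng.normal(0, 1, (n_kc, n_pn)) / np.sqrt(n_pn)      # PN -> KC weights
W_mbon = rng.normal(0, 1, (n_mbon, n_kc)) / np.sqrt(n_kc)  # KC -> MBON weights

odor_a = rng.normal(0, 1, n_pn)
odor_b = 0.9 * odor_a + np.sqrt(1 - 0.9 ** 2) * rng.normal(0, 1, n_pn)  # similar odor

def mbon_response(odor, threshold):
    kc = np.maximum(W_kc @ odor - threshold, 0)    # thresholding controls KC sparseness
    return W_mbon @ kc, np.mean(kc > 0)

for threshold in (0.0, 1.0, 2.0):
    ra, frac_active = mbon_response(odor_a, threshold)
    rb, _ = mbon_response(odor_b, threshold)
    # normalize so we compare pattern separation rather than overall response size
    d = np.linalg.norm(ra / (np.linalg.norm(ra) + 1e-12) - rb / (np.linalg.norm(rb) + 1e-12))
    print(f"threshold {threshold:.1f}: KC active fraction {frac_active:.3f}, normalized MBON distance {d:.3f}")
```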


Spectral tuning of adaptation supports coding of sensory context in auditory cortex.

  • Mateo Lopez Espejo et al.
  • PLoS Computational Biology
  • 2019

Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable. Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
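
A rough sketch of the "local STP" idea is shown below: a Tsodyks-Markram-style depression variable is applied per spectral channel in front of a linear-nonlinear stage. The filter, output nonlinearity, and STP parameters are assumptions for illustration, not the fitted models from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# LN model with per-channel short-term depression on the input; all parameters
# and the random spectro-temporal filter are illustrative assumptions.
n_chan, n_t, dt = 18, 2000, 0.01           # channels, time bins, bin size (s)
stim = rng.gamma(1.0, 1.0, (n_chan, n_t))  # nonnegative spectrogram-like input

u, tau_rec = 0.2, 0.15                     # release fraction and recovery time constant (s)
def depress(x):
    r = np.ones(n_chan)                    # available synaptic "resource" per channel
    out = np.empty_like(x)
    for k in range(x.shape[1]):
        out[:, k] = r * x[:, k]
        r += dt * (1 - r) / tau_rec - u * r * x[:, k] * dt   # deplete with input, recover over time
        r = np.clip(r, 0, 1)
    return out

n_lag = 15
strf = rng.normal(0, 1, (n_chan, n_lag))   # random spectro-temporal filter (illustrative)
def ln_response(x):
    lin = sum(np.convolve(x[c], strf[c], mode="full")[:n_t] for c in range(n_chan))
    return np.log1p(np.exp(lin - lin.mean()))   # softplus output nonlinearity

rate_no_stp = ln_response(stim)
rate_stp = ln_response(depress(stim))
print("output correlation with vs without channel-wise depression:",
      np.corrcoef(rate_no_stp, rate_stp)[0, 1].round(3))
```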


Structural spine plasticity: Learning and forgetting of odor-specific subnetworks in the olfactory bulb.

  • John Hongyu Meng et al.
  • PLoS Computational Biology
  • 2022

Learning to discriminate between different sensory stimuli is essential for survival. In rodents, the olfactory bulb, which contributes to odor discrimination via pattern separation, exhibits extensive structural synaptic plasticity involving the formation and removal of synaptic spines, even in adult animals. The network connectivity resulting from this plasticity is still poorly understood. To gain insight into this connectivity we present here a computational model for the structural plasticity of the reciprocal synapses between the dominant population of excitatory principal neurons and inhibitory interneurons. It incorporates the observed modulation of spine stability by odor exposure. The model captures the striking experimental observation that the exposure to odors does not always enhance their discriminability: while training with similar odors enhanced their discriminability, training with dissimilar odors actually reduced the discriminability of the training stimuli. Strikingly, this differential learning does not require the activity-dependence of the spine stability and occurs also in a model with purely random spine dynamics in which the spine density is changed homogeneously, e.g., due to a global signal. However, the experimentally observed odor-specific reduction in the response of principal cells as a result of extended odor exposure and the concurrent disinhibition of a subset of principal cells arise only in the activity-dependent model. Moreover, this model predicts the experimentally testable recovery of odor response through weak but not through strong odor re-exposure and the forgetting of odors via exposure to interfering odors. Combined with the experimental observations, the computational model provides strong support for the prediction that odor exposure leads to the formation of odor-specific subnetworks in the olfactory bulb.


Gain control with A-type potassium current: IA as a switch between divisive and subtractive inhibition.

  • Joshua H Goldwyn et al.
  • PLoS Computational Biology
  • 2018

Neurons process and convey information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition typically suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron's output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. In this study, we use simulations and mathematical analysis of a neuron model to find the specific conditions (parameter sets) for which inhibitory inputs have subtractive or divisive effects. Significantly, we identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. In particular, if IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods (plane analysis and fast-slow dissection) to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
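
To make the two "modes" of inhibition concrete, the sketch below applies subtractive and divisive transformations to a bare threshold-linear f-I curve with assumed numbers; it illustrates the distinction itself, not the conductance-based model or the role of IA.

```python
import numpy as np

# Threshold-linear f-I curve; threshold, gain, and inhibition strengths are assumed.
drive = np.linspace(0, 10, 11)               # excitatory input (arbitrary units)
f = lambda x: np.maximum(x - 2.0, 0) * 10.0  # baseline f-I curve (threshold 2, gain 10 Hz/unit)

subtractive = f(drive - 3.0)                 # inhibition subtracts from the drive
divisive = f(drive) / 2.5                    # inhibition divides the output gain

for d, r_sub, r_div in zip(drive, subtractive, divisive):
    print(f"drive {d:4.1f}: subtractive {r_sub:6.1f} Hz   divisive {r_div:6.1f} Hz")
# Subtraction changes which inputs evoke firing at all (the threshold moves),
# while division preserves that range but scales the rate down, i.e., gain control.
```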


The Essential Complexity of Auditory Receptive Fields.

  • Ivar L Thorson et al.
  • PLoS Computational Biology
  • 2015

Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models.
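
The factorization constraint can be illustrated with a rank-reduced approximation of a spectro-temporal matrix. The STRF below is synthetic (two separable components plus noise), not fitted neural data; the point is only how parameter count drops when the matrix is expressed as a few spectral-times-temporal components.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic STRF built from two separable (spectral x temporal) components plus noise.
n_freq, n_lag = 30, 20
freq_idx, lag_idx = np.arange(n_freq), np.arange(n_lag)
spec1 = np.exp(-0.5 * ((freq_idx - 10) / 3.0) ** 2)
temp1 = np.exp(-lag_idx / 4.0) * np.sin(lag_idx / 2.0)
spec2 = -0.5 * np.exp(-0.5 * ((freq_idx - 18) / 4.0) ** 2)
temp2 = np.exp(-lag_idx / 6.0)
full_strf = np.outer(spec1, temp1) + np.outer(spec2, temp2) + 0.05 * rng.normal(size=(n_freq, n_lag))

# Approximate the full FIR STRF with a small number of separable components (SVD).
U, s, Vt = np.linalg.svd(full_strf, full_matrices=False)
for rank in (1, 2, 3):
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    n_params = rank * (n_freq + n_lag)        # one spectral and one temporal filter per component
    rel_err = np.linalg.norm(full_strf - approx) / np.linalg.norm(full_strf)
    print(f"rank {rank}: {n_params:3d} parameters (full FIR: {n_freq * n_lag}), relative error {rel_err:.2f}")
```

Parameterizing each spectral and temporal factor with a smooth low-dimensional function would reduce the count further, which is the second constraint ("low-dimensional parameterization of the factorized filters") described above.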


Paradoxical phase response of gamma rhythms facilitates their entrainment in heterogeneous networks.

  • Xize Xu et al.
  • PLoS Computational Biology
  • 2021

The synchronization of different γ-rhythms arising in different brain areas has been implicated in various cognitive functions. Here, we focus on the effect of the ubiquitous neuronal heterogeneity on the synchronization of ING (interneuronal network gamma) and PING (pyramidal-interneuronal network gamma) rhythms. The synchronization properties of rhythms depend on the response of their collective phase to external input. We therefore determine the macroscopic phase-response curve for finite-amplitude perturbations (fmPRC) of ING- and PING-rhythms in all-to-all coupled networks comprised of linear (IF) or quadratic (QIF) integrate-and-fire neurons. For the QIF networks we complement the direct simulations with the adjoint method to determine the infinitesimal macroscopic PRC (imPRC) within the exact mean-field theory. We show that the intrinsic neuronal heterogeneity can qualitatively modify the fmPRC and the imPRC. Both PRCs can be biphasic and change sign (type II), even though the phase-response curve for the individual neurons is strictly non-negative (type I). Thus, for ING rhythms, say, external inhibition to the inhibitory cells can, in fact, advance the collective oscillation of the network, even though the same inhibition would lead to a delay when applied to uncoupled neurons. This paradoxical advance arises when the external inhibition modifies the internal dynamics of the network by reducing the number of spikes of inhibitory neurons; the advance resulting from this disinhibition outweighs the immediate delay caused by the external inhibition. These results explain how intrinsic heterogeneity allows ING- and PING-rhythms to become synchronized with a periodic forcing or another rhythm for a wider range in the mismatch of their frequencies. Our results identify a potential function of neuronal heterogeneity in the synchronization of coupled γ-rhythms, which may play a role in neural information transfer via communication through coherence.


Generative embedding for model-based classification of fMRI data.

  • Kay H Brodersen et al.
  • PLoS Computational Biology
  • 2011

Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI). The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs) rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in 'hidden' physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs) and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and correlation-based methods. This example demonstrates how disease states can be detected with very high accuracy and, at the same time, be interpreted mechanistically in terms of abnormalities in connectivity. We envisage that future applications of generative embedding may provide crucial advances in dissecting spectrum disorders into physiologically more well-defined subgroups.
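
The classification step of generative embedding can be sketched with standard tools: each subject is represented by the parameters of a fitted generative model, and a linear SVM is trained on those parameters. The sketch below uses synthetic per-subject "connectivity" parameters standing in for DCM estimates and scikit-learn's SVM; it is not the study's pipeline or data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)

# Synthetic subject-wise generative-model parameters (stand-ins for DCM estimates).
n_per_group, n_params = 20, 6
controls = rng.normal(0.0, 1.0, (n_per_group, n_params))
patients = rng.normal(0.0, 1.0, (n_per_group, n_params))
patients[:, 2] += 1.5        # assume one "connection strength" differs between groups

X = np.vstack([controls, patients])
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

# Linear SVM on the model-parameter space, evaluated with stratified cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy on model parameters: {scores.mean():.2f} +/- {scores.std():.2f}")
```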


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against it. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. Combine terms with AND and OR to change how we search between words
    3. Add "-" in front of a term to exclude results containing it (e.g., Cerebellum -CA1)
    4. Add "+" in front of a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any search you perform from here for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year (chart of paper counts by publication year)