This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.
Many decisions rely on how we evaluate potential outcomes and estimate their corresponding probabilities of occurrence. Outcome evaluation is subjective because it requires consulting internal preferences and is sensitive to context. In contrast, probability estimation requires extracting statistics from the environment and therefore poses unique challenges for the decision maker. Here, we show that probability estimation, like outcome evaluation, is subject to context effects that bias probability estimates away from other events present in the same context. However, unlike valuation, these context effects appear to be scaled by estimated uncertainty, which is largest at intermediate probabilities. Blood-oxygen-level-dependent (BOLD) imaging showed that patterns of multivoxel activity in the dorsal anterior cingulate cortex (dACC), ventromedial prefrontal cortex (VMPFC), and intraparietal sulcus (IPS) predicted individual differences in context effects on probability estimates. These results establish VMPFC as the neurocomputational substrate shared between valuation and probability estimation and highlight the additional involvement of dACC and IPS that can be uniquely attributed to probability estimation. Because probability estimation is a required component of computational accounts ranging from sensory inference to higher cognition, the context effects found here may affect a wide array of cognitive computations.
This technical note describes the construction of posterior probability maps that enable conditional or Bayesian inferences about regionally specific effects in neuroimaging. Posterior probability maps are images of the probability or confidence that an activation exceeds some specified threshold, given the data. Posterior probability maps (PPMs) represent a complementary alternative to statistical parametric maps (SPMs) that are used to make classical inferences. However, a key problem in Bayesian inference is the specification of appropriate priors. This problem can be finessed using empirical Bayes in which prior variances are estimated from the data, under some simple assumptions about their form. Empirical Bayes requires a hierarchical observation model, in which higher levels can be regarded as providing prior constraints on lower levels. In neuroimaging, observations of the same effect over voxels provide a natural, two-level hierarchy that enables an empirical Bayesian approach. In this note we present a brief motivation and the operational details of a simple empirical Bayesian method for computing posterior probability maps. We then compare Bayesian and classical inference through the equivalent PPMs and SPMs testing for the same effect in the same data.
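The voxel-wise posterior probability computation can be illustrated with a toy Gaussian sketch. This is a minimal illustration, not the note's actual method: it assumes a Gaussian observation model with known noise variance and a zero-mean Gaussian prior whose variance is passed in directly, whereas in the empirical Bayes scheme described above that prior variance would be estimated from the data across voxels. All names are illustrative.

```python
import math
import numpy as np

def posterior_probability_map(beta, noise_var, prior_var, threshold):
    """Toy PPM: P(effect > threshold | data) per voxel, under a
    conjugate Gaussian model. beta: observed effect estimates per voxel;
    noise_var: observation noise variance; prior_var: prior variance on
    the true effect (empirical Bayes would estimate this from the data)."""
    beta = np.asarray(beta, dtype=float)
    shrink = prior_var / (prior_var + noise_var)   # shrinkage toward 0
    post_mean = shrink * beta                      # posterior mean per voxel
    post_sd = math.sqrt(shrink * noise_var)        # posterior std deviation
    # survival function of the Gaussian posterior at the threshold
    return np.array([0.5 * math.erfc((threshold - m) / (post_sd * math.sqrt(2.0)))
                     for m in post_mean])
```

Thresholding the resulting map at, say, 0.95 then yields the voxels whose activation exceeds the chosen effect size with at least 95% posterior confidence.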
The comprehension of the gene regulatory code in eukaryotes is one of the major challenges of systems biology, and is a requirement for the development of novel therapeutic strategies for multifactorial diseases. Its bi-fold degeneracy precludes brute-force and statistical approaches based on the genomic sequence alone. Rather, recursive integration of systematic, whole-genome experimental data with advanced statistical regulatory sequence predictions needs to be developed. Such experimental approaches, as well as the prediction tools, are only starting to become available, and increasing numbers of genome sequences and empirical sequence annotations are under continual discovery-driven change. Furthermore, given the complexity of the question, a decades-long, multi-laboratory effort needs to be envisioned. These constraints need to be considered in the creation of a framework that can pave a road to successful comprehension of the gene regulatory code.
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of the sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model-based tools may require data from simple random samples to ensure unbiased estimation, which can be problematic when analyzing data from unequal probability designs. Despite the numerous method-specific tools available to properly account for sampling design, sample design is too often ignored in the analysis of ecological data, and the consequences are not properly considered. We demonstrate here that violating the simple-random-sample assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB), an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools: linear regression, quantile regression, and boosted regression tree analysis.
In all models, using both simulated and actual ecological data, we found inferences to be biased, sometimes severely, when sample inclusion probabilities were ignored, while IPB sampling effectively produced unbiased parameter estimates.
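The core IPB step, resampling each observation with probability proportional to the inverse of its inclusion probability so that the resample behaves like a simple random sample, can be sketched in a few lines. The function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def ipb_resample(data, incl_prob, n_boot=1000, rng=None):
    """Inverse probability bootstrap: draw equal-probability resamples
    from an unequal-probability sample. Each unit is resampled with
    weight 1 / (its inclusion probability), normalized to sum to one."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    w = 1.0 / np.asarray(incl_prob, dtype=float)
    p = w / w.sum()                       # normalized resampling weights
    n = len(data)
    # each row is one bootstrap resample of the original sample size
    idx = rng.choice(n, size=(n_boot, n), replace=True, p=p)
    return data[idx]
```

Fitting the model of interest to each resample and averaging the estimates then removes the design-induced bias: for example, with values [1, 2, 3, 4] where the larger values were oversampled (inclusion probabilities [0.1, 0.1, 0.4, 0.4]), the naive sample mean of 2.5 overstates the inverse-probability-weighted target of 1.9, which the resamples recover.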
We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors, and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability (the likelihood of a particular color given a particular combination of two cues) varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.
How humans efficiently operate in a world with massive amounts of data that need to be processed, stored, and recalled has long been an unsettled question. Our physical and social environment needs to be represented in a structured way, which could be achieved by reducing input to latent variables in the form of probability distributions, as proposed by influential, probabilistic accounts of cognition and perception. However, few studies have investigated the neural processes underlying the brain's potential ability to represent a probability distribution's complex, global features. Here, we presented participants with a sequence of tones that formed a normal or a bimodal distribution. Using a novel, single-trial EEG analysis, we demonstrate a neural response that indexes the likelihood of an item, given previously presented items, and corresponds to the experienced tones' distribution. Our results indicate that the adult human brain can build a representation of the complex, global pattern of a probability distribution and offer a novel tool for an in-depth understanding of the related neural mechanisms.
When financial firms are undercapitalized, they are vulnerable to external shocks. The natural response to such vulnerability is to reduce leverage, and this can endogenously start a financial crisis. Excessive credit growth, the main cause of financial crises, is reflected in the undercapitalization of the financial sector. Market-based measures of systemic risk, such as SRISK, enable monitoring how such weakness emerges and progresses in real time. In this paper, we develop quantitative estimates of the level of systemic risk in the financial sector that precipitates a financial crisis. Common approaches to reduce leverage correspond to specific scaling of systemic risk measures. In an econometric framework that recognizes that financial crises represent left tail events for the economy, we estimate the relationship between SRISK and financial crisis severity for 23 developed countries. We develop a probability of crisis measure and an SRISK capacity measure based on our estimates. Our analysis highlights the important global externality whereby the risk of a crisis in one country is strongly influenced by the undercapitalization of the rest of the world.
Predicting all-cause mortality risk is challenging and requires extensive medical data. Recently, large-scale proteomics datasets have proven useful for predicting health-related outcomes. Here, we use measurements of levels of 4,684 plasma proteins in 22,913 Icelanders to develop all-cause mortality predictors for both short- and long-term risk. The participants were 18-101 years old with a mean follow-up of 13.7 (SD 4.7) years. During the study period, 7,061 participants died. Our proposed predictor outperformed, in survival prediction, a predictor based on conventional mortality risk factors. Among participants aged 60-80, we could identify the 5% at highest risk, of whom 88% died within ten years, and the 5% at lowest risk, of whom only 1% died. Furthermore, the predicted risk of death correlates with measures of frailty in an independent dataset. Our results show that the plasma proteome can be used to assess general health and estimate the risk of death.
Decompression sickness (DCS), which is caused by inert gas bubbles in tissues, is an injury of concern for scuba divers, compressed air workers, astronauts, and aviators. Case reports for 3322 air and N2-O2 dives, resulting in 190 DCS events, were retrospectively analyzed and the outcomes were scored as (1) serious neurological, (2) cardiopulmonary, (3) mild neurological, (4) pain, (5) lymphatic or skin, and (6) constitutional or nonspecific manifestations. Following standard U.S. Navy medical definitions, the data were grouped into mild (Type I; manifestations 4-6) and serious (Type II; manifestations 1-3). Additionally, we considered an alternative grouping of mild (Type A; manifestations 3-6) and serious (Type B; manifestations 1 and 2). The current U.S. Navy guidance allows for a 2% probability of mild DCS and a 0.1% probability of serious DCS. We developed a hierarchical trinomial (3-state) probabilistic DCS model that simultaneously predicts the probability of mild and serious DCS given a dive exposure. Both the Type I/II and Type A/B discriminations of mild and serious DCS resulted in a highly significant (p << 0.01) improvement in trinomial model fit over the binomial (2-state) model. With the Type I/II definition, we found that the predicted probability of mild DCS resulted in a longer allowable bottom time for the same 2% limit. However, for the 0.1% serious DCS limit, we found a vastly decreased allowable bottom time for all dive depths. If the Type A/B scoring was assigned to outcome severity, the no-decompression limits (NDL) for air dives were still controlled by the acceptable serious DCS risk limit rather than the acceptable mild DCS risk limit. However, in this case, longer NDLs were allowed than with the Type I/II scoring. The trinomial model's mild and serious probabilities agree reasonably well with the current air NDL only with the Type A/B scoring and only when a 0.2% risk of serious DCS is allowed.
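The trinomial likelihood at the core of such a 3-state model can be sketched generically. This is a minimal illustration, not the Navy model itself: in the actual model the per-dive mild and serious probabilities come from a fitted dose-response function of the dive exposure, whereas here they are supplied directly, and the function name is illustrative.

```python
import numpy as np

def trinomial_loglik(p_mild, p_serious, outcomes):
    """Log-likelihood of per-dive outcomes under a trinomial (3-state)
    DCS model. outcomes: 0 = no DCS, 1 = mild DCS, 2 = serious DCS.
    p_mild, p_serious: per-dive predicted probabilities of each state."""
    p_mild = np.asarray(p_mild, dtype=float)
    p_serious = np.asarray(p_serious, dtype=float)
    outcomes = np.asarray(outcomes)
    p_none = 1.0 - p_mild - p_serious            # remaining probability mass
    probs = np.stack([p_none, p_mild, p_serious], axis=1)  # (n_dives, 3)
    # pick each dive's probability for the outcome that actually occurred
    return float(np.sum(np.log(probs[np.arange(len(outcomes)), outcomes])))
```

Maximizing this quantity over the dose-response parameters that generate p_mild and p_serious is what fits the 3-state model; the binomial (2-state) comparison collapses the mild and serious states into one.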
Colocalization is a statistical method used in genetics to determine whether the same variant is causal for multiple phenotypes, for example, complex traits and gene expression. It provides stronger mechanistic evidence than shared significance, which can be produced through separate causal variants in linkage disequilibrium. Current colocalization methods require full summary statistics for both traits, limiting their use with the majority of reported GWAS associations (e.g. GWAS Catalog). We propose a new approximation to the popular coloc method that can be applied when limited summary statistics are available. Our method (POint EstiMation of Colocalization, POEMColoc) imputes missing summary statistics for one or both traits using LD structure in a reference panel, and performs colocalization using the imputed summary statistics.
Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor does any database allow programmatic access.
The goal of this paper is to investigate how entropy estimates change when the amplitude distribution of a time series is equalized using the probability integral transformation. The data we analyzed had known properties: pseudo-random signals with known distributions, mutually coupled using statistical or deterministic methods that include generators of statistically dependent distributions, linear and non-linear transforms, and deterministic chaos. The signal pairs were coupled using a correlation coefficient ranging from zero to one. The dependence of the signal samples was achieved by a moving average filter and by non-linear equations. The applied coupling methods were checked using statistical tests for correlation. The changes in signal regularity were checked by a multifractal spectrum. The probability integral transformation was then applied to cardiovascular time series (systolic blood pressure and pulse interval) acquired from laboratory animals, and the results of the entropy estimations are presented. We derived an expression for the reference value of entropy in probability integral transformed signals. We also experimentally evaluated the reliability of entropy estimates with respect to the matching probabilities.
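The probability integral transformation used here, mapping each sample through the amplitude distribution's CDF so that the result is approximately uniform regardless of the original distribution, can be sketched in its empirical, rank-based form (a generic sketch, not the paper's implementation):

```python
import numpy as np

def pit(x):
    """Empirical probability integral transformation: replace each sample
    by its rank-based empirical CDF value. The output amplitudes are
    approximately uniform on (0, 1). Ties are broken by sample order."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = np.argsort(np.argsort(x))   # 0-based rank of each sample
    return (ranks + 1) / (n + 1)        # (rank+1)/(n+1) avoids exact 0 and 1
```

Because the transform is a monotone remapping of amplitudes, the ordering (and hence much of the temporal structure probed by entropy estimators) is preserved while the amplitude distribution is equalized.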
Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations, and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework where users can trace neurons by fixing start and endpoints. ViterBrain is available in our open-source Python package brainlit.
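ViterBrain's global maximization is a Viterbi-style dynamic program over a hidden state sequence. A generic sketch of such a most-probable-path computation over log-scores is shown below; this is not the brainlit package's actual API, and in ViterBrain the states encode neuron geometry fragments rather than the abstract states used here.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Most probable state path by dynamic programming.
    log_init: (S,) log initial-state scores; log_trans: (S, S) log
    transition scores; log_emit: (T, S) log emission scores per step."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # best predecessor per step/state
    for t in range(1, T):
        cand = score[:, None] + log_trans   # cand[i, j]: come from i, go to j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emit[t]
    # backtrack from the best final state
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

The same two-pass structure (forward maximization, backward path recovery) is what makes the global maximizer computable in time linear in the sequence length.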
The neural bases of the so-called Spatial Cueing Effect in a visuo-auditory version of the Central Cue Posner's Paradigm (CCPP) are analyzed by means of behavioral patterns (Reaction Times and Errors) and Event-Related Potentials (ERPs), namely the Contingent Negative Variation (CNV), N1, P2a, P2p, P3a, P3b and Negative Slow Wave (NSW). The present version consisted of three types of trial blocks with different validity/invalidity proportions: 50% valid - 50% invalid trials, 68% valid - 32% invalid trials and 86% valid - 14% invalid trials. Thus, ERPs can be analyzed as the proportion of valid trials per block increases. Behavioral (Reaction Times and Incorrect responses) and ERP (lateralized component of the CNV, P2a, P3b and NSW) results showed a spatial cueing effect as the proportion of valid trials per block increased. Results suggest a brain activity modulation related to sensory-motor attention and working memory updating, in order to adapt to unpredictable external contingencies.
Semen analysis (SA) poorly predicts male fertility, because it does not assess sperm fertilizing ability. The percentage of capacitated sperm determined by GM1 localization ("Cap-Score™") differs between cohorts of fertile and potentially infertile men, and retrospectively, between men conceiving or failing to conceive by intrauterine insemination (IUI). Here, we prospectively tested whether Cap-Score can predict male fertility, with the outcome being clinical pregnancy within ≤3 IUI cycles. Cap-Score and SA were performed (n = 208) with outcomes initially available for 91 men. Men were predicted to have either a low (n = 47) or high (n = 44) chance of generating pregnancy using previously defined Cap-Score reference ranges. Absolute and cumulative pregnancy rates were reduced in men predicted to have low versus high pregnancy rates (absolute: 10.6% vs. 29.5%, p = 0.04; cumulative: 4.3% vs. 18.2%, 9.9% vs. 29.1%, and 14.0% vs. 32.8% for cycles 1-3, with n = 91, 64, and 41, p = 0.02). Only Cap-Score, not male/female age or SA results, differed significantly between outcome groups. Logistic regression evaluated Cap-Score and SA results relative to the probability of generating pregnancy (PGP) for men who were successful in, or completed, three IUI cycles (n = 57). Cap-Score was significantly related to PGP (p = 0.01). The model fit was then tested with 67 additional patients (n = 124; five clinics); the equation changed minimally, but the fit improved (p < 0.001; margin of error: 4%). The Akaike Information Criterion found that the best model used Cap-Score as the only predictor. These data show that Cap-Score provides a practical, predictive assessment of male fertility, with applications in assisted reproduction and treatment of male infertility.
Digital signaling enhances robustness of cellular decisions in noisy environments, but it is unclear how digital systems transmit temporal information about a stimulus. To understand how temporal input information is encoded and decoded by the NF-κB system, we studied transcription factor dynamics and gene regulation under dose- and duration-modulated inflammatory inputs. Mathematical modeling predicted, and microfluidic single-cell experiments confirmed, that the integral of the stimulus (or area: concentration × duration) controls the fraction of cells that activate NF-κB in the population. However, the stimulus temporal profile determined NF-κB dynamics, cell-to-cell variability, and gene expression phenotype. A sustained, weak stimulation led to heterogeneous activation and delayed timing that was transmitted to gene expression. In contrast, a transient, strong stimulus with the same area caused rapid and uniform dynamics. These results show that digital NF-κB signaling enables multidimensional control of cellular phenotype via input profile, allowing parallel and independent control of single-cell activation probability and population heterogeneity.
Optimal sensory decision-making requires the combination of uncertain sensory signals with prior expectations. The effect of prior probability is often described as a shift in the decision criterion. Can observers track sudden changes in probability? To answer this question, we used a change-point detection paradigm that is frequently used to examine behavior in changing environments. In a pair of orientation-categorization tasks, we investigated the effects of changing probabilities on decision-making. In both tasks, category probability was updated using a sample-and-hold procedure: probability was held constant for a period of time before jumping to another probability state that was randomly selected from a predetermined set of probability states. We developed an ideal Bayesian change-point detection model in which the observer marginalizes over both the current run length (i.e., time since last change) and the current category probability. We compared this model to various alternative models that correspond to different strategies-from approximately Bayesian to simple heuristics-that the observers may have adopted to update their beliefs about probabilities. While a number of models provided decent fits to the data, model comparison favored a model in which probability is estimated following an exponential averaging model with a bias towards equal priors, consistent with a conservative bias, and a flexible variant of the Bayesian change-point detection model with incorrect beliefs. We interpret the former as a simpler, more biologically plausible explanation suggesting that the mechanism underlying change of decision criterion is a combination of on-line estimation of prior probability and a stable, long-term equal-probability prior, thus operating at two very different timescales.
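The favored exponential-averaging update with a bias toward equal priors can be sketched as follows. The parameter names and the exact form of the bias term are illustrative assumptions for a minimal sketch, not the paper's fitted model.

```python
def exp_avg_probability(observations, alpha=0.1, bias=0.2, p0=0.5):
    """Trial-by-trial estimate of category probability by exponential
    averaging, pulled toward the equal-probability prior of 0.5 on each
    trial. observations: sequence of 0/1 category labels; alpha: learning
    rate; bias: strength of the conservative pull toward 0.5."""
    p = p0
    estimates = []
    for x in observations:
        p = (1 - alpha) * p + alpha * x      # exponential averaging step
        p = (1 - bias) * p + bias * 0.5      # conservative pull toward 0.5
        estimates.append(p)
    return estimates
```

With a nonzero bias the estimate can never converge all the way to an extreme probability, which is one simple way such a model produces the conservative criterion shifts described above while still tracking recent change points through the fast averaging term.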
The human striatum has been implicated in processing reward-related information. More recently, activity in the striatum, particularly the caudate nucleus, has been observed when a contingency between behavior and reward exists, suggesting a role for the caudate in reinforcement-based learning. Using a gambling paradigm, in which affective feedback (reward and punishment) followed simple, random guesses on a trial-by-trial basis, we sought to investigate the role of the caudate nucleus as reward-related learning progressed. Participants were instructed to make a guess regarding the value of a presented card (if the value of the card was higher or lower than 5). They were told that five different cues would be presented prior to making a guess, and that each cue indicated the probability that the card would be high or low. The goal was to learn the contingencies and maximize the reward attained. Accuracy, as measured by participants' choices, improved throughout the experiment for cues that strongly predicted reward, while no change was observed for unpredictable cues. Event-related fMRI revealed that activity in the caudate nucleus was more robust during the early phases of learning, irrespective of contingencies, suggesting involvement of this region during the initial stages of trial and error learning. Further, the reward feedback signal in the caudate nucleus for well-learned cues decreased as learning progressed, suggesting an evolving adaptation of reward feedback expectancy as a behavior-outcome contingency becomes more predictable.
Auditory selective attention is thought to facilitate listening to the sound of interest (e.g., voice or music) in a noisy environment. One mechanism thought to underlie this ability is suppression of distracting stimuli. However, little is known about its operation or characteristics. We tested whether suppression in auditory selective attention capitalizes on statistical regularities in the environment to facilitate attention. Participants listened to seven-second scenes consisting of several voices speaking sequences of numbers and a distractor, which occurred more (70%) or less (30%) frequently across trials. Participants had to find the voice that was a gender singleton and report whether it was saying even or odd numbers. If suppression is an active component of auditory selective attention, task performance was expected to be better when the more frequent distractor was present. Results across the experiment and three replications revealed significantly shorter response times (RTs) when the high-probability distractor was in the scene relative to the low-probability distractor. Results are suggestive of a suppression mechanism that mitigates the detrimental influence of a frequently occurring distracting sound.