Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1–20 of 10,669.

Information theory tests critical predictions of plant defense theory for specialized metabolism.

  • Dapeng Li et al.
  • Science advances
  • 2020

Different plant defense theories have provided important theoretical guidance in explaining patterns in plant specialized metabolism, but their critical predictions remain to be tested. Here, we systematically explored the metabolomes of Nicotiana attenuata, from single plants to populations, as well as of closely related species, using unbiased tandem mass spectrometry (MS/MS) analyses and processed the abundances of compound spectrum-based MS features within an information theory framework to test critical predictions of optimal defense (OD) and moving target (MT) theories. Information components of plant metabolomes were consistent with the OD theory but contradicted the main prediction of the MT theory for herbivory-induced dynamics of metabolome compositions. From micro- to macroevolutionary scales, jasmonate signaling was confirmed as the master determinant of OD, while ethylene signaling provided fine-tuning for herbivore-specific responses annotated via MS/MS molecular networks.


A Tutorial for Information Theory in Neuroscience.

  • Nicholas M Timme et al.
  • eNeuro
  • 2018

Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study.
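
A minimal Python sketch of the core quantities the tutorial above walks through (the paper itself provides a MATLAB package, which is not reproduced here): plug-in estimates of Shannon entropy and mutual information from binned data. The bin count and toy data are assumptions for illustration only.

```python
# Hedged sketch: naive plug-in entropy and mutual information from binned data.
# Not the authors' MATLAB toolbox; bin count and toy data are illustrative.
import numpy as np

def entropy_bits(counts):
    """Shannon entropy (in bits) of a histogram given as an array of counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=10):
    """Plug-in estimate of I(X;Y) = H(X) + H(Y) - H(X,Y) from a 2D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    return (entropy_bits(joint.sum(axis=1))
            + entropy_bits(joint.sum(axis=0))
            - entropy_bits(joint.ravel()))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)   # y depends on x, so I(X;Y) should be > 0
print(mutual_information(x, y))
```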


Dissecting landscape art history with information theory.

  • Byunghwee Lee et al.
  • Proceedings of the National Academy of Sciences of the United States of America
  • 2020

Painting has played a major role in human expression, evolving subject to a complex interplay of representational conventions, social interactions, and a process of historization. From individual qualitative work of art historians emerges a metanarrative that remains difficult to evaluate in its validity regarding emergent macroscopic and underlying microscopic dynamics. The full scope of granular data, the summary statistics, and consequently, also their bias simply lie beyond the cognitive limit of individual qualitative human scholarship. Yet, a more quantitative understanding is still lacking, driven by a lack of data and a persistent dominance of qualitative scholarship in art history. Here, we show that quantitative analyses of creative processes in landscape painting can shed light, provide a systematic verification, and allow for questioning the emerging metanarrative. Using a quasicanonical benchmark dataset of 14,912 landscape paintings, covering a period from the Western renaissance to contemporary art, we systematically analyze the evolution of compositional proportion via a simple yet coherent information-theoretic dissection method that captures iterations of the dominant horizontal and vertical partition directions. Tracing frequency distributions of seemingly preferred compositions across several conceptual dimensions, we find that dominant dissection ratios can serve as a meaningful signature to capture the unique compositional characteristics and systematic evolution of individual artist bodies of work, creation date time spans, and conventional style periods, while concepts of artist nationality remain problematic. Network analyses of individual artists and style periods clarify their rhizomatic confusion while uncovering three distinguished yet nonintuitive supergroups that are meaningfully clustered in time.


Optimal foraging and the information theory of gambling.

  • Roland J Baddeley et al.
  • Journal of the Royal Society, Interface
  • 2019

At a macroscopic level, part of the ant colony life cycle is simple: a colony collects resources; these resources are converted into more ants, and these ants in turn collect more resources. Because more ants collect more resources, this is a multiplicative process, and the expected logarithm of the amount of resources determines how successful the colony will be in the long run. Over 60 years ago, Kelly showed, using information theoretic techniques, that the rate of growth of resources for such a situation is optimized by a strategy of betting in proportion to the probability of pay-off. Thus, in the case of ants, the fraction of the colony foraging at a given location should be proportional to the probability that resources will be found there, a result widely applied in the mathematics of gambling. This theoretical optimum leads to predictions as to which collective ant movement strategies might have evolved. Here, we show how colony-level optimal foraging behaviour can be achieved by mapping movement to Markov chain Monte Carlo (MCMC) methods, specifically Hamiltonian Monte Carlo (HMC). This can be done by the ants following a (noisy) local measurement of the (logarithm of) resource probability gradient (possibly supplemented with momentum, i.e. a propensity to move in the same direction). This maps the problem of foraging (via the information theory of gambling, stochastic dynamics and techniques employed within Bayesian statistics to efficiently sample from probability distributions) to simple models of ant foraging behaviour. This identification has broad applicability, facilitates the application of information theory approaches to understand movement ecology and unifies insights from existing biomechanical, cognitive, random and optimality movement paradigms. At the cost of requiring ants to obtain (noisy) resource gradient information, we show that this model is both efficient and matches a number of characteristics of real ant exploration.
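
The Kelly result invoked above states that, when all resources are committed each round, the expected log growth rate is maximized by allocating in proportion to the pay-off probabilities. A hedged numerical sketch of that claim (site probabilities and pay-off are invented for demonstration, not taken from the paper):

```python
# Hedged sketch of Kelly-style proportional betting: allocating foragers in
# proportion to each site's pay-off probability maximizes expected log growth.
# Probabilities and pay-off below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.6, 0.3, 0.1])   # probability that each site yields food on a step
payoff = 3.0                    # multiplicative return from the site that pays off

def mean_log_growth(allocation, steps=20000):
    """Average per-step log growth when allocation[i] of the colony forages at site i."""
    winners = rng.choice(len(p), size=steps, p=p)   # which site pays off each step
    returns = payoff * allocation[winners]          # resources recovered that step
    return np.mean(np.log(returns))

print(mean_log_growth(p))                            # proportional (Kelly) allocation
print(mean_log_growth(np.array([1/3, 1/3, 1/3])))    # uniform allocation grows slower
```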


An information theory framework for dynamic functional domain connectivity.

  • Victor M Vergara et al.
  • Journal of neuroscience methods
  • 2017

Dynamic functional network connectivity (dFNC) analyzes the time evolution of coherent activity in the brain. In this technique, dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure information flowing among subsets of functional networks called functional domains.


A detailed characterization of complex networks using Information Theory.

  • Cristopher G S Freitas et al.
  • Scientific reports
  • 2019

Understanding the structure and the dynamics of networks is of paramount importance for many scientific fields that rely on network science. Complex network theory provides a variety of features that help in the evaluation of network behavior. However, such analysis can be confusing and misleading as there are many intrinsic properties for each network metric. Alternatively, Information Theory methods have gained the spotlight because of their ability to create a quantitative and robust characterization of such networks. In this work, we use two Information Theory quantifiers, namely Network Entropy and Network Fisher Information Measure, to analyze those networks. Our approach detects non-trivial characteristics of complex networks such as the transition present in the Watts-Strogatz model from k-ring to random graphs; the phase transition from a disconnected to an almost surely connected network when we increase the linking probability of the Erdős-Rényi model; and distinct phases of scale-free networks when considering a non-linear preferential attachment, fitness, and aging features alongside the configuration model with a pure power-law degree distribution. Finally, we analyze the numerical results for real networks, contrasting our findings with traditional complex network methods. In conclusion, we present an efficient method that ignites the debate on network characterization.
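
As a hedged illustration of a network-entropy style quantifier, the sketch below uses the Shannon entropy of the degree distribution to trace the Watts-Strogatz transition from a k-ring toward a random graph mentioned above; the paper's Network Entropy and Network Fisher Information Measure may be defined differently, and the graph size and rewiring probabilities here are illustrative.

```python
# Hedged sketch: Shannon entropy of the degree distribution as a simple
# network-entropy proxy (not necessarily the paper's exact quantifier).
import math
import networkx as nx

def degree_entropy(g):
    """Shannon entropy (bits) of the empirical degree distribution of graph g."""
    degrees = [d for _, d in g.degree()]
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    n = len(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Watts-Strogatz k-ring to random-graph transition: entropy grows with rewiring p.
for p in (0.0, 0.1, 1.0):
    g = nx.watts_strogatz_graph(n=1000, k=6, p=p, seed=0)
    print(p, degree_entropy(g))
```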


Information Needs of Breast Cancer Patients: Theory-Generating Meta-Synthesis.

  • Hongru Lu et al.
  • Journal of medical Internet research
  • 2020

Breast cancer has become one of the most frequently diagnosed carcinomas and the leading cause of cancer deaths. The substantial growth in the number of breast cancer patients has put great pressure on health services. Meanwhile, the information patients need has increased and become more complicated. Therefore, a comprehensive and in-depth understanding of their information needs is urgently needed to improve the quality of health care. However, previous studies related to the information needs of breast cancer patients have focused on different perspectives and have only contributed to individual results. A systematic review and synthesis of breast cancer patients' information needs is critical.


An implementation of integrated information theory in resting-state fMRI.

  • Idan E Nemirovsky et al.
  • Communications biology
  • 2023

Integrated Information Theory was developed to explain and quantify consciousness, arguing that conscious systems consist of elements that are integrated through their causal properties. This study presents an implementation of Integrated Information Theory 3.0, the latest version of this framework, to functional MRI data. Data were acquired from 17 healthy subjects who underwent sedation with propofol, a short-acting anaesthetic. Using the PyPhi software package, we systematically analyze how Φmax, a measure of integrated information, is modulated by the sedative in different resting-state networks. We compare Φmax to other proposed measures of conscious level, including the previous version of integrated information, Granger causality, and correlation-based functional connectivity. Our results indicate that Φmax presents a variety of sedative-induced behaviours for different networks. Notably, changes to Φmax closely reflect changes to subjects' conscious level in the frontoparietal and dorsal attention networks, which are responsible for higher-order cognitive functions. In conclusion, our findings present important insight into different measures of conscious level that will be useful in future implementations to functional MRI and other forms of neuroimaging.


Discovering pair-wise genetic interactions: an information theory-based approach.

  • Tomasz M Ignac et al.
  • PloS one
  • 2014

Phenotypic variation, including that which underlies health and disease in humans, results in part from multiple interactions among both genetic variation and environmental factors. While diseases or phenotypes caused by single gene variants can be identified by established association methods and family-based approaches, complex phenotypic traits resulting from multi-gene interactions remain very difficult to characterize. Here we describe a new method based on information theory, and demonstrate how it improves on previous approaches to identifying genetic interactions, including both synthetic and modifier kinds of interactions. We apply our measure, called interaction distance, to previously analyzed data sets of yeast sporulation efficiency, lipid related mouse data and several human disease models to characterize the method. We show how the interaction distance can reveal novel gene interaction candidates in experimental and simulated data sets, and outperforms other measures in several circumstances. The method also allows us to optimize case/control sample composition for clinical studies.


Assessing sustainability in North America's ecosystems using criticality and information theory.

  • Elvia Ramírez-Carrillo et al.
  • PloS one
  • 2018

Sustainability is a key concept in economic and policy debates. Nevertheless, it is usually treated only in a qualitative way and has eluded quantitative analysis. Here, we propose a sustainability index based on the premise that sustainable systems do not lose or gain Fisher Information over time. We test this approach using time series data from the AmeriFlux network that measures ecosystem respiration, water and energy fluxes in order to elucidate two key sustainability features: ecosystem health and stability. A novel definition of ecosystem health is developed based on the concept of criticality, which implies that if a system's fluctuations are scale invariant then the system is in a balance between robustness and adaptability. We define ecosystem stability by taking an information theory approach that measures its entropy and Fisher information. Analysis of the Ameriflux consortium big data set of ecosystem respiration time series is contrasted with land condition data. In general we find a good agreement between the sustainability index and land condition data. However, we acknowledge that the results are a preliminary test of the approach and further verification will require a multi-signal analysis. For example, high values of the sustainability index for some croplands are counter-intuitive and we interpret these results as ecosystems maintained in artificial health due to continuous human-induced inflows of matter and energy in the form of soil nutrients and control of competition, pests and disease.


Multivariate information theory uncovers synergistic subsystems of the human cerebral cortex.

  • Thomas F Varley et al.
  • Communications biology
  • 2023

One of the most well-established tools for modeling the brain is the functional connectivity network, which is constructed from pairs of interacting brain regions. While powerful, the network model is limited by the restriction that only pairwise dependencies are considered and potentially higher-order structures are missed. Here, we explore how multivariate information theory reveals higher-order dependencies in the human brain. We begin with a mathematical analysis of the O-information, showing analytically and numerically how it is related to previously established information theoretic measures of complexity. We then apply the O-information to brain data, showing that synergistic subsystems are widespread in the human brain. Highly synergistic subsystems typically sit between canonical functional networks, and may serve an integrative role. We then use simulated annealing to find maximally synergistic subsystems, finding that such systems typically comprise ≈10 brain regions, recruited from multiple canonical brain systems. Though ubiquitous, highly synergistic subsystems are invisible when considering pairwise functional connectivity, suggesting that higher-order dependencies form a kind of shadow structure that has been unrecognized by established network-based analyses. We assert that higher-order interactions in the brain represent an under-explored space that, accessible with tools of multivariate information theory, may offer novel scientific insights.
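
For reference, the O-information used above is commonly written as Ω(X) = (n − 2)·H(X) + Σᵢ [H(Xᵢ) − H(X₋ᵢ)], where negative values indicate synergy-dominated dependencies and positive values redundancy. A minimal sketch under a Gaussian assumption (closed-form entropies from the covariance matrix); the toy data are illustrative and this is not the authors' pipeline:

```python
# Hedged sketch: O-information under a Gaussian model, using the closed-form
# entropy H = 0.5 * log((2*pi*e)^n * det(Sigma)). Toy data only.
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance cov."""
    n = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(cov))

def o_information(data):
    """O-information of a (samples x variables) array; > 0 redundancy, < 0 synergy."""
    cov = np.cov(data, rowvar=False)
    n = cov.shape[0]
    omega = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        omega += (gaussian_entropy(cov[np.ix_([i], [i])])
                  - gaussian_entropy(cov[np.ix_(rest, rest)]))
    return omega

rng = np.random.default_rng(0)
shared = rng.normal(size=(10000, 1))
redundant = shared + 0.1 * rng.normal(size=(10000, 3))  # three noisy copies of one source
print(o_information(redundant))                         # positive: redundancy-dominated
```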


Applying Shannon's information theory to bacterial and phage genomes and metagenomes.

  • Sajia Akhter et al.
  • Scientific reports
  • 2013

All sequence data contain inherent information that can be measured by Shannon's uncertainty theory. Such measurement is valuable in evaluating large data sets, such as metagenomic libraries, to prioritize their analysis and annotation, thus saving computational resources. Here, Shannon's index of complete phage and bacterial genomes was examined. The information content of a genome was found to be highly dependent on the genome length, GC content, and sequence word size. In metagenomic sequences, the amount of information correlated with the number of matches found by comparison to sequence databases. A sequence with more information (higher uncertainty) has a higher probability of being significantly similar to other sequences in the database. Measuring uncertainty may be used for rapid screening for sequences with matches in available databases, prioritizing computational resources, and indicating which sequences with no known similarities are likely to be important for more detailed analysis.
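
As a hedged sketch of the kind of measurement described above, the snippet below computes the Shannon entropy of the distribution of overlapping k-mers ("words") in a nucleotide string; the sequence and word size are invented for illustration, and the paper's exact preprocessing may differ.

```python
# Hedged sketch: Shannon entropy (bits) over k-mer ("word") frequencies of a
# nucleotide sequence. Sequence and word size are illustrative only.
from collections import Counter
import math

def kmer_entropy(seq, k=4):
    """Shannon entropy of the distribution of overlapping k-mers in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(kmer_entropy("ATGCGATACGCTTAGGCTAATCGGATCCGTA" * 10, k=4))
```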


An Information-Theory-Based Approach for Optimal Model Reduction of Biomolecules.

  • Marco Giulini et al.
  • Journal of chemical theory and computation
  • 2020

In theoretical modeling of a physical system, a crucial step consists of the identification of those degrees of freedom that enable a synthetic yet informative representation of it. While in some cases this selection can be carried out on the basis of intuition and experience, straightforward discrimination of the important features from the negligible ones is difficult for many complex systems, most notably heteropolymers and large biomolecules. We here present a thermodynamics-based theoretical framework to gauge the effectiveness of a given simplified representation by measuring its information content. We employ this method to identify those reduced descriptions of proteins, in terms of a subset of their atoms, that retain the largest amount of information from the original model; we show that these highly informative representations share common features that are intrinsically related to the biological properties of the proteins under examination, thereby establishing a bridge between protein structure, energetics, and function.


Combining network topology and information theory to construct representative brain networks.

  • Andrea I Luppi et al.
  • Network neuroscience (Cambridge, Mass.)
  • 2021

Network neuroscience employs graph theory to investigate the human brain as a complex network, and derive generalizable insights about the brain's network properties. However, graph-theoretical results obtained from network construction pipelines that produce idiosyncratic networks may not generalize when alternative pipelines are employed. This issue is especially pressing because a wide variety of network construction pipelines have been employed in the human network neuroscience literature, making comparisons between studies problematic. Here, we investigate how to produce networks that are maximally representative of the broader set of brain networks obtained from the same neuroimaging data. We do so by minimizing an information-theoretic measure of divergence between network topologies, known as the portrait divergence. Based on functional and diffusion MRI data from the Human Connectome Project, we consider anatomical, functional, and multimodal parcellations at three different scales, and 48 distinct ways of defining network edges. We show that the highest representativeness can be obtained by using parcellations in the order of 200 regions and filtering functional networks based on efficiency-cost optimization-though suitable alternatives are also highlighted. Overall, we identify specific node definition and thresholding procedures that neuroscientists can follow in order to derive representative networks from their human neuroimaging data.


Predictability of COVID-19 worldwide lethality using permutation-information theory quantifiers.

  • Leonardo H S Fernandes et al.
  • Results in physics
  • 2021

This paper examines the predictability of COVID-19 worldwide lethality across 43 countries. Based on the values of permutation entropy (Hs) and the Fisher information measure (Fs), we apply the Shannon-Fisher causality plane (SFCP), which allows us to quantify the disorder and evaluate the randomness present in the time series of daily death cases related to COVID-19 in each country. We also use Hs and Fs to rank the COVID-19 lethality in these countries based on the complexity hierarchy. Our results suggest that the most proactive countries implemented measures such as facemasks, social distancing, quarantine, massive population testing, and hygienic (sanitary) orientations to limit the impacts of COVID-19, which implied lower entropy (higher predictability) in the COVID-19 lethality. In contrast, the most reactive countries implementing these measures showed higher entropy (lower predictability) in the COVID-19 lethality. These findings suggest that such preventive measures are effective in combating COVID-19 lethality.
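
As a hedged illustration of one of the two quantifiers mentioned above, the sketch below computes normalized permutation entropy from the ordinal-pattern (Bandt and Pompe) distribution of a time series; the embedding dimension and test series are illustrative, and the Fisher information measure is not reproduced here.

```python
# Hedged sketch: normalized permutation entropy over ordinal (Bandt-Pompe)
# patterns. Embedding dimension and test series are illustrative only.
from collections import Counter
from math import log, factorial
import random

def permutation_entropy(series, d=3):
    """Shannon entropy of ordinal patterns of length d, normalized to [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    total = sum(patterns.values())
    h = -sum((n / total) * log(n / total) for n in patterns.values())
    return h / log(factorial(d))

random.seed(0)
noise = [random.random() for _ in range(5000)]             # unpredictable: entropy near 1
trend = [i + 0.1 * random.random() for i in range(5000)]   # predictable: entropy near 0
print(permutation_entropy(noise), permutation_entropy(trend))
```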


Using Information Theory to Detect Rogue Taxa and Improve Consensus Trees.

  • Martin R Smith
  • Systematic biology
  • 2022

"Rogue" taxa of uncertain affinity can confound attempts to summarize the results of phylogenetic analyses. Rogues reduce resolution and support values in consensus trees, potentially obscuring strong evidence for relationships between other taxa. Information theory provides a principled means of assessing the congruence between a set of trees and their consensus, allowing rogue taxa to be identified more effectively than when using ad hoc measures of tree quality. A basic implementation of this approach in R recovers reduced consensus trees that are better resolved, more accurate, and more informative than those generated by existing methods. [Consensus trees; information theory; phylogenetic software; Rogue taxa.].


Information theory-based direct causality measure to assess cardiac fibrillation dynamics.

  • Xili Shi et al.
  • Journal of the Royal Society, Interface
  • 2023

Understanding the mechanism sustaining cardiac fibrillation can facilitate the personalization of treatment. Granger causality analysis can be used to determine the existence of a hierarchical fibrillation mechanism that is more amenable to ablation treatment in cardiac time-series data. Conventional Granger causality based on linear predictability may fail if the assumption is not met or given sparsely sampled, high-dimensional data. More recently developed information theory-based causality measures could potentially provide a more accurate estimate of the nonlinear coupling. However, despite their successful application to linear and nonlinear physical systems, their use is not known in the clinical field. Partial mutual information from mixed embedding (PMIME) was implemented to identify the direct coupling of cardiac electrophysiology signals. We show that PMIME requires less data and is more robust to extrinsic confounding factors. The algorithms were then extended for efficient characterization of fibrillation organization and hierarchy using clinical high-dimensional data. We show that PMIME network measures correlate well with the spatio-temporal organization of fibrillation and demonstrated that hierarchical type of fibrillation and drivers could be identified in a subset of ventricular fibrillation patients, such that regions of high hierarchy are associated with high dominant frequency.


Estimation of direct nonlinear effective connectivity using information theory and multilayer perceptron.

  • Ali Khadem et al.
  • Journal of neuroscience methods
  • 2014

Despite the variety of effective connectivity measures, few methods can quantify direct nonlinear causal couplings and most of them are not applicable to high-dimensional datasets.


Classification and Verification of Handwritten Signatures with Time Causal Information Theory Quantifiers.

  • Osvaldo A Rosso et al.
  • PloS one
  • 2016

We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than those of state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.


Information Theory as an Experimental Tool for Integrating Disparate Biophysical Signaling Modules.

  • Patrick McMillen et al.
  • International journal of molecular sciences
  • 2022

There is a growing appreciation in the fields of cell biology and developmental biology that cells collectively process information in time and space. While many powerful molecular tools exist to observe biophysical dynamics, biologists must find ways to quantitatively understand these phenomena at the systems level. Here, we present a guide for the application of well-established information theory metrics to biological datasets and explain these metrics using examples from cell, developmental, and regenerative biology. We introduce a novel computational tool, named after its intended purpose of calcium imaging (CAIM), for simple, rigorous application of these metrics to time series datasets. Finally, we use CAIM to study calcium and cytoskeletal actin information flow patterns between Xenopus laevis embryonic animal cap stem cells. The tools that we present here should enable biologists to apply information theory to develop a systems-level understanding of information processing across a diverse array of experimental systems.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually combine terms with AND and OR to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here you can save any search you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQ page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year

[Chart: count of matching publications per year.]