Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead.
Provenance, the description of the history of a set of data, has grown more important with the proliferation of research consortia-related efforts in neuroimaging. Knowledge about the origin and history of an image is crucial for establishing data and results quality; detailed information about how it was processed, including the specific software routines and operating systems that were used, is necessary for proper interpretation, high fidelity replication and re-use. We have drafted a mechanism for describing provenance in a simple and easy to use environment, alleviating the burden of documentation from the user while still providing a rich description of an image's provenance. This combination of ease of use and highly descriptive metadata should greatly facilitate the collection of provenance and subsequent sharing of data.
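A provenance record of the kind described above bundles the processing history with the image itself. As a rough illustration (the field names here are hypothetical, not the schema proposed in the article), such a record might be serialized as:

```python
import json

# Hypothetical provenance record for one processing step applied to an image.
# All field names and values are illustrative assumptions, not a real schema.
record = {
    "input_image": "sub-01_T1w.nii.gz",
    "output_image": "sub-01_T1w_brain.nii.gz",
    "software": {"name": "bet", "package": "FSL", "version": "6.0.7"},
    "parameters": {"fractional_intensity": 0.5},
    "environment": {"os": "Ubuntu 22.04", "cpu_arch": "x86_64"},
    "timestamp": "2024-05-01T12:00:00Z",
}

# A machine-readable record like this can be captured automatically by the
# processing environment, relieving the user of manual documentation.
serialized = json.dumps(record, indent=2)
```

Because the record is plain structured data, it can travel alongside the image file and be merged across processing steps into a full history.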
In this technical note, we describe and validate a topological false discovery rate (FDR) procedure for statistical parametric mapping. This procedure is designed to deal with signal that is continuous and has, in principle, unbounded spatial support. We therefore infer on topological features of the signal, such as the existence of local maxima or peaks above some threshold. Using results from random field theory, we assign a p-value to each maximum in an SPM and identify an adaptive threshold that controls the false discovery rate, using the Benjamini and Hochberg (BH) procedure (1995). This provides a natural complement to conventional family-wise error (FWE) control on local maxima. We use simulations to contrast these procedures, both in terms of their relative number of discoveries and their spatial accuracy (via the distribution of the Euclidean distance between true and discovered activations). We also assessed two other procedures: cluster-wise and voxel-wise FDR. Our results suggest that (a) FDR control of maxima or peaks is more sensitive than FWE control of peaks, with minimal cost in terms of false positives, and (b) voxel-wise FDR is substantially less accurate than topological FWE or FDR control. Finally, we present an illustrative application using an fMRI study of visual attention.
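The BH step underlying the adaptive threshold can be sketched numerically. This is a generic BH implementation over a handful of illustrative peak p-values, not data or code from the study:

```python
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg adaptive threshold: largest p(i) with p(i) <= (i/m)*q."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = p.size
    below = p <= (np.arange(1, m + 1) / m) * q
    if not below.any():
        return 0.0  # no peak survives FDR control
    return float(p[below.nonzero()[0].max()])

# Illustrative p-values, one per local maximum of an SPM (e.g. from random
# field theory); these numbers are made up for the example.
peak_pvals = [0.001, 0.008, 0.012, 0.2, 0.6]
thr = bh_threshold(peak_pvals, q=0.05)
surviving = [p for p in peak_pvals if p <= thr]
```

Peaks with p-values at or below the adaptive threshold are declared discoveries, with the expected proportion of false discoveries among them controlled at q.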
Significant resources around the world have been invested in neuroimaging studies of brain function and disease. Easier access to this large body of work should have profound impact on research in cognitive neuroscience and psychiatry, leading to advances in the diagnosis and treatment of psychiatric and neurological disease. A trend toward increased sharing of neuroimaging data has emerged in recent years. Nevertheless, a number of barriers continue to impede momentum. Many researchers and institutions remain uncertain about how to share data or lack the tools and expertise to participate in data sharing. The use of electronic data capture (EDC) methods for neuroimaging greatly simplifies the task of data collection and has the potential to help standardize many aspects of data sharing. We review here the motivations for sharing neuroimaging data, the current data sharing landscape, and the sociological or technical barriers that still need to be addressed. The INCF Task Force on Neuroimaging Datasharing, in conjunction with several collaborative groups around the world, has started work on several tools to ease and eventually automate the practice of data sharing. It is hoped that such tools will allow researchers to easily share raw, processed, and derived neuroimaging data, with appropriate metadata and provenance records, and will improve the reproducibility of neuroimaging studies. By providing seamless integration of data sharing and analysis tools within a commodity research environment, the Task Force seeks to identify and minimize barriers to data sharing in the field of neuroimaging.
There has been substantial recent growth in the use of non-invasive optical brain imaging in studies of human brain function in health and disease. Near-infrared neuroimaging (NIN) is one of the most promising of these techniques and, although NIN hardware continues to evolve at a rapid pace, software tools supporting optical data acquisition, image processing, statistical modeling, and visualization remain less refined. Python, a modular and computationally efficient development language, can support functional neuroimaging studies of diverse design and implementation. In particular, Python's easily readable syntax and modular architecture allow swift prototyping followed by efficient transition to stable production systems. As an introduction to our ongoing efforts to develop Python software tools for structural and functional neuroimaging, we discuss: (i) the role of non-invasive diffuse optical imaging in measuring brain function, (ii) the key computational requirements to support NIN experiments, (iii) our collection of software tools to support NIN, called NinPy, and (iv) future extensions of these tools that will allow integration of optical with other structural and functional neuroimaging data sources. Source code for the software discussed here will be made available at www.nmr.mgh.harvard.edu/Neural_SystemsGroup/software.html.
We describe a project-based introduction to reproducible and collaborative neuroimaging analysis. Traditional teaching on neuroimaging usually consists of a series of lectures that emphasize the big picture rather than the foundations on which the techniques are based. The lectures are often paired with practical workshops in which students run imaging analyses using the graphical interface of specific neuroimaging software packages. Our experience suggests that this combination leaves the student with a superficial understanding of the underlying ideas, and an informal, inefficient, and inaccurate approach to analysis. To address these problems, we based our course around a substantial open-ended group project. This allowed us to teach: (a) computational tools to ensure computationally reproducible work, such as the Unix command line, structured code, version control, automated testing, and code review and (b) a clear understanding of the statistical techniques used for a basic analysis of a single run in an MR scanner. The emphasis we put on the group project showed the importance of standard computational tools for accuracy, efficiency, and collaboration. The projects were broadly successful in engaging students in working reproducibly on real scientific questions. We propose that a course on this model should be the foundation for future programs in neuroimaging. We believe it will also serve as a model for teaching efficient and reproducible research in other fields of computational science.
Functional neuroimaging has made fundamental contributions to our understanding of brain function. It remains challenging, however, to translate these advances into diagnostic tools for psychiatry. Promising new avenues for translation are provided by computational modeling of neuroimaging data. This article reviews contemporary frameworks for computational neuroimaging, with a focus on forward models linking unobservable brain states to measurements. These approaches-biophysical network models, generative models, and model-based fMRI analyses of neuromodulation-strive to move beyond statistical characterizations and toward mechanistic explanations of neuroimaging data. Focusing on schizophrenia as a paradigmatic spectrum disease, we review applications of these models to psychiatric questions, identify methodological challenges, and highlight trends of convergence among computational neuroimaging approaches. We conclude by outlining a translational neuromodeling strategy, highlighting the importance of openly available datasets from prospective patient studies for evaluating the clinical utility of computational models.
In many clinical and scientific situations the optimal neuroimaging sequence may not be known prior to scanning and may differ for each individual being scanned, depending on the exact nature and location of abnormalities. Despite this, the standard approach to data acquisition, in such situations, is to specify the sequence of neuroimaging scans prior to data acquisition and to apply the same scans to all individuals. In this paper, we propose and illustrate an alternative approach, in which data would be analysed as it is acquired and used to choose the future scanning sequence: Active Acquisition. We propose three Active Acquisition scenarios based around multiple MRI modalities. In Scenario 1, we propose a simple use of near-real-time analysis to decide whether to acquire more or higher resolution data, or acquire data with a different field-of-view. In Scenario 2, we simulate how multimodal MR data could be actively acquired and combined with a decision tree to classify a known outcome variable (in the simple example here, age). In Scenario 3, we simulate using Bayesian optimisation to actively search across multiple MRI modalities to find those which are most abnormal. These simulations suggest that by actively acquiring data, the scanning sequence can be adapted to each individual. We also consider the many outstanding practical and technical challenges involving normative data acquisition, MR physics, statistical modelling and clinical relevance. Despite these, we argue that Active Acquisition allows for potentially far more powerful, sensitive or rapid data acquisition, and may open up different perspectives on individual differences, clinical conditions, and biomarker discovery.
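The core idea of Scenario 3 can be sketched with a toy simulation. Here a greedy sequential search with an early-stopping rule stands in for full Bayesian optimisation, and all normative values, modality names, and the z = 3 stopping threshold are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)
modalities = ["T1", "T2", "DWI", "ASL", "fMRI"]
norm_mean = np.zeros(5)  # assumed normative mean per modality summary measure
norm_std = np.ones(5)    # assumed normative standard deviation per modality

# Simulated individual: summary measures in the normal range everywhere
# except ASL, which is made markedly abnormal.
subject = np.clip(rng.normal(norm_mean, norm_std), -2.0, 2.0)
subject[3] = 5.0

z_scores, flagged = {}, None
for i, name in enumerate(modalities):          # acquire one modality at a time
    z = (subject[i] - norm_mean[i]) / norm_std[i]
    z_scores[name] = z
    if abs(z) > 3.0:                           # stop once clearly abnormal
        flagged = name
        break
```

In this sketch, scanning stops as soon as an abnormal modality is found, so the individual is not subjected to the remaining acquisitions; a real implementation would instead choose the next modality by expected information gain.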
Several dietary factors and their genetic modifiers play a role in neurological disease and affect the human brain. The structural and functional integrity of the living brain can be assessed using neuroimaging, enabling large-scale epidemiological studies to identify factors that help or harm the brain. Iron is one nutritional factor that comes entirely from our diet, and its storage and transport in the body are under strong genetic control. In this review, we discuss how neuroimaging can help to identify associations between brain integrity, genetic variations, and dietary factors such as iron. We also review iron's essential role in cognition, and we note some challenges and confounds involved in interpreting links between diet and brain health. Finally, we outline some recent discoveries regarding the genetics of iron and its effects on the brain, suggesting the promise of neuroimaging in revealing how dietary factors affect the brain.
Activation of retinoid X receptors (RXRs) has been proposed as a therapeutic mechanism for the treatment of neurodegeneration, including Alzheimer's and Parkinson's diseases. We previously reported radiolabeling of a Food and Drug Administration-approved RXR agonist, bexarotene, by copper-mediated [(11)C]CO2 fixation and preliminary positron emission tomography (PET) neuroimaging that demonstrated brain permeability in nonhuman primate with regional binding distribution consistent with RXRs. In this study, the brain uptake and saturability of [(11)C]bexarotene were studied in rats and nonhuman primates by PET imaging under baseline and greater target occupancy conditions. [(11)C]Bexarotene displays a high proportion of nonsaturable uptake in the brain and is unsuitable for RXR occupancy measurements in the central nervous system.
Optical coherence tomography (OCT) is a powerful technology for rapid volumetric imaging in biomedicine. The bright field imaging approach of conventional OCT systems is based on the detection of directly backscattered light, thereby waiving the wealth of information contained in the angular scattering distribution. Here we demonstrate that the unique features of few-mode fibers (FMF) enable simultaneous bright and dark field (BRAD) imaging for OCT. As backscattered light is picked up by the different modes of a FMF depending upon the angular scattering pattern, we obtain access to the directional scattering signatures of different tissues by decoupling illumination and detection paths. We exploit the distinct modal propagation properties of the FMF in concert with the long coherence lengths provided by modern wavelength-swept lasers to achieve multiplexing of the different modal responses into a combined OCT tomogram. We demonstrate BRAD sensing for distinguishing differently sized microparticles and showcase the performance of BRAD-OCT imaging with enhanced contrast for ex vivo tumorous tissue in glioblastoma and neuritic plaques in Alzheimer's disease.
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of results. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization.
Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure.
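The idea of a workflow as a portable XML object can be illustrated with a minimal sketch. The element and attribute names below are hypothetical and are not the actual LONI Pipeline schema; the point is only that a two-step protocol serializes to a string that can be shipped to a remote server and reconstructed there:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical two-module workflow (skull-strip, then segment).
# Element/attribute names are illustrative assumptions, not the real schema.
root = ET.Element("pipeline", name="strip_then_segment")
for step, tool in enumerate(["bet", "fast"], start=1):
    module = ET.SubElement(root, "module", id=str(step), executable=tool)
    ET.SubElement(module, "input").text = f"step{step - 1}.nii.gz"
    ET.SubElement(module, "output").text = f"step{step}.nii.gz"

# Serialize for transfer to a remote server, then parse it back.
xml_str = ET.tostring(root, encoding="unicode")
modules = ET.fromstring(xml_str).findall("module")
```

Because the workflow travels as plain XML, the server needs only the referenced executables, not the client's environment, to reproduce the protocol.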
Unbalanced group-level models are common in neuroimaging. Typically, data for these models come from factorial experiments. As such, analyses typically take the form of an analysis of variance (ANOVA) within the framework of the general linear model (GLM). Although ANOVA theory is well established for the balanced case, in unbalanced designs there are multiple ways of decomposing the sums-of-squares of the data. This leads to several methods of forming test statistics when the model contains multiple factors and interactions. Although the Type I-III sums of squares have a long history of debate in the statistical literature, there has seemingly been no consideration of this aspect of the GLM in neuroimaging. In this paper we present an exposition of these different forms of hypotheses for the neuroimaging researcher, discussing their derivation as estimable functions of ANOVA models, and discussing the relative merits of each. Finally, we demonstrate how the different hypothesis tests can be implemented using contrasts in analysis software, presenting examples in SPM and FSL.
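The order dependence of sequential (Type I) sums of squares in unbalanced designs is easy to demonstrate numerically. The design, cell sizes, and effect sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unbalanced 2x2 design: cell sizes 5, 3, 2, 6 for factors A and B.
a = np.array([0]*5 + [0]*3 + [1]*2 + [1]*6, dtype=float)
b = np.array([0]*5 + [1]*3 + [0]*2 + [1]*6, dtype=float)
y = 1.0 + 0.5 * a + 0.8 * b + rng.normal(0.0, 1.0, a.size)

def rss(X, y):
    """Residual sum of squares from the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

ones = np.ones_like(y)
# Sequential SS for A when A enters the model first ...
ss_a_first = rss(ones[:, None], y) - rss(np.column_stack([ones, a]), y)
# ... versus when A enters after B (in the unbalanced case these differ).
ss_a_last = (rss(np.column_stack([ones, b]), y)
             - rss(np.column_stack([ones, b, a]), y))
```

In a balanced design the factor columns are orthogonal after centering and the two quantities coincide; the unequal cell counts here induce correlation between the factors, so the decomposition depends on entry order, which is exactly why Types I-III must be distinguished.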
The Parkinson's Disease Progressive Neuroimaging Initiative (PDPNI) is a longitudinal observational clinical study. In PDPNI, the clinical and imaging data of patients diagnosed with Parkinsonian syndromes (PDS) and idiopathic rapid eye movement sleep behavior disorder (RBD) were collected longitudinally every two years, aiming to identify progression biomarkers of Parkinsonian syndromes through functional imaging modalities including FDG-PET, DAT-PET imaging, ASL MRI, and fMRI, as well as the treatment conditions, clinical symptoms, and clinical assessment results of patients. From February 2012 to March 2019, 224 subjects (including 48 healthy subjects and 176 patients with confirmed PDS) were enrolled in PDPNI. The detailed clinical information and clinical assessment scores of all subjects were collected by neurologists from Huashan Hospital, Fudan University. All subjects enrolled in PDPNI were scanned with 18F-FDG PET, 11C-CFT PET, and MRI scan sequences. All data were collected in strict accordance with standardized data collection protocols.
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
With the development of real-time and visualized neuroimaging techniques, studies on the central mechanism of acupuncture analgesia have gained increasing attention. Experimental pain models have been widely used in acupuncture-analgesia neuroimaging studies, with the advantages of quantification and control. This review aimed to analyze the study design and main findings of acupuncture neuroimaging studies to provide a reference for future studies. The original studies were collected and screened in English databases (PubMed, EMBASE, and Cochrane Library) and Chinese databases (China National Knowledge Infrastructure, Chinese Biomedical Literature Database, the Chongqing VIP Database, and Wanfang Database). As a result, a total of 27 articles were included. Heat stimulation and electroacupuncture were the most commonly used pain modeling method and acupuncture modality, respectively. The neuroimaging scanning process can be divided into two models and five subtypes. The anterior cingulate cortex and insula were the most commonly reported brain regions involved in acupuncture analgesia with experimental pain models.
Despite its initial promise, neuroimaging has not been widely translated into clinical psychiatry to assist in the prediction of diagnoses, prognoses, and optimal therapeutic strategies. Machine learning approaches may enhance the translational potential of neuroimaging because they specifically focus on overcoming biases by optimizing the generalizability of pipelines that measure complex brain patterns to predict targets at a single-subject level. This article introduces some fundamentals of a translational machine learning approach before selectively reviewing literature to-date. Promising initial results are then balanced by the description of limitations that should be considered in order to interpret existing research and maximize the possibility of future translation. Future directions are then presented in order to inspire further research and progress the field towards clinical translation.
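The notion of optimizing generalizability at the single-subject level usually reduces to cross-validated prediction: every subject is classified by a model fit without that subject's data. A minimal sketch with synthetic features and a nearest-centroid classifier (the data, group effect, and classifier are illustrative assumptions, not any study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "neuroimaging features": 60 subjects x 10 features, two diagnostic groups
# separated by a modest mean shift on every feature.
X = np.vstack([rng.normal(0.0, 1.0, (30, 10)),
               rng.normal(0.8, 1.0, (30, 10))])
y = np.array([0] * 30 + [1] * 30)

def nearest_centroid_cv(X, y, k=5):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)   # held-out subjects never touch the fit
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[fold] - c0, axis=1)
        d1 = np.linalg.norm(X[fold] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        correct += int((pred == y[fold]).sum())
    return correct / len(y)

acc = nearest_centroid_cv(X, y)
```

The held-out accuracy, rather than any in-sample fit statistic, is the quantity a translational pipeline must optimize and report, since it estimates performance on a genuinely new patient.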
Mucolipidosis type IV (MLIV) is an autosomal recessive disorder resulting from mutations in the MCOLN1 gene. This gene encodes the endosomal/lysosomal transient receptor potential channel protein mucolipin-1 (TRPML1). Affected patients suffer from neurodevelopmental abnormalities and progressive retinal dystrophy. In a prospective natural history study we hypothesized the presence of an additional slow cerebral neurodegenerative process. We recruited 5 patients, tested their neurodevelopmental status, and measured cerebral regional volumes and white matter integrity using MRI yearly. Over a period of up to 3 years, MLIV patients remained neurologically stable. There was a trend for increased cortical and subcortical gray matter volumes and increased ventricular size, while white matter and cerebellar volumes decreased. Mean diffusivity (MD) was increased and fractional anisotropy (FA) values were below normal in all analyzed brain regions. There was a positive correlation between motor scores of the Vineland Scale and the FA values in the corticospinal tract (correlation coefficient 0.39), and a negative correlation with the MD values (correlation coefficient -0.50) in the same brain region. We conclude from these initial findings that deficiency in mucolipin-1 affects the entire brain but that there might be a selective regional cerebral neurodegenerative process in MLIV. In addition, these data suggest that diffusion-weighted imaging might be a good biomarker for following patients with MLIV. Therefore, our findings may be helpful for designing future clinical trials.