
This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Showing papers 1-20 of 70 (page 1).

The LONI Debabeler: a mediator for neuroimaging software.

  • Scott C Neu et al.
  • NeuroImage
  • 2005

Brain image analysis often involves processing neuroimaging data with different software packages. Using different software packages together requires exchanging files between them; the output files of one package are used as input files to the next package in the processing sequence. File exchanges become problematic when different packages use different file formats or different conventions within the same file format. Although comprehensive medical image file formats have been developed, no one format exists that satisfies the needs of analyses that involve multiple processing algorithms. The LONI Debabeler acts as a mediator between neuroimaging software packages by automatically using an appropriate file translation to convert files between each pair of linked packages. These translations are built and edited using the Debabeler graphical interface and compensate for package-dependent variations that result in intrapackage incompatibilities. The Debabeler gives neuroimaging processing environments a configurable automaton for file translation and provides users a flexible application for developing robust solutions to translation problems.
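
The mediator idea described above can be sketched as a registry of translation functions keyed by format pairs, so a pipeline never calls format-specific code directly. All names, formats, and the header representation below are illustrative, not the Debabeler's actual API:

```python
# Hypothetical sketch of a file-format mediator: a registry maps
# (source_format, target_format) pairs to translation functions.
translators = {}

def register(src, dst):
    """Decorator that records a translation function for a format pair."""
    def wrap(fn):
        translators[(src, dst)] = fn
        return fn
    return wrap

@register("analyze", "nifti")
def analyze_to_nifti(header):
    # A real translator would rewrite voxel data and metadata; here we
    # only relabel the header to show the dispatch mechanism.
    return dict(header, format="nifti")

def translate(header, dst):
    """Mediator entry point: look up the right translator and apply it."""
    key = (header["format"], dst)
    if key not in translators:
        raise KeyError(f"no translation {key[0]} -> {key[1]}")
    return translators[key](header)

out = translate({"format": "analyze", "dims": (91, 109, 91)}, "nifti")
print(out["format"])  # nifti
```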


False positives in neuroimaging genetics using voxel-based morphometry data.

  • Matt Silver et al.
  • NeuroImage
  • 2011

Voxel-wise statistical inference is commonly used to identify significant experimental effects or group differences in both functional and structural studies of the living brain. Tests based on the size of spatially extended clusters of contiguous suprathreshold voxels are also widely used due to their typically increased statistical power. In "imaging genetics", such tests are used to identify regions of the brain that are associated with genetic variation. However, concerns have been raised about the adequate control of rejection rates in studies of this type. A previous study tested the effect of a set of 'null' SNPs on brain structure and function, and found that false positive rates were well-controlled. However, no similar analysis of false positive rates in an imaging genetic study using cluster size inference has yet been undertaken. We measured false positive rates in an investigation of the effect of 700 pre-selected null SNPs on grey matter volume using voxel-based morphometry (VBM). As VBM data exhibit spatially-varying smoothness, we used both non-stationary and stationary cluster size tests in our analysis. Image and genotype data on 181 subjects with mild cognitive impairment were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). At a nominal significance level of 5%, false positive rates were found to be well-controlled (3.9-5.6%) using a relatively high cluster-forming threshold, α_c = 0.001, on images smoothed with a 12 mm Gaussian kernel. Tests were, however, anticonservative at lower cluster-forming thresholds (α_c = 0.01, 0.05) and for images smoothed using a 6 mm Gaussian kernel; here, false positive rates ranged from 9.8% to 67.6%. In a further analysis, false positive rates using simulated data were observed to be well-controlled across a wide range of conditions. While motivated by imaging genetics, our findings apply to any VBM study, and suggest that parametric cluster size inference should only be used with high cluster-forming thresholds and smoothness. We would advocate the use of nonparametric methods in other cases.
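
The cluster-size inference being evaluated above can be illustrated with a toy Monte Carlo on 1-D null "images"; the sizes, boxcar smoother, and thresholds are illustrative choices, not the paper's pipeline:

```python
# Toy Monte Carlo illustration of cluster-size inference on null data:
# 1-D "images" of smoothed Gaussian noise stand in for VBM volumes.
import numpy as np

rng = np.random.default_rng(0)

def max_cluster(z, z_thresh):
    """Largest run of contiguous suprathreshold voxels."""
    best = run = 0
    for above in (z > z_thresh):
        run = run + 1 if above else 0
        best = max(best, run)
    return best

def smooth_zmap(x, width):
    """Boxcar-smooth a noise vector and rescale it to unit variance."""
    s = np.convolve(x, np.ones(width) / width, mode="same")
    return (s - s.mean()) / s.std()

n_vox, n_sim, z_thresh = 500, 200, 2.3    # z ~ cluster-forming p < 0.01
null_max = [max_cluster(smooth_zmap(rng.standard_normal(n_vox), 5), z_thresh)
            for _ in range(n_sim)]
# The 95th percentile of the null maximum cluster size is the critical
# extent that controls family-wise error at 5% under these settings.
k_crit = np.percentile(null_max, 95)
print(k_crit)
```

The same logic, run on real smoothness-matched null data, is what reveals whether a given cluster-forming threshold is anticonservative.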


Is it time to re-prioritize neuroimaging databases and digital repositories?

  • John Darrell Van Horn et al.
  • NeuroImage
  • 2009

The development of in vivo brain imaging has led to the collection of large quantities of digital information. In any individual research article, several tens of gigabytes-worth of data may be represented, collected across normal and patient samples. With the ease of collecting such data, there is increased desire for brain imaging datasets to be openly shared through sophisticated databases. However, very often the raw and pre-processed versions of these data are not available to researchers outside of the team that collected them. A range of neuroimaging databasing approaches has streamlined the transmission, storage, and dissemination of data from such brain imaging studies. Though early sociological and technical concerns have been addressed, they have not been fully resolved for many in the field. In this article, we review the progress made in neuroimaging databases, their role in data sharing, data management, potential for the construction of brain atlases, recording data provenance, and value for re-analysis, new publication, and training. We feature the LONI IDA as an example of an archive being used as a source for brain atlas workflow construction, list several instances of other successful uses of image databases, and comment on archive sustainability. Finally, we suggest that, given these developments, now is the time for the neuroimaging community to re-prioritize large-scale databases as a valuable component of brain imaging science.


Fast and accurate modelling of longitudinal and repeated measures neuroimaging data.

  • Bryan Guillaume et al.
  • NeuroImage
  • 2014

Despite the growing importance of longitudinal data in neuroimaging, the standard analysis methods make restrictive or unrealistic assumptions (e.g., the assumption of Compound Symmetry--the state of all equal variances and equal correlations--or spatially homogeneous longitudinal correlations). While some new methods have been proposed to more accurately account for such data, these methods are based on iterative algorithms that are slow and failure-prone. In this article, we propose the use of the Sandwich Estimator method, which first estimates the parameters of interest with a simple Ordinary Least Squares model and then estimates variances/covariances with the so-called Sandwich Estimator (SwE), which accounts for the within-subject correlation existing in longitudinal data. Here, we introduce the SwE method in its classic form, and we review and propose several adjustments to improve its behaviour, specifically in small samples. We use intensive Monte Carlo simulations to compare all considered adjustments and isolate the best combination for neuroimaging data. We also compare the SwE method to other popular methods and demonstrate its strengths and weaknesses. Finally, we analyse a highly unbalanced longitudinal dataset from the Alzheimer's Disease Neuroimaging Initiative and demonstrate the flexibility of the SwE method to fit within- and between-subject effects in a single model. Software implementing this SwE method has been made freely available at http://warwick.ac.uk/tenichols/SwE.
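
A minimal numpy sketch of the sandwich-estimator idea (OLS point estimates, then a subject-blocked robust covariance) on synthetic longitudinal data; this is a toy illustration, not the SwE toolbox itself:

```python
# Toy sandwich (robust) covariance: OLS for the point estimates, then a
# variance estimate that respects within-subject error correlation.
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_vis = 30, 4                      # subjects, visits per subject
subj = np.repeat(np.arange(n_sub), n_vis)
t = np.tile(np.arange(n_vis), n_sub)      # visit time
X = np.column_stack([np.ones(t.size), t])
# correlated within-subject errors: shared random intercept + noise
y = 1.0 + 0.5 * t + np.repeat(rng.normal(0, 1, n_sub), n_vis) \
    + rng.normal(0, 0.5, t.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
bread = np.linalg.inv(X.T @ X)
# "meat": sum over subjects of X_i' e_i e_i' X_i captures the
# within-subject error covariance without modelling it explicitly
meat = sum(X[subj == i].T @ np.outer(e[subj == i], e[subj == i])
           @ X[subj == i] for i in range(n_sub))
cov_swe = bread @ meat @ bread            # sandwich covariance of beta
print(beta, np.sqrt(np.diag(cov_swe)))
```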


Multi-source feature learning for joint analysis of incomplete multiple heterogeneous neuroimaging data.

  • Lei Yuan et al.
  • NeuroImage
  • 2012

Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer's Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer's disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI's 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results.
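
The availability-pattern partitioning at the heart of this kind of incomplete multi-source learning can be sketched in a few lines; the subject records and modality names below are made up for illustration:

```python
# Group subjects by which data sources they have, so a model can be fit
# per availability pattern instead of discarding incomplete subjects.
from collections import defaultdict

subjects = [
    {"id": "s1", "MRI": [...], "PET": [...]},
    {"id": "s2", "MRI": [...]},
    {"id": "s3", "MRI": [...], "PET": [...], "CSF": [...]},
    {"id": "s4", "PET": [...], "CSF": [...]},
]

def availability(subj):
    """Frozenset of the data sources this subject actually has."""
    return frozenset(k for k in subj if k != "id")

blocks = defaultdict(list)
for s in subjects:
    blocks[availability(s)].append(s["id"])

for pattern, ids in blocks.items():
    print(sorted(pattern), ids)   # one learning task per pattern
```

In the actual iMSF formulation the per-pattern models additionally share a common sparse feature support; this sketch only shows the partitioning step.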


Changing the face of neuroimaging research: Comparing a new MRI de-facing technique with popular alternatives.

  • Christopher G Schwarz et al.
  • NeuroImage
  • 2021

Recent advances in automated face recognition algorithms have increased the risk that de-identified research MRI scans may be re-identifiable by matching them to identified photographs using face recognition. A variety of software tools exist to de-face (remove faces from) MRI, but their ability to prevent face recognition has never been measured, and their image modifications can alter automated brain measurements. In this study, we compared three popular de-facing techniques and introduce our mri_reface technique, designed to minimize effects on brain measurements by replacing the face with a population average rather than removing it. For each technique, we measured 1) how well it prevented automated face recognition (i.e. effects on exceptionally-motivated individuals) and 2) how it altered brain measurements from SPM12, FreeSurfer, and FSL (i.e. effects on the average user of de-identified data). Before de-facing, 97% of scans from a sample of 157 volunteers were correctly matched to photographs using automated face recognition. After de-facing with popular software, 28-38% of scans still retained enough data for successful automated face matching. Our proposed mri_reface performed similarly to the best existing method (fsl_deface) at preventing face recognition (28-30%), and it had the smallest effects on brain measurements in more pipelines than any other method, though these differences were modest.


A computational neurodegenerative disease progression score: method and results with the Alzheimer's disease Neuroimaging Initiative cohort.

  • Bruno M Jedynak et al.
  • NeuroImage
  • 2012

While neurodegenerative diseases are characterized by steady degeneration over relatively long timelines, it is widely believed that the early stages are the most promising for therapeutic intervention, before irreversible neuronal loss occurs. Developing a therapeutic response requires a precise measure of disease progression. However, since the early stages are for the most part asymptomatic, obtaining accurate measures of disease progression is difficult. Longitudinal databases of hundreds of subjects observed during several years with tens of validated biomarkers are becoming available, allowing the use of computational methods. We propose a widely applicable statistical methodology for creating a disease progression score (DPS), using multiple biomarkers, for subjects with a neurodegenerative disease. The proposed methodology was evaluated for Alzheimer's disease (AD) using the publicly available AD Neuroimaging Initiative (ADNI) database, yielding an Alzheimer's DPS or ADPS score for each subject and each time-point in the database. In addition, a common description of biomarker changes was produced, allowing for an ordering of the biomarkers. The Rey Auditory Verbal Learning Test delayed recall was found to be the earliest biomarker to become abnormal. The group of biomarkers comprising the volume of the hippocampus and the concentrations of the amyloid beta and Tau proteins was next in the timeline, and these were followed by three cognitive biomarkers. The proposed methodology thus has potential to stage individuals according to their state of disease progression relative to a population and to deduce common behaviors of biomarkers in the disease itself.


Investigating the temporal pattern of neuroimaging-based brain age estimation as a biomarker for Alzheimer's Disease related neurodegeneration.

  • Alexei Taylor et al.
  • NeuroImage
  • 2022

Neuroimaging-based brain-age estimation via machine learning has emerged as an important new approach for studying brain aging. The difference between one's estimated brain age and chronological age, the brain age gap (BAG), has been proposed as an Alzheimer's Disease (AD) biomarker. However, most past studies on the BAG have been cross-sectional. Quantifying longitudinal changes in an individual's BAG temporal pattern would likely improve prediction of AD progression and clinical outcome based on neurophysiological changes. To fill this gap, our study conducted predictive modeling using a large neuroimaging dataset with up to 8 years of follow-up to examine the temporal patterns of the BAG's trajectory and how it varies by subject-level characteristics (sex, APOEɛ4 carriership) and disease status. Specifically, we explored the pattern and rate of change in BAG over time in individuals who remain stable with normal cognition or mild cognitive impairment (MCI), as well as individuals who progress to clinical AD. Combining multimodal imaging data in a support vector regression model to estimate brain age yielded improved performance over single modality. Multilevel modeling results showed the BAG followed a linear increasing trajectory with a significantly faster rate in individuals with MCI who progressed to AD compared to cognitively normal or MCI individuals who did not progress. The dynamic changes in the BAG during AD progression were further moderated by sex and APOEɛ4 carriership. Our findings demonstrate the BAG as a potential biomarker for understanding individual specific temporal patterns related to AD progression.
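
The brain-age-gap construction can be shown with a toy regression. The study above used support vector regression on multimodal imaging; ridge regression on synthetic features stands in here, and all sizes and the penalty are illustrative:

```python
# Toy brain-age estimation: regress chronological age on "imaging
# features", then define the brain age gap BAG = predicted - actual.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.standard_normal((n, p))           # synthetic imaging features
w = rng.standard_normal(p)
age = 60 + X @ w + rng.normal(0, 2, n)    # synthetic chronological age

train, test = slice(0, 150), slice(150, None)
lam = 1.0                                  # ridge penalty (illustrative)
A = X[train].T @ X[train] + lam * np.eye(p)
b = X[train].T @ (age[train] - age[train].mean())
w_hat = np.linalg.solve(A, b)              # closed-form ridge fit
pred = age[train].mean() + X[test] @ w_hat
bag = pred - age[test]                     # brain age gap per subject
print(bag.mean().round(2))
```

Longitudinal BAG trajectories, as in the study, would then come from repeating this prediction at each follow-up scan.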


FGWAS: Functional genome wide association analysis.

  • Chao Huang et al.
  • NeuroImage
  • 2017

Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs.


Predicting individual brain functional connectivity using a Bayesian hierarchical model.

  • Tian Dai et al.
  • NeuroImage
  • 2017

Network-oriented analysis of functional magnetic resonance imaging (fMRI), especially resting-state fMRI, has revealed important associations between abnormal connectivity and brain disorders such as schizophrenia, major depression and Alzheimer's disease. Imaging-based brain connectivity measures have become a useful tool for investigating the pathophysiology, progression and treatment response of psychiatric disorders and neurodegenerative diseases. Recent studies have started to explore the possibility of using functional neuroimaging to help predict disease progression and guide treatment selection for individual patients. These studies provide the impetus to develop statistical methodology that would help provide predictive information on disease progression-related or treatment-related changes in neural connectivity. To this end, we propose a prediction method based on a Bayesian hierarchical model that uses an individual's baseline fMRI scans, coupled with relevant subject characteristics, to predict the individual's future functional connectivity. A key advantage of the proposed method is that it can improve the accuracy of individualized prediction of connectivity by combining information from both group-level connectivity patterns that are common to subjects with similar characteristics and individual-level connectivity features that are particular to the specific subject. Furthermore, our method also offers statistical inference tools such as predictive intervals that help quantify the uncertainty or variability of the predicted outcomes. The proposed prediction method could be a useful approach to predict the changes in an individual patient's brain connectivity with the progression of a disease. It can also be used to predict a patient's post-treatment brain connectivity after a specified treatment regimen. Another utility of the proposed method is that it can be applied to test-retest imaging data to develop a more reliable estimator for individual functional connectivity. We show there exists a close connection between our proposed estimator and a recently developed shrinkage estimator of connectivity measures in the neuroimaging community. We develop an expectation-maximization (EM) algorithm for estimation of the proposed Bayesian hierarchical model. Simulation studies are performed to evaluate the accuracy of our proposed prediction methods. We illustrate the application of the methods with two data examples: the longitudinal resting-state fMRI from the ADNI2 study and the test-retest fMRI data from the Kirby21 study. In both the simulation studies and the fMRI data applications, we demonstrate that the proposed methods provide more accurate prediction and more reliable estimation of individual functional connectivity as compared with alternative methods.
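
The shrinkage connection mentioned above can be illustrated with a toy scalar version, where each subject's noisy connectivity estimate is pulled toward the group mean in proportion to its measurement noise; all variances and sizes here are invented:

```python
# Toy shrinkage estimator for a single connectivity value per subject:
# blend the individual estimate with the group mean, weighting by the
# within-subject (scan) noise relative to between-subject spread.
import numpy as np

rng = np.random.default_rng(3)
true_conn = rng.normal(0.4, 0.2, 20)            # per-subject true values
noisy = true_conn + rng.normal(0, 0.3, 20)      # scan-level estimates

sigma2_noise = 0.3 ** 2                          # within-subject variance
sigma2_between = noisy.var() - sigma2_noise      # crude between-subject var
lam = sigma2_noise / (sigma2_noise + max(sigma2_between, 1e-6))
shrunk = lam * noisy.mean() + (1 - lam) * noisy  # shrink toward group mean

err_raw = np.mean((noisy - true_conn) ** 2)
err_shrunk = np.mean((shrunk - true_conn) ** 2)
print(err_raw, err_shrunk)                       # shrinkage usually wins
```

The Bayesian hierarchical model in the paper generalizes this blend by also conditioning the "group mean" on subject covariates.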


Benchmarking functional connectome-based predictive models for resting-state fMRI.

  • Kamalaker Dadi et al.
  • NeuroImage
  • 2019

Functional connectomes reveal biomarkers of individual psychological or clinical traits. However, there is great variability in the analytic pipelines typically used to derive them from rest-fMRI cohorts. Here, we consider a specific type of study, using predictive models on the edge weights of functional connectomes, for which we highlight the best modeling choices. We systematically study the prediction performances of models in 6 different cohorts and a total of 2000 individuals, encompassing neurodegenerative (Alzheimer's disease, post-traumatic stress disorder), neuropsychiatric (schizophrenia, autism), and drug-impact (cannabis use) clinical settings, as well as a psychological trait (fluid intelligence). The typical prediction procedure from rest-fMRI consists of three main steps: defining brain regions, representing the interactions, and supervised learning. For each step we benchmark typical choices: 8 different ways of defining regions (either pre-defined or generated from the rest-fMRI data), 3 measures to build functional connectomes from the extracted time-series, and 10 classification models to compare functional interactions across subjects. Our benchmarks summarize more than 240 different pipelines and outline modeling choices that show consistent prediction performances in spite of variations in the populations and sites. We find that regions defined from functional data work best; that it is beneficial to capture between-region interactions with tangent-based parametrization of covariances, a midway between correlations and partial correlations; and that simple linear predictors such as a logistic regression give the best predictions. Our work is a step forward to establishing reproducible imaging-based biomarkers for clinical settings.
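
The tangent-space parametrization of covariances that the benchmark favours can be sketched with numpy alone. The covariances are toy data, and a real pipeline (e.g. nilearn's) would use a geometric rather than a simple Euclidean group mean:

```python
# Tangent-space embedding of covariance matrices: whiten each subject
# covariance C by the group mean G and take the matrix logarithm,
# yielding Euclidean features for a linear classifier.
import numpy as np

def sqrtm_inv(S):
    """Inverse matrix square root of a symmetric positive-definite S."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs / np.sqrt(vals)) @ vecs.T

def logm(S):
    """Matrix logarithm of a symmetric positive-definite S."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.log(vals)) @ vecs.T

rng = np.random.default_rng(4)
covs = []
for _ in range(5):                       # toy per-subject covariances
    A = rng.standard_normal((6, 40))
    covs.append(A @ A.T / 40 + 0.1 * np.eye(6))

G = np.mean(covs, axis=0)                # simple Euclidean group mean
W = sqrtm_inv(G)
tangent = [logm(W @ C @ W) for C in covs]
# Each tangent matrix is symmetric; its upper triangle is the
# edge-weight feature vector fed to the supervised-learning step.
feats = np.array([T[np.triu_indices(6)] for T in tangent])
print(feats.shape)   # (5, 21)
```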


A longitudinal model for functional connectivity networks using resting-state fMRI.

  • Brian Hart et al.
  • NeuroImage
  • 2018

Many neuroimaging studies collect functional magnetic resonance imaging (fMRI) data in a longitudinal manner. However, the current fMRI literature lacks a general framework for analyzing functional connectivity (FC) networks in fMRI data obtained from a longitudinal study. In this work, we build a novel longitudinal FC model using a variance components approach. First, for all subjects' visits, we account for the autocorrelation inherent in the fMRI time series data using a non-parametric technique. Second, we use a generalized least squares approach to estimate 1) the within-subject variance component shared across the population, 2) the baseline FC strength, and 3) the FC's longitudinal trend. Our novel method for longitudinal FC networks seeks to account for the within-subject dependence across multiple visits, the variability due to the subjects being sampled from a population, and the autocorrelation present in fMRI time series data, while restricting the number of parameters in order to make the method computationally feasible and stable. We develop a permutation testing procedure to draw valid inference on group differences in the baseline FC network and change in FC over longitudinal time between a set of patients and a comparable set of controls. To examine performance, we run a series of simulations and apply the model to longitudinal fMRI data collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Overall, we found no difference in the global FC network between Alzheimer's disease patients and healthy controls, but did find differing local aging patterns in the FC between the left hippocampus and the posterior cingulate cortex.
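
The permutation-testing step can be illustrated on a single toy FC edge; the group sizes, means, and permutation count below are illustrative, not the study's settings:

```python
# Permutation test for a group difference in one functional-connectivity
# edge: shuffle group labels to build the null distribution of the
# difference in means.
import numpy as np

rng = np.random.default_rng(5)
patients = rng.normal(0.30, 0.1, 25)     # FC strength, patient group
controls = rng.normal(0.45, 0.1, 25)     # FC strength, control group

obs = patients.mean() - controls.mean()
pooled = np.concatenate([patients, controls])
null = []
for _ in range(2000):
    perm = rng.permutation(pooled)
    null.append(perm[:25].mean() - perm[25:].mean())
# two-sided p-value: how often a relabelling is at least as extreme
p = np.mean(np.abs(null) >= abs(obs))
print(round(p, 3))
```

The paper's procedure applies this relabelling idea network-wide, jointly to baseline FC and its longitudinal trend.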


Sparse SPM: Group Sparse-dictionary learning in SPM framework for resting-state functional connectivity MRI analysis.

  • Young-Beom Lee et al.
  • NeuroImage
  • 2016

Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independence assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than relying on the independence assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease.


Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images.

  • Carlton Chu et al.
  • NeuroImage
  • 2012

There are growing numbers of studies using machine learning approaches to characterize patterns of anatomical difference discernible from neuroimaging data. The high dimensionality of image data often raises a concern that feature selection is needed to obtain optimal accuracy. Among previous studies, mostly using fixed sample sizes, some show greater predictive accuracies with feature selection, whereas others do not. In this study, we compared four common feature selection methods: 1) pre-selected regions of interest (ROIs) based on prior knowledge; 2) univariate t-test filtering; 3) recursive feature elimination (RFE); and 4) t-test filtering constrained by ROIs. The predictive accuracies achieved from different sample sizes, with and without feature selection, were compared statistically. To demonstrate the effect, we used grey matter segmented from the T1-weighted anatomical scans collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) as the input features to a linear support vector machine classifier. The objective was to characterize the patterns of difference between Alzheimer's disease (AD) patients and cognitively normal subjects, and also to characterize the difference between mild cognitive impairment (MCI) patients and normal subjects. In addition, we also compared the classification accuracies between MCI patients who converted to AD and MCI patients who did not convert within the period of 12 months. Predictive accuracies from two data-driven feature selection methods (t-test filtering and RFE) were no better than those achieved using whole brain data. We showed that we could achieve the most accurate characterizations by using prior knowledge of where to expect neurodegeneration (hippocampus and parahippocampal gyrus). Therefore, feature selection does improve the classification accuracies, but it depends on the method adopted. In general, larger sample sizes yielded higher accuracies, with less advantage obtained by using knowledge from the existing literature.
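
Univariate t-test filtering, one of the four methods compared, can be sketched as follows; the features are synthetic, and in practice the informative-voxel indices are of course unknown:

```python
# Univariate t-test feature filtering: score every voxel with a
# two-sample t statistic and keep the top-k before classification.
import numpy as np

rng = np.random.default_rng(6)
n_per, n_vox, k = 40, 1000, 50
informative = np.arange(10)               # only voxels 0-9 carry signal
A = rng.standard_normal((n_per, n_vox))   # group A "grey matter" features
B = rng.standard_normal((n_per, n_vox))   # group B
B[:, informative] += 1.0                  # group difference in 10 voxels

def tstat(a, b):
    """Two-sample t statistic per feature (equal-n, pooled-variance form)."""
    sp = np.sqrt((a.var(0, ddof=1) + b.var(0, ddof=1)) / 2)
    return (a.mean(0) - b.mean(0)) / (sp * np.sqrt(2 / a.shape[0]))

scores = np.abs(tstat(A, B))
selected = np.argsort(scores)[-k:]        # indices of the top-k voxels
hits = np.intersect1d(selected, informative).size
print(hits, "of", informative.size, "informative voxels kept")
```

To avoid optimistic bias, such filtering must be fit inside the cross-validation loop, on training data only.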


FVGWAS: Fast voxelwise genome wide association analysis of large-scale imaging genetic data.

  • Meiyan Huang et al.
  • NeuroImage
  • 2015

More and more large-scale imaging genetic studies are being widely conducted to collect a rich set of imaging, genetic, and clinical data to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. Several major big-data challenges arise from testing genome-wide (N_C > 12 million known variants) associations with signals at millions of locations (N_V ~ 10^6) in the brain from thousands of subjects (n ~ 10^3). The aim of this paper is to develop a Fast Voxelwise Genome Wide Association analysiS (FVGWAS) framework to efficiently carry out whole-genome analyses of whole-brain data. FVGWAS consists of three components: a heteroscedastic linear model, a global sure independence screening (GSIS) procedure, and a detection procedure based on wild bootstrap methods. Specifically, for standard linear association, the computational complexity is O(n N_V N_C) for the voxelwise genome wide association analysis (VGWAS) method, compared with O((N_C + N_V) n^2) for FVGWAS. Simulation studies show that FVGWAS is an efficient method of searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. Finally, we have successfully applied FVGWAS to a large-scale imaging genetic data analysis of ADNI data with 708 subjects, 193,275 voxels in RAVENS maps, and 501,584 SNPs; the total processing time was 203,645 s on a single CPU. Our FVGWAS may be a valuable statistical toolbox for large-scale imaging genetic analysis as the field is rapidly advancing with ultra-high-resolution imaging and whole-genome sequencing.
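
The screening idea behind GSIS-style procedures can be illustrated with a cheap marginal association score over synthetic genotypes; all sizes and the causal index are made up, and real GSIS screens voxelwise statistics rather than a single phenotype:

```python
# Sure-independence-style screening: rank variants by a cheap marginal
# score, keep the top few, and run the expensive model only on survivors.
import numpy as np

rng = np.random.default_rng(7)
n, n_snp = 300, 5000
G = rng.integers(0, 3, size=(n, n_snp)).astype(float)  # 0/1/2 genotypes
causal = 123                                           # hypothetical index
y = 0.5 * G[:, causal] + rng.standard_normal(n)        # toy phenotype

Gc = G - G.mean(0)
yc = y - y.mean()
# marginal score: squared correlation of each SNP with the phenotype
score = (Gc.T @ yc) ** 2 / (Gc.var(0) * n * (yc @ yc))
top = np.argsort(score)[-100:]                         # survivors
print(causal in top)  # True
```

Screening reduces the follow-up association tests from n_snp candidates to a short list, which is where the complexity savings come from.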


Obesity gene NEGR1 associated with white matter integrity in healthy young adults.

  • Emily L Dennis et al.
  • NeuroImage
  • 2014

Obesity is a crucial public health issue in developed countries, with implications for cardiovascular and brain health as we age. A number of commonly-carried genetic variants are associated with obesity. Here we aim to see whether variants in obesity-associated genes--NEGR1, FTO, MTCH2, MC4R, LRRN6C, MAP2K5, FAIM2, SEC16B, ETV5, BDNF-AS, ATXN2L, ATP2A1, KCTD15, and TNNI3K--are associated with white matter microstructural properties, assessed by high angular resolution diffusion imaging (HARDI) in young healthy adults between 20 and 30 years of age from the Queensland Twin Imaging study (QTIM). We began with a multi-locus approach testing how a number of common genetic risk factors for obesity at the single nucleotide polymorphism (SNP) level may jointly influence white matter integrity throughout the brain, and found a widespread genetic effect. Risk allele rs2815752 in NEGR1 was most associated with lower white matter integrity across a substantial portion of the brain. Across the area of significance in the bilateral posterior corona radiata, each additional copy of the risk allele was associated with a 2.2% lower average FA. This is the first study to find an association between an obesity risk gene and differences in white matter integrity. As our subjects were young and healthy, our results suggest that NEGR1 has effects on brain structure independent of its effect on obesity.


A probabilistic atlas of human brainstem pathways based on connectome imaging data.

  • Yuchun Tang et al.
  • NeuroImage
  • 2018

The brainstem is a critical structure that regulates vital autonomic functions, houses the cranial nerves and their nuclei, relays motor and sensory information between the brain and spinal cord, and modulates cognition, mood, and emotions. As a primary relay center, the fiber pathways of the brainstem include efferent and afferent connections among the cerebral cortex, spinal cord, and cerebellum. While diffusion MRI has been successfully applied to map various brain pathways, its application for the in vivo imaging of the brainstem pathways has been limited due to inadequate resolution and large susceptibility-induced distortion artifacts. With the release of high-resolution data from the Human Connectome Project (HCP), there is increasing interest in mapping human brainstem pathways. Previous works relying on HCP data to study brainstem pathways, however, did not consider the prevalence (>80%) of large distortions in the brainstem even after the application of correction procedures from the HCP-Pipeline. They were also limited in the lack of adequate consideration of subject variability in either fiber pathways or region of interests (ROIs) used for bundle reconstruction. To overcome these limitations, we develop in this work a probabilistic atlas of 23 major brainstem bundles using high-quality HCP data passing rigorous quality control. For the large-scale data from the 500-Subject release of HCP, we conducted extensive quality controls to exclude subjects with severe distortions in the brainstem area. After that, we developed a systematic protocol to manually delineate 1300 ROIs on 20 HCP subjects (10 males; 10 females) for the reconstruction of fiber bundles using tractography techniques. 
Finally, we leveraged our novel connectome modeling techniques, including high order fiber orientation distribution (FOD) reconstruction from multi-shell diffusion imaging and topography-preserving tract filtering algorithms, to reconstruct the 23 fiber bundles for each subject, which were then used to calculate the probabilistic atlases in MNI152 space for public release. In our experimental results, we demonstrate that our method yielded anatomically faithful reconstructions of the brainstem pathways and achieved improved performance in comparison with an existing atlas of cerebellar peduncles based on HCP data. These atlases have been publicly released on NITRC (https://www.nitrc.org/projects/brainstem_atlas/) and can be readily used by brain imaging researchers interested in studying brainstem pathways.
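The core of a probabilistic atlas is simple: once each subject's binary bundle mask is warped to a common space (here MNI152), the atlas value at each voxel is the fraction of subjects whose bundle covers that voxel. A minimal sketch, using a toy 1-D "volume" and made-up masks rather than the authors' data:

```python
# Minimal sketch of probabilistic-atlas construction from per-subject binary
# bundle masks already aligned to a common space. Toy 1-D data for illustration.

def probabilistic_atlas(masks):
    """masks: list of equal-length binary (0/1) voxel arrays, one per subject.

    Returns, per voxel, the proportion of subjects whose bundle covers it."""
    n_subjects = len(masks)
    n_voxels = len(masks[0])
    return [sum(m[v] for m in masks) / n_subjects for v in range(n_voxels)]

# three subjects, five voxels
masks = [
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
]
atlas = probabilistic_atlas(masks)
print(atlas)  # voxel 0 present in all subjects (1.0), voxels 1-2 in two of three
```

Thresholding such an atlas (e.g. keeping voxels with probability > 0.5) gives a group-level bundle mask that downstream users can apply to their own MNI-space data.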


PETPVE12: an SPM toolbox for Partial Volume Effects correction in brain PET - Application to amyloid imaging with AV45-PET.

  • Gabriel Gonzalez-Escamilla‎ et al.
  • NeuroImage‎
  • 2017‎

Positron emission tomography (PET) allows detecting molecular brain changes in vivo. However, the accuracy of PET is limited by partial volume effects (PVE) that affect quantitative analysis and visual interpretation of the images. Although PVE-correction methods have been shown to effectively increase the correspondence of the measured signal with the true regional tracer uptake, these procedures are still not commonly applied in either clinical or research settings. Here, we present an implementation of well validated PVE-correction procedures as an SPM toolbox, PETPVE12, for automated processing. We demonstrate its utility by a comprehensive analysis of the effects of PVE-correction on amyloid-sensitive AV45-PET data from 85 patients with Alzheimer's disease (AD) and 179 cognitively normal (CN) elderly. Effects of PVE-correction on global cortical standard uptake value ratios (SUVR) and the power of diagnostic group separation were assessed for the region-wise geometric transfer matrix method (PVEc-GTM), as well as for the 3-compartmental voxel-wise "Müller-Gärtner" method (PVEc-MG). Both PVE-correction methods resulted in decreased global cortical SUVRs in the low to middle range of SUVR values, and in increased global cortical SUVRs at the high values. As a consequence, the average SUVR of the CN group was reduced, whereas the average SUVR of the AD group was increased by PVE-correction. These effects were also reflected in increased accuracies of group discrimination after PVEc-GTM (AUC=0.86) and PVEc-MG (AUC=0.89) compared to standard non-corrected SUVR (AUC=0.84). Voxel-wise analyses of PVEc-MG corrected data also demonstrated improved detection of regionally increased AV45 SUVR values in AD patients. These findings complement the growing evidence for a beneficial effect of PVE-correction in quantitative analysis of amyloid-sensitive PET data.
The novel PETPVE12 toolbox significantly facilitates the application of PVE-correction, particularly within SPM-based processing pipelines, and is expected to foster more widespread use of PVE-correction in brain PET. The toolbox is freely available at http://www.fil.ion.ucl.ac.uk/spm/ext/#PETPVE12.
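The region-wise GTM method mentioned above can be reduced to a linear system: observed regional means are the true regional values mixed by a transfer matrix W (each entry giving the fraction of one region's true signal captured in another region's ROI, derived from the scanner point-spread function), and correction solves W @ t = observed. A two-region sketch with toy numbers, not AV45 data or the toolbox's actual code:

```python
# Toy geometric transfer matrix (GTM) correction for two regions.
# W mixes true regional uptake into observed ROI means; correction inverts W.

def solve_2x2(w, obs):
    """Solve [[a, b], [c, d]] @ t = obs by Cramer's rule."""
    (a, b), (c, d) = w
    det = a * d - b * c
    t0 = (obs[0] * d - b * obs[1]) / det
    t1 = (a * obs[1] - obs[0] * c) / det
    return [t0, t1]

W = [[0.85, 0.15],   # 85% of region 1's true signal lands in ROI 1, 15% spills in
     [0.10, 0.90]]   # from region 2 (and vice versa on the second row)
true_uptake = [2.0, 1.0]                              # hot region, cold region
observed = [W[0][0] * 2.0 + W[0][1] * 1.0,            # 1.85: hot region pulled down
            W[1][0] * 2.0 + W[1][1] * 1.0]            # 1.10: cold region pulled up
corrected = solve_2x2(W, observed)
print([round(t, 6) for t in corrected])  # recovers [2.0, 1.0]
```

This also illustrates the abstract's observed pattern: uncorrected PVE pulls high-uptake (AD-like) values down and low-uptake (CN-like) values up, so correction increases AD SUVRs and decreases CN SUVRs, widening group separation.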


Retrospective motion artifact correction of structural MRI images using deep learning improves the quality of cortical surface reconstructions.

  • Ben A Duffy‎ et al.
  • NeuroImage‎
  • 2021‎

Head motion during MRI acquisition presents significant challenges for neuroimaging analyses. In this work, we present a retrospective motion correction framework built on a Fourier domain motion simulation model combined with established 3D convolutional neural network (CNN) architectures. Quantitative evaluation metrics were used to validate the method on three separate multi-site datasets. The 3D CNN was trained using motion-free images that were corrupted using simulated artifacts. CNN-based correction successfully diminished the severity of artifacts on real motion-affected data in a separate test dataset, as measured by significant improvements in image quality metrics compared to a minimal-motion reference image. On the test set of 13 image pairs, the mean peak signal-to-noise ratio improved from 31.7 to 33.3 dB. Furthermore, improvements in cortical surface reconstruction quality were demonstrated using a blinded manual quality assessment on the Parkinson's Progression Markers Initiative (PPMI) dataset. Applying the correction algorithm reduced the number of quality control failures from 61 to 38 out of a total of 617 images. On this same dataset, we investigated whether motion correction resulted in a more statistically significant relationship between cortical thickness and Parkinson's disease. Before correction, significant cortical thinning was restricted to limited regions within the temporal and frontal lobes. After correction, cortical thinning was more widespread and significant bilaterally across the temporal lobes and frontal cortex. Our results highlight the utility of image domain motion correction for use in studies with a high prevalence of motion artifacts, such as studies of movement disorders as well as infant and pediatric subjects.
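The training-data idea above rests on a standard Fourier property: a rigid translation of the object multiplies k-space by a linear phase ramp, so motion artifacts can be simulated by phase-modulating (subsets of) k-space lines of a motion-free image. A 1-D sketch of the phase-ramp mechanism, using a naive DFT on toy data; real MRI simulation is 3-D and applies different transforms per readout:

```python
# 1-D sketch of Fourier-domain motion simulation: a circular translation is
# imposed entirely in k-space via a linear phase ramp. Toy signal, naive DFT.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def translate_in_kspace(x, shift):
    """Circularly shift x by `shift` samples using only a k-space phase ramp."""
    n = len(x)
    X = dft(x)
    X_shifted = [X[k] * cmath.exp(-2j * cmath.pi * k * shift / n) for k in range(n)]
    return [v.real for v in idft(X_shifted)]

signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
moved = translate_in_kspace(signal, 2)
print([round(v, 6) for v in moved])  # the spike moves from index 2 to index 4
```

Applying such phase ramps only to some k-space lines (as if the head moved partway through acquisition) produces the ghosting and ringing artifacts used to corrupt the motion-free training images.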


Disentangling time series between brain tissues improves fMRI data quality using a time-dependent deep neural network.

  • Zhengshi Yang‎ et al.
  • NeuroImage‎
  • 2020‎

Functional MRI (fMRI) is a prominent imaging technique for probing brain function; however, substantial noise from multiple sources compromises the reliability and reproducibility of fMRI data analysis and limits its clinical applications. Extensive effort has been devoted to improving fMRI data quality, but over the last two decades no consensus has been reached on which technique is most effective. In this study, we developed a novel deep neural network for denoising fMRI data, named denoising neural network (DeNN). This deep neural network is 1) applicable without requiring externally recorded data to model noise; 2) spatially and temporally adaptive to the variability of noise in different brain regions at different time points; 3) automated to output denoised data without manual intervention; 4) trained and applied on each subject separately; and 5) insensitive to the repetition time (TR) of fMRI data. When we compared DeNN with a number of nuisance regression methods for denoising fMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, only DeNN had connectivity close to zero for functionally uncorrelated regions and successfully identified unbiased correlations between the posterior cingulate cortex seed and multiple brain regions within the default mode network or task positive network. The whole brain functional connectivity maps computed with DeNN-denoised data are approximately three times as homogeneous as the functional connectivity maps computed with raw data. Furthermore, the improved homogeneity strengthens rather than weakens the statistical power of fMRI in detecting intrinsic functional differences between cognitively normal subjects and subjects with Alzheimer's disease.
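The nuisance regression baselines DeNN is compared against all share one mechanism: fit nuisance time courses (motion parameters, physiological signals, tissue averages) to each voxel's time series by least squares and keep the residual. A single-regressor sketch with toy data, not ADNI data or the paper's pipeline:

```python
# Toy nuisance-regression denoising: remove the least-squares fit of one
# nuisance regressor (e.g. a motion trace) from a voxel time series.

def regress_out(ts, nuisance):
    """Remove the best least-squares fit of `nuisance` from `ts` (both demeaned)."""
    n = len(ts)
    ts_mean = sum(ts) / n
    nu_mean = sum(nuisance) / n
    y = [v - ts_mean for v in ts]
    r = [v - nu_mean for v in nuisance]
    beta = sum(yi * ri for yi, ri in zip(y, r)) / sum(ri * ri for ri in r)
    return [yi - beta * ri for yi, ri in zip(y, r)]

# toy voxel time series = neural signal + scaled copy of a "motion" regressor
motion = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
signal = [1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0, 0.0]
observed = [s + 0.5 * m for s, m in zip(signal, motion)]
clean = regress_out(observed, motion)
print([round(v, 6) for v in clean])  # recovers the signal component
```

Unlike this fixed linear projection, DeNN learns a time-dependent, spatially adaptive mapping per subject, which is the paper's claimed advantage over such regression baselines.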

