Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–13 of 13.

Iterative framework for the joint segmentation and CT synthesis of MR images: application to MRI-only radiotherapy treatment planning.

  • Ninon Burgos‎ et al.
  • Physics in medicine and biology‎
  • 2017‎

To tackle the problem of magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP), we propose a multi-atlas information propagation scheme that jointly segments organs and generates pseudo x-ray computed tomography (CT) data from structural MR images (T1-weighted and T2-weighted). As the performance of the method strongly depends on the quality of the atlas database composed of multiple sets of aligned MR, CT and segmented images, we also propose a robust way of registering atlas MR and CT images, which combines structure-guided registration, and CT and MR image synthesis. We first evaluated the proposed framework in terms of segmentation and CT synthesis accuracy on 15 subjects with prostate cancer. The segmentations obtained with the proposed method were compared using the Dice score coefficient (DSC) to the manual segmentations. Mean DSCs of 0.73, 0.90, 0.77 and 0.90 were obtained for the prostate, bladder, rectum and femur heads, respectively. The mean absolute error (MAE) and the mean error (ME) were computed between the reference CTs (non-rigidly aligned to the MRs) and the pseudo CTs generated with the proposed method. The MAE was on average [Formula: see text] HU and the ME [Formula: see text] HU. We then performed a dosimetric evaluation by re-calculating plans on the pseudo CTs and comparing them to the plans optimised on the reference CTs. We compared the cumulative dose volume histograms (DVH) obtained for the pseudo CTs to the DVH obtained for the reference CTs in the planning target volume (PTV) located in the prostate, and in the organs at risk at different DVH points. We obtained average differences of [Formula: see text] in the PTV for [Formula: see text], and between [Formula: see text] and 0.05% in the PTV, bladder, rectum and femur heads for D mean and [Formula: see text]. 
Overall, we demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, potentially bypassing the need for a CT scan for accurate RTP.
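The evaluation metrics used above (Dice score coefficient between segmentations; MAE and ME in Hounsfield units between reference and pseudo CTs) can be stated precisely in a few lines. A minimal NumPy sketch, with hypothetical helper names rather than the authors' code:

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice score coefficient between two binary segmentation masks."""
    seg_a = np.asarray(seg_a, dtype=bool)
    seg_b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    return 2.0 * intersection / (seg_a.sum() + seg_b.sum())

def mae_me(reference_ct, pseudo_ct):
    """Mean absolute error and mean error (in HU) between aligned CT volumes."""
    diff = np.asarray(pseudo_ct, dtype=float) - np.asarray(reference_ct, dtype=float)
    return np.abs(diff).mean(), diff.mean()
```

The ME complements the MAE by revealing systematic over- or under-estimation that cancels out in the absolute error.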


Voxelwise atlas rating for computer assisted diagnosis: Application to congenital heart diseases of the great arteries.

  • Maria A Zuluaga‎ et al.
  • Medical image analysis‎
  • 2015‎

Atlas-based analysis methods rely on the morphological similarity between the atlas and target images, and on the availability of labelled images. Problems can arise when the deformations introduced by pathologies affect the similarity between the atlas and a patient's image. The aim of this work is to exploit the morphological dissimilarities between atlas databases and pathological images to diagnose the underlying clinical condition, while avoiding the dependence on labelled images. We propose a voxelwise atlas rating approach (VoxAR) relying on multiple atlas databases, each representing a particular condition. Using a local image similarity measure to assess the morphological similarity between the atlas and target images, we define a rating map that displays, for each voxel, the condition of the atlases most similar to the target. The final diagnosis is established by assigning the condition of the database most represented in the rating map. We applied the method to diagnose three different conditions associated with dextro-transposition of the great arteries, a congenital heart disease. The proposed approach outperforms other state-of-the-art methods using annotated images, with an accuracy of 97.3% when evaluated using cross-validation on a set of 60 whole-heart MR images containing healthy and pathological subjects.
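The VoxAR decision rule described above — a per-voxel winner among condition-specific atlas databases, followed by a majority vote across voxels — can be sketched compactly. This is a simplification assuming one pre-computed similarity map per condition; all names are hypothetical:

```python
import numpy as np

def voxar_diagnosis(similarity_maps):
    """similarity_maps: dict mapping condition name -> (n_voxels,) array of
    local similarity between the target image and that condition's atlases.
    Per voxel, record the condition with the highest similarity, then
    diagnose the condition most represented across all voxels."""
    conditions = list(similarity_maps)
    stacked = np.stack([np.asarray(similarity_maps[c], dtype=float)
                        for c in conditions])          # (n_conditions, n_voxels)
    rating = stacked.argmax(axis=0)                    # winning condition per voxel
    counts = np.bincount(rating, minlength=len(conditions))
    return conditions[int(counts.argmax())]            # majority vote
```

In the paper, the similarity maps come from a local image similarity measure evaluated after registration; here they are taken as given.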


Study protocol: Insight 46 - a neuroscience sub-study of the MRC National Survey of Health and Development.

  • Christopher A Lane‎ et al.
  • BMC neurology‎
  • 2017‎

Increasing age is the biggest risk factor for dementia, of which Alzheimer's disease is the commonest cause. The pathological changes underpinning Alzheimer's disease are thought to develop at least a decade prior to the onset of symptoms. Molecular positron emission tomography and multi-modal magnetic resonance imaging allow key pathological processes underpinning cognitive impairment - including β-amyloid deposition, vascular disease, network breakdown and atrophy - to be assessed repeatedly and non-invasively. This enables potential determinants of dementia to be delineated earlier, and therefore opens a pre-symptomatic window where intervention may prevent the onset of cognitive symptoms.


Reproducible evaluation of classification methods in Alzheimer's disease: Framework and application to MRI and PET data.

  • Jorge Samper-González‎ et al.
  • NeuroImage‎
  • 2018‎

A large number of papers have introduced novel machine learning and feature extraction methods for automatic classification of Alzheimer's disease (AD). However, while the vast majority of these works use the public dataset ADNI for evaluation, they are difficult to reproduce because different key components of the validation are often not readily available. These components include selected participants and input data, image preprocessing and cross-validation procedures. The performance of the different approaches is also difficult to compare objectively. In particular, it is often difficult to assess which part of the method (e.g. preprocessing, feature extraction or classification algorithms) provides a real improvement, if any. In the present paper, we propose a framework for reproducible and objective classification experiments in AD using three publicly available datasets (ADNI, AIBL and OASIS). The framework comprises: i) automatic conversion of the three datasets into a standard format (BIDS); ii) a modular set of preprocessing pipelines, feature extraction and classification methods, together with an evaluation framework, that provide a baseline for benchmarking the different components. We demonstrate the use of the framework for a large-scale evaluation on 1960 participants using T1 MRI and FDG PET data. In this evaluation, we assess the influence of different modalities, preprocessing, feature types (regional or voxel-based features), classifiers, training set sizes and datasets. Performances were in line with the state-of-the-art. FDG PET outperformed T1 MRI for all classification tasks. No difference in performance was found for the use of different atlases, image smoothing, partial volume correction of FDG PET images, or feature type. Linear SVM and L2-logistic regression resulted in similar performance and both outperformed random forests. The classification performance increased along with the number of subjects used for training. 
Classifiers trained on ADNI generalized well to AIBL and OASIS. All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software (www.clinica.run) and the paper-specific code is available at: https://gitlab.icm-institute.org/aramislab/AD-ML.
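A key ingredient of such a reproducible evaluation is that every method is scored on exactly the same cross-validation partitions. As a rough illustration (a hypothetical helper, not the actual Clinica implementation), a deterministic subject-level k-fold split can be written as:

```python
import random

def subject_level_kfold(subject_ids, k=5, seed=42):
    """Deterministic subject-level k-fold split, so that every classifier
    is compared on identical train/test partitions. Splitting by subject
    (not by scan) avoids leaking a subject across train and test."""
    ids = sorted(subject_ids)            # canonical order before shuffling
    random.Random(seed).shuffle(ids)     # fixed seed -> reproducible folds
    folds = [ids[i::k] for i in range(k)]
    return [
        ([s for j, f in enumerate(folds) if j != i for s in f], folds[i])
        for i in range(k)
    ]
```

Because the seed and the sort order are fixed, rerunning the evaluation months later, or with a different classifier, yields byte-identical partitions.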


AD Course Map charts Alzheimer's disease progression.

  • Igor Koval‎ et al.
  • Scientific reports‎
  • 2021‎

Alzheimer's disease (AD) is characterized by progressive alterations seen in brain images, which give rise to the onset of various sets of symptoms. The variability in the dynamics of changes in both brain images and cognitive impairments remains poorly understood. This paper introduces AD Course Map, a spatiotemporal atlas of Alzheimer's disease progression. It summarizes the variability in the progression of a series of neuropsychological assessments, the propagation of hypometabolism and cortical thinning across brain regions, and the deformation of the shape of the hippocampus. The analysis of these variations highlights strong genetic determinants of the progression, as well as possible compensatory mechanisms at play during disease progression. AD Course Map also predicts the patient's cognitive decline with better accuracy than the 56 methods benchmarked in the open challenge TADPOLE. Finally, AD Course Map is used to simulate cohorts of virtual patients developing Alzheimer's disease. AD Course Map therefore offers new tools for exploring the progression of AD and personalizing patient care.


Ensemble Learning of Convolutional Neural Network, Support Vector Machine, and Best Linear Unbiased Predictor for Brain Age Prediction: ARAMIS Contribution to the Predictive Analytics Competition 2019 Challenge.

  • Baptiste Couvy-Duchesne‎ et al.
  • Frontiers in psychiatry‎
  • 2020‎

We ranked third in the Predictive Analytics Competition (PAC) 2019 challenge by achieving a mean absolute error (MAE) of 3.33 years in predicting age from T1-weighted MRI brain images. Our approach combined seven algorithms that allow generating predictions when the number of features exceeds the number of observations: two versions of the best linear unbiased predictor (BLUP), a support vector machine (SVM), two shallow convolutional neural networks (CNNs), and the well-known ResNet and Inception V1 architectures. Ensemble learning was performed by estimating weights via linear regression in a hold-out subset of the training sample. We further evaluated and identified factors that could influence prediction accuracy: choice of algorithm, ensemble learning, and features used as input/MRI image processing. Our prediction error was correlated with age, and the absolute error was greater for older participants, suggesting that the training sample should be enlarged for this subgroup. Our results may be used to guide researchers in building age predictors on healthy individuals, which can be used in research and in the clinic as non-specific predictors of disease status.
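The ensemble step described above — estimating combination weights by linear regression on a hold-out subset — reduces to an ordinary least-squares fit of the true ages on the individual models' hold-out predictions. A minimal NumPy sketch, with hypothetical names (not the authors' code):

```python
import numpy as np

def ensemble_weights(holdout_preds, holdout_age):
    """Estimate ensemble weights by OLS: regress the true age on the
    per-model hold-out predictions.
    holdout_preds: (n_subjects, n_models) array of predictions."""
    X = np.column_stack([np.ones(len(holdout_age)), holdout_preds])  # intercept + models
    coef, *_ = np.linalg.lstsq(X, holdout_age, rcond=None)
    return coef   # coef[0] = intercept, coef[1:] = per-model weights

def ensemble_predict(preds, coef):
    """Combine new per-model predictions with the learned weights."""
    return coef[0] + np.asarray(preds) @ coef[1:]
```

Fitting the weights on held-out data, rather than on the training sample itself, prevents the ensemble from simply favouring the most overfitted base model.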


A multi-centre evaluation of eleven clinically feasible brain PET/MRI attenuation correction techniques using a large cohort of patients.

  • Claes N Ladefoged‎ et al.
  • NeuroImage‎
  • 2017‎

To accurately quantify the radioactivity concentration measured by PET, emission data need to be corrected for photon attenuation; however, the MRI signal cannot easily be converted into attenuation values, making attenuation correction (AC) in PET/MRI challenging. In order to further improve the current vendor-implemented MR-AC methods for absolute quantification, a number of prototype methods have been proposed in the literature. These can be categorized into three types: template/atlas-based, segmentation-based, and reconstruction-based. These proposed methods in general demonstrated improvements compared to vendor-implemented AC, and many studies report deviations in PET uptake after AC of only a few percent from a gold standard CT-AC. Using a unified quantitative evaluation with identical metrics, subject cohort, and common CT-based reference, the aims of this study were to evaluate a selection of novel methods proposed in the literature, and identify the ones suitable for clinical use.


Multi-contrast attenuation map synthesis for PET/MR scanners: assessment on FDG and Florbetapir PET tracers.

  • Ninon Burgos‎ et al.
  • European journal of nuclear medicine and molecular imaging‎
  • 2015‎

Positron Emission Tomography/Magnetic Resonance Imaging (PET/MR) scanners are expected to offer a new range of clinical applications. Attenuation correction is an essential requirement for the quantification of PET data, but MRI images do not directly provide a patient-specific attenuation map. Methods: We further validate and extend a Computed Tomography (CT) and attenuation map (μ-map) synthesis method based on pre-acquired MRI-CT image pairs. The validation consists of comparing the CT images synthesised with the proposed method to the original CT images. PET images were acquired using two different tracers ((18)F-FDG and (18)F-florbetapir). They were then reconstructed and corrected for attenuation using the synthetic μ-maps, and compared to the reference PET images corrected with the CT-based μ-maps. During the validation, we observed that the CT synthesis was inaccurate in areas such as the neck and the cerebellum, and propose a refinement to mitigate these problems, as well as an extension of the method to multi-contrast MRI data. Results: With the proposed improvements, we observed a significant enhancement in CT synthesis, resulting in a reduced absolute error and a decrease in the bias when reconstructing PET images. For both tracers, on average, the absolute difference between the reference PET images and the PET images corrected with the proposed method was less than 2%, with a bias below 1%. Conclusion: With the proposed method, attenuation information can be accurately derived from MRI images by synthesising CT using routine anatomical sequences. MRI sequences, or combinations of sequences, can be used to synthesise CT images, as long as they provide sufficient anatomical information.
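The figures of merit quoted above (mean absolute difference below 2%, bias below 1%) compare attenuation-corrected PET images voxel by voxel against the CT-corrected reference. A minimal sketch of such a comparison, assuming a pre-computed brain mask and using hypothetical names:

```python
import numpy as np

def relative_difference(pet_ref, pet_test, mask):
    """Voxelwise relative difference (%) between a reference-AC PET image
    and one corrected with a synthetic mu-map, inside a mask.
    Returns (mean absolute difference %, bias %)."""
    m = np.asarray(mask, dtype=bool)
    ref = np.asarray(pet_ref, dtype=float)[m]
    test = np.asarray(pet_test, dtype=float)[m]
    rel = 100.0 * (test - ref) / ref
    return np.abs(rel).mean(), rel.mean()
```

Reporting both quantities matters: a small bias can hide large voxelwise errors of opposite sign, which the absolute difference exposes.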


An Automated Pipeline for the Analysis of PET Data on the Cortical Surface.

  • Arnaud Marcoux‎ et al.
  • Frontiers in neuroinformatics‎
  • 2018‎

We present a fully automatic pipeline for the analysis of PET data on the cortical surface. Our pipeline combines tools from FreeSurfer and PETPVC, and consists of (i) co-registration of PET and T1-w MRI (T1) images, (ii) intensity normalization, (iii) partial volume correction, (iv) robust projection of the PET signal onto the subject's cortical surface, (v) spatial normalization to a template, and (vi) atlas statistics. We evaluated the performance of the proposed workflow by performing group comparisons and showed that the approach was able to identify the areas of hypometabolism characteristic of different dementia syndromes: Alzheimer's disease (AD) and both the semantic and logopenic variants of primary progressive aphasia. We also showed that these results were comparable to those obtained with a standard volume-based approach. We then performed individual classifications and showed that vertices can be used as features to differentiate cognitively normal and AD subjects. This pipeline is integrated into Clinica, an open-source software platform for neuroscience studies available at www.clinica.run.


Reduced acquisition time PET pharmacokinetic modelling using simultaneous ASL-MRI: proof of concept.

  • Catherine J Scott‎ et al.
  • Journal of cerebral blood flow and metabolism : official journal of the International Society of Cerebral Blood Flow and Metabolism‎
  • 2019‎

Pharmacokinetic modelling on dynamic positron emission tomography (PET) data is a quantitative technique. However, the long acquisition time is prohibitive for routine clinical use. Instead, the semi-quantitative standardised uptake value ratio (SUVR) from a shorter static acquisition is used, despite its sensitivity to blood flow confounding longitudinal analysis. A method has been proposed to reduce the dynamic acquisition time for quantification by incorporating cerebral blood flow (CBF) information from arterial spin labelling (ASL) magnetic resonance imaging (MRI) into the pharmacokinetic modelling. In this work, we optimise and validate this framework for a study of ageing and preclinical Alzheimer's disease. This methodology adapts the simplified reference tissue model (SRTM) for a reduced acquisition time (RT-SRTM) and is applied to [18F]-florbetapir PET data for amyloid-β quantification. Evaluation shows that the optimised RT-SRTM can achieve amyloid burden estimation from a 30-min PET/MR acquisition which is comparable with the gold standard SRTM applied to 60 min of PET data. Conversely, SUVR showed a significantly higher error and bias, and a statistically significant correlation with tracer delivery due to the influence of blood flow. The optimised RT-SRTM produced amyloid burden estimates which were uncorrelated with tracer delivery indicating its suitability for longitudinal studies.
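For contrast with the kinetic modelling discussed above, the semi-quantitative SUVR is simply a ratio of regional means, which is why it is fast to acquire but sensitive to blood-flow changes. A minimal sketch (hypothetical helper; the cerebellum is a common reference region for amyloid tracers such as florbetapir):

```python
import numpy as np

def suvr(pet_image, target_mask, reference_mask):
    """Standardised uptake value ratio: mean uptake in a target region
    divided by mean uptake in a reference region."""
    pet = np.asarray(pet_image, dtype=float)
    target = pet[np.asarray(target_mask, dtype=bool)].mean()
    reference = pet[np.asarray(reference_mask, dtype=bool)].mean()
    return target / reference
```

Because both numerator and denominator are static uptake averages, any flow-driven change in tracer delivery propagates into the SUVR, which is the confound the RT-SRTM approach is designed to avoid.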


Pilot study of repeated blood-brain barrier disruption in patients with mild Alzheimer's disease with an implantable ultrasound device.

  • Stéphane Epelbaum‎ et al.
  • Alzheimer's research & therapy‎
  • 2022‎

Temporary disruption of the blood-brain barrier (BBB) using pulsed ultrasound leads to the clearance of both amyloid and tau from the brain, increased neurogenesis, and mitigation of cognitive decline in pre-clinical models of Alzheimer's disease (AD) while also increasing BBB penetration of therapeutic antibodies. The goal of this pilot clinical trial was to investigate the safety and efficacy of this approach in patients with mild AD using an implantable ultrasound device.


Clinica: An Open-Source Software Platform for Reproducible Clinical Neuroscience Studies.

  • Alexandre Routier‎ et al.
  • Frontiers in neuroinformatics‎
  • 2021‎

We present Clinica (www.clinica.run), an open-source software platform designed to make clinical neuroscience studies easier and more reproducible. Clinica aims for researchers to (i) spend less time on data management and processing, (ii) perform reproducible evaluations of their methods, and (iii) easily share data and results within their institution and with external collaborators. The core of Clinica is a set of automatic pipelines for processing and analysis of multimodal neuroimaging data (currently, T1-weighted MRI, diffusion MRI, and PET data), as well as tools for statistics, machine learning, and deep learning. It relies on the brain imaging data structure (BIDS) for the organization of raw neuroimaging datasets and on established tools written by the community to build its pipelines. It also provides converters of public neuroimaging datasets to BIDS (currently ADNI, AIBL, OASIS, and NIFD). Processed data include image-valued scalar fields (e.g., tissue probability maps), meshes, surface-based scalar fields (e.g., cortical thickness maps), or scalar outputs (e.g., regional averages). These data follow the ClinicA Processed Structure (CAPS) format which shares the same philosophy as BIDS. Consistent organization of raw and processed neuroimaging files facilitates the execution of single pipelines and of sequences of pipelines, as well as the integration of processed data into statistics or machine learning frameworks. The target audience of Clinica is neuroscientists or clinicians conducting clinical neuroscience studies involving multimodal imaging, and researchers developing advanced machine learning algorithms applied to neuroimaging data.


Reproducible Evaluation of Diffusion MRI Features for Automatic Classification of Patients with Alzheimer's Disease.

  • Junhao Wen‎ et al.
  • Neuroinformatics‎
  • 2021‎

Diffusion MRI is the modality of choice to study alterations of white matter. In past years, various works have used diffusion MRI for the automatic classification of Alzheimer's disease. However, the classification performance obtained with different approaches is difficult to compare because of variations in components such as input data, participant selection, image preprocessing, feature extraction, feature rescaling (FR), feature selection (FS) and cross-validation (CV) procedures. Moreover, these studies are also difficult to reproduce because these different components are not readily available. In a previous work (Samper-González et al. 2018), we proposed an open-source framework for the reproducible evaluation of AD classification from T1-weighted (T1w) MRI and PET data. In the present paper, we first extend this framework to diffusion MRI data. Specifically, we add conversion of diffusion MRI ADNI data into the BIDS standard, and pipelines for diffusion MRI preprocessing and feature extraction. We then apply the framework to compare different components. First, FS has a positive impact on classification results: the highest balanced accuracy (BA) improved from 0.76 to 0.82 for the task CN vs AD. Secondly, voxel-wise features generally give better performance than regional features. Fractional anisotropy (FA) and mean diffusivity (MD) provided comparable results for voxel-wise features. Moreover, we observe that the poor performance obtained in tasks involving MCI was potentially caused by the small data samples rather than by the data imbalance. Furthermore, no substantial classification difference was found across different degrees of smoothing and registration methods. In addition, we demonstrate that using non-nested validation of FS leads to unreliable and over-optimistic results: a 5% up to 40% relative increase in BA. Lastly, with proper FR and FS, the performance of diffusion MRI features is comparable to that of T1w MRI.
All the code of the framework and the experiments is publicly available: general-purpose tools have been integrated into the Clinica software package (www.clinica.run) and the paper-specific code is available at https://github.com/aramis-lab/AD-ML.
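The optimistic bias of non-nested feature selection, quantified above, comes from letting test-fold data influence which features are chosen. A toy nested scheme in Python/NumPy (a nearest-centroid stand-in for the paper's classifiers; all names are hypothetical) makes the discipline explicit:

```python
import numpy as np

def select_topk(X, y, k):
    """Rank features by absolute correlation with the label, using ONLY
    the rows passed in (i.e. the training fold)."""
    corr = np.abs(np.corrcoef(X.T, y)[:-1, -1])
    return np.argsort(corr)[::-1][:k]

def nested_fs_accuracy(X, y, k=5, folds=5, seed=0):
    """Cross-validated accuracy with feature selection re-fitted inside
    each training fold (nested), avoiding the optimistic bias of
    selecting features on the full dataset first."""
    order = np.random.default_rng(seed).permutation(len(y))
    scores = []
    for i in range(folds):
        test = order[i::folds]
        train = np.setdiff1d(order, test)
        feats = select_topk(X[train], y[train], k)       # FS on train only
        # nearest-centroid classifier on the selected features
        c0 = X[train][y[train] == 0][:, feats].mean(axis=0)
        c1 = X[train][y[train] == 1][:, feats].mean(axis=0)
        d0 = np.linalg.norm(X[test][:, feats] - c0, axis=1)
        d1 = np.linalg.norm(X[test][:, feats] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        scores.append((pred == y[test]).mean())
    return float(np.mean(scores))
```

Running `select_topk` once on all subjects before splitting would reproduce the non-nested shortcut the paper warns against; moving it inside the fold loop is the entire fix.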



Publications Per Year (chart omitted)