Searching across hundreds of databases

This service searches only literature that cites resources. Be aware that the searchable corpus is limited to documents containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1-8 of 8.

Derived Data Storage and Exchange Workflow for Large-Scale Neuroimaging Analyses on the BIRN Grid.

  • David B Keator et al.
  • Frontiers in Neuroinformatics
  • 2009

Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an ever-increasing rate. The need to store and exchange data in meaningful ways in support of data analysis, hypothesis testing, and future collaborative use is pervasive. Because trans-disciplinary projects rely on effective use of data from many domains, there is genuine interest in the informatics community in how best to store and combine this data while maintaining a high level of data quality and documentation. The difficulties in sharing and combining raw data become amplified after post-processing and/or data analysis, in which the new dataset of interest is a function of the original data and may have been collected by multiple collaborating sites. Simple meta-data, documenting which subject and version of data were used for a particular analysis, becomes complicated by the heterogeneity of the collecting sites yet is critically important to the interpretation and reuse of derived results. This manuscript will present a case study of using the XML-Based Clinical Experiment Data Exchange (XCEDE) schema and the Human Imaging Database (HID) in the Biomedical Informatics Research Network's (BIRN) distributed environment to document and exchange derived data. The discussion includes an overview of the data structures used in both the XML and the database representations, insight into the design considerations, and the extensibility of the design to support additional analysis streams.


The Northwestern University Neuroimaging Data Archive (NUNDA).

  • Kathryn Alpert et al.
  • NeuroImage
  • 2016

The Northwestern University Neuroimaging Data Archive (NUNDA), an XNAT-powered data archiving system, aims to facilitate secure data storage; centralized data management; automated, standardized data processing; and simple, intuitive data sharing. NUNDA is a federated data archive, wherein individual project owners regulate access to their data. NUNDA supports multiple methods of data import, enabling data collection in a central repository. Data in NUNDA are available by project to any authorized user, allowing coordinated data management and review across sites. With NUNDA pipelines, users capitalize on existing procedures or standardize custom routines for consistent, automated data processing. NUNDA can be integrated with other research databases to simplify data exploration and discovery. And data on NUNDA can be confidently shared for secure collaboration.


Northwestern University Schizophrenia Data and Software Tool (NUSDAST).

  • Lei Wang et al.
  • Frontiers in Neuroinformatics
  • 2013

The schizophrenia research community has invested substantial resources in collecting, managing, and sharing large neuroimaging datasets. As part of this effort, our group has collected high-resolution magnetic resonance (MR) datasets from individuals with schizophrenia, their non-psychotic siblings, healthy controls, and their siblings. This effort has resulted in a growing resource, the Northwestern University Schizophrenia Data and Software Tool (NUSDAST), an NIH-funded data sharing project to stimulate new research. This resource resides on XNAT Central, and it contains neuroimaging (MR scans, landmarks and surface maps for deep subcortical structures, and FreeSurfer cortical parcellation and measurement data), cognitive (cognitive domain scores for crystallized intelligence, working memory, episodic memory, and executive function), clinical (demographic, sibling relationship, SAPS and SANS psychopathology), and genetic (20 polymorphisms) data, collected from more than 450 subjects, most with 2-year longitudinal follow-up. A neuroimaging mapping, analysis, and visualization software tool, CAWorks, is also part of this resource. Moreover, in making our existing neuroimaging data along with the associated meta-data and computational tools publicly accessible, we have established a web-based information retrieval portal that allows the user to efficiently search the collection. This research-ready dataset meaningfully combines neuroimaging data with other relevant information, and it can be used to help facilitate advancing neuroimaging research. It is our hope that this effort will help to overcome some of the commonly recognized technical barriers in advancing neuroimaging research, such as lack of local organization and standard descriptions.


Heritability of fractional anisotropy in human white matter: a comparison of Human Connectome Project and ENIGMA-DTI data.

  • Peter Kochunov et al.
  • NeuroImage
  • 2015

The degree to which genetic factors influence brain connectivity is beginning to be understood. Large-scale efforts are underway to map the profile of genetic effects in various brain regions. The NIH-funded Human Connectome Project (HCP) is providing data valuable for analyzing the degree of genetic influence underlying brain connectivity revealed by state-of-the-art neuroimaging methods. We calculated the heritability of the fractional anisotropy (FA) measure derived from diffusion tensor imaging (DTI) reconstruction in 481 HCP subjects (194/287 M/F) consisting of 57/60 pairs of mono- and dizygotic twins, and 246 siblings. FA measurements were derived using ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) DTI protocols, and heritability estimates were calculated using the SOLAR-Eclipse imaging genetic analysis package. We compared heritability estimates derived from HCP data to those publicly available through the ENIGMA-DTI consortium, which were pooled from five family-based studies across the US, Europe, and Australia. FA measurements from the HCP cohort for eleven major white matter tracts were highly heritable (h² = 0.53-0.90, p < 10⁻⁵), and were significantly correlated with the joint-analytical estimates from the ENIGMA cohort at the tract and voxel-wise levels. The similarity in regional heritability suggests that the additive genetic contribution to white matter microstructure is consistent across populations and imaging acquisition parameters. It also suggests that the overarching genetic influence provides an opportunity to define a common genetic search space for future gene-discovery studies. Uniquely, the measurements of additive genetic contribution performed in this study can be repeated using online genetic analysis tools provided by the HCP ConnectomeDB web application.
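As a toy illustration of the twin-based logic behind such estimates (not the SOLAR-Eclipse variance-components method the study actually used), narrow-sense heritability can be approximated from twin-pair correlations with Falconer's formula, h² ≈ 2(r_MZ − r_DZ); the correlation values below are made up:

```python
# Toy illustration of Falconer's formula: h2 ~= 2 * (r_MZ - r_DZ).
# The correlations below are hypothetical; the study itself used the
# SOLAR-Eclipse variance-components approach, not this shortcut.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Estimate narrow-sense heritability from mono- and dizygotic
    twin-pair correlations of a phenotype (here, tract-wise FA)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical FA correlations for MZ and DZ twin pairs.
h2 = falconer_h2(r_mz=0.80, r_dz=0.45)
print(f"h2 = {h2:.2f}")  # 0.70, within the 0.53-0.90 range reported above
```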


Federated learning enables big data for rare cancer boundary detection.

  • Sarthak Pati et al.
  • Nature Communications
  • 2022

Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability is concerning. This is currently addressed by sharing multi-site data, but such centralization is challenging/infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML, by only sharing numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the FL effectiveness at such scale and task-complexity as a paradigm shift for multi-site collaborations, alleviating the need for data-sharing.
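The core idea, sharing only numerical model updates while raw patient data never leaves a site, can be sketched as size-weighted federated averaging. This is a minimal FedAvg-style illustration with made-up parameter vectors, not the study's consensus-model pipeline:

```python
import numpy as np

def federated_average(site_params, site_sizes):
    """Aggregate per-site model parameters, weighted by local dataset
    size. Only these numeric arrays leave each site; scans never do."""
    total = sum(site_sizes)
    return sum(p * (n / total) for p, n in zip(site_params, site_sizes))

# Hypothetical parameter vectors from three collaborating sites.
updates = [np.array([0.2, 0.4]), np.array([0.3, 0.1]), np.array([0.25, 0.3])]
sizes = [100, 300, 100]  # local case counts per site
global_params = federated_average(updates, sizes)
print(global_params)  # weighted combination of the three updates
```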


Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training.

  • Siddhesh Thakur et al.
  • NeuroImage
  • 2020

Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
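One plausible reading of training that "can be applied using any available modality" (the paper's exact sampling scheme may differ) is to feed the network a single, randomly chosen modality per training example, so no fixed channel is ever required at inference. A minimal sketch, with hypothetical file names:

```python
import random

# Typical mpMRI channel names (an assumption, not taken from the paper).
MODALITIES = ["t1", "t1ce", "t2", "flair"]

def pick_training_input(scan: dict) -> tuple:
    """Choose one available modality at random for this training step,
    so the model never learns to depend on any particular channel."""
    available = [m for m in MODALITIES if m in scan]
    m = random.choice(available)
    return m, scan[m]

# Hypothetical subject with only two of the four modalities acquired.
subject = {"t1": "sub-01_t1.nii.gz", "flair": "sub-01_flair.nii.gz"}
modality, volume = pick_training_input(subject)
print(modality, volume)
```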


Cerebrospinal fluid tau and ptau181 increase with cortical amyloid deposition in cognitively normal individuals: implications for future clinical trials of Alzheimer's disease.

  • Anne M Fagan et al.
  • EMBO Molecular Medicine
  • 2009

Alzheimer's disease (AD) pathology is estimated to develop many years before detectable cognitive decline. Fluid and imaging biomarkers may identify people in early symptomatic and even preclinical stages, possibly when potential treatments can best preserve cognitive function. We previously reported that cerebrospinal fluid (CSF) levels of amyloid-β42 (Aβ42) serve as an excellent marker for brain amyloid as detected by the amyloid tracer, Pittsburgh compound B (PIB). Using data from 189 cognitively normal participants, we now report a positive linear relationship between CSF tau/ptau181 (primary constituents of neurofibrillary tangles) and the amount of cortical amyloid. We observe a strong inverse relationship of cortical PIB binding with CSF Aβ42 but not with plasma Aβ species. Some individuals have low CSF Aβ42 but no cortical PIB binding. Together, these data suggest that changes in brain Aβ42 metabolism and amyloid formation are early pathogenic events in AD, and that significant disruptions in CSF tau metabolism likely occur after Aβ42 initially aggregates and increase as amyloid accumulates. These findings have important implications for preclinical AD diagnosis and treatment.


A unified approach for morphometric and functional data analysis in young, old, and demented adults using automated atlas-based head size normalization: reliability and validation against manual measurement of total intracranial volume.

  • Randy L Buckner et al.
  • NeuroImage
  • 2004

Atlas normalization, as commonly used by functional data analysis, provides an automated solution to the widely encountered problem of correcting for head size variation in regional and whole-brain morphometric analyses, so long as an age- and population-appropriate target atlas is used. In the present article, we develop and validate an atlas normalization procedure for head size correction using manual total intracranial volume (TIV) measurement as a reference. The target image used for atlas transformation consisted of a merged young- and old-adult template specifically created for cross age-span normalization. Automated atlas transformation generated the Atlas Scaling Factor (ASF), defined as the volume-scaling factor required to match each individual to the atlas target. Because atlas normalization equates head size, the ASF should be proportional to TIV. A validation analysis was performed on 147 subjects to evaluate ASF as a proxy for manual TIV measurement. In addition, 19 subjects were imaged on multiple days to assess test-retest reliability. Results indicated that the ASF was (1) equivalent to manual TIV normalization (r = 0.93), (2) reliable across multiple imaging sessions (r = 1.00; mean absolute percentage of difference = 0.51%), (3) able to correct for between-gender head size differences, and (4) minimally biased in demented older adults with marked atrophy. Hippocampal volume differences between nondemented (n = 49) and demented (n = 50) older adults (measured manually) were equivalent whether corrected using manual TIV or automated ASF (effect sizes of 1.29 and 1.46, respectively). To provide normative values, ASF was used to automatically derive estimated TIV (eTIV) in 335 subjects aged 15-96, including both clinically characterized nondemented (n = 77) and demented (n = 90) older adults. Differences in eTIV between nondemented and demented groups were negligible, thus failing to support the hypothesis that large premorbid brain size moderates Alzheimer's disease. Gender was the only robust factor that influenced eTIV. Men showed an approximately 12% larger eTIV than women. These results demonstrate that atlas normalization using appropriate template images provides a robust, automated method for head size correction that is equivalent to manual TIV correction in studies of aging and dementia. Thus, atlas normalization provides a common framework for both morphometric and functional data analysis.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search there. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org then you can log in from here to get additional features in FDI Lab - SciCrunch.org such as Collections, Saved Searches, and managing Resources.

  4. Searching

Here is the search box where your query is executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly.
    2. Combine terms with AND and OR to change how we search between words.
    3. Prefix a term with "-" to make sure no results return with that term in them (e.g., Cerebellum -CA1).
    4. Prefix a term with "+" to require that it be present in the data.
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search.
  5. Save Your Search

You can save any search you perform here for quick access later.

  6. Query Expansion

We recognized your search term and included synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
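The operators described in step 4 compose into plain query strings. A hypothetical helper (illustrative only, not part of any SciCrunch API) might assemble them like this:

```python
def build_query(phrase=None, require=(), exclude=(), any_of=()):
    """Assemble a search string using the operators described above:
    quoted phrases for exact matches, '+' to require a term, '-' to
    exclude one, and OR between alternatives."""
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')
    parts += [f"+{t}" for t in require]
    parts += [f"-{t}" for t in exclude]
    if any_of:
        parts.append(" OR ".join(any_of))
    return " ".join(parts)

q = build_query(phrase="white matter", require=["DTI"], exclude=["CA1"])
print(q)  # "white matter" +DTI -CA1
```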

Publications Per Year
(interactive chart of paper counts by year omitted)