Reproducibility of data analysis workflows is a key issue in the field of bioinformatics. Recent computing technologies, such as virtualization, have made it possible to reproduce workflow execution with ease. However, the reproducibility of results is not well discussed; that is, there is no standard way to verify whether the biological interpretation of reproduced results is the same. Automatically evaluating the reproducibility of results therefore remains a challenge.
Using the longitudinal Framingham Heart Study data on blood pressure, we analyzed the reproducibility of linkage measures from serial cross-sectional surveys of a defined population by performing genome-wide model-free linkage analyses of systolic blood pressure (SBP) and history of hypertension (HTN) measured at five separate time points.
Epidermal growth factor receptor (EGFR) gene copy number evaluated by fluorescence in situ hybridisation (FISH) can identify, among KRAS wild-type patients with metastatic colorectal cancer, those with better outcomes on EGFR-targeted therapy, further enhancing patient selection. Nevertheless, enumeration of gene copies is challenging, and the lack of analytical standardisation has limited incorporation of the test into clinical practice. We therefore assessed EGFR FISH interlaboratory consensus among five molecular diagnostic reference centres.
We determined the repeatability and reproducibility of magnetization transfer magnetic resonance imaging of the breast, and the ability of this technique to assess the response of locally advanced breast cancer to neoadjuvant therapy (NAT). Reproducibility scans at 3 different 3 T scanners, including 2 scanners in community imaging centers, found a 16.3% difference (n = 3) in magnetization transfer ratio (MTR) in healthy breast fibroglandular tissue. Repeatability scans (n = 10) found a difference of ∼8.1% in the MTR measurement of fibroglandular tissue between the 2 measurements. Thus, MTR is repeatable and reproducible in the breast and can be integrated into community imaging clinics. Serial magnetization transfer magnetic resonance imaging performed at longitudinal time points during NAT indicated no significant change in average tumoral MTR during treatment. However, histogram analysis indicated an increase in the dispersion of MTR values of the tumor during NAT, as quantified by higher standard deviation (P = .005), higher full width at half maximum (P = .02), and lower kurtosis (P = .02). Stratification of patients into those with pathological complete response (pCR; n = 6) at the conclusion of NAT and those with residual disease (n = 9) showed wider distribution of tumor MTR values in patients who achieved pCR after 2-4 cycles of NAT, as quantified by higher standard deviation (P = .02), higher full width at half maximum (P = .03), and lower kurtosis (P = .03). Thus, MTR can be used as an imaging metric to assess response to breast NAT.
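The histogram metrics in the abstract above (standard deviation, full width at half maximum, and kurtosis of the tumoral MTR distribution) are straightforward to compute. A minimal sketch in Python, assuming MTR values have already been extracted from the tumor region of interest (the function and parameter names are illustrative, not the study's code):

```python
import numpy as np
from scipy.stats import kurtosis

def mtr_histogram_metrics(mtr_values, bins=64):
    """Summarize the spread of MTR values within a region of interest.
    Illustrative sketch of the histogram analysis described above."""
    mtr_values = np.asarray(mtr_values, dtype=float)
    counts, edges = np.histogram(mtr_values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Full width at half maximum, estimated from the histogram peak.
    half_max = counts.max() / 2.0
    above = centers[counts >= half_max]
    fwhm = float(above.max() - above.min()) if above.size else 0.0
    return {
        "std": float(mtr_values.std(ddof=1)),
        "fwhm": fwhm,
        "kurtosis": float(kurtosis(mtr_values)),  # lower => flatter, wider distribution
    }
```

A wider, flatter MTR distribution (higher std and FWHM, lower kurtosis) is what the study associates with response to NAT.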
Continuous glucose monitoring (CGM) systems are a very useful tool for understanding the behaviour of glucose in different situations and populations. Despite the widespread use of CGM systems in both clinical practice and research, our understanding of the reproducibility of CGM data remains limited. The present work examines the reproducibility of the results provided by a CGM system in a random sample of a free-living adult population, from a functional data analysis approach. Functional intraclass correlation coefficients (ICCs) and their 95% confidence intervals (CIs) were calculated to assess the reproducibility of CGM results. In total, 581 participants (62% women; mean age 48 years, range 18-87) were included, 12% of whom had previously been diagnosed with diabetes. The inter-day reproducibility of the CGM results was greater for subjects with diabetes (ICC 0.46 [CI 0.39-0.55]) than for normoglycaemic subjects (ICC 0.30 [CI 0.27-0.33]); the value for prediabetic subjects was intermediate (ICC 0.37 [CI 0.31-0.42]). For normoglycaemic subjects, inter-day reproducibility was poorer among the younger (ICC 0.26 [CI 0.21-0.30]) than the older subjects (ICC 0.39 [CI 0.32-0.45]). Inter-day reproducibility was poorest among normoglycaemic subjects, especially younger normoglycaemic subjects, suggesting the need to monitor some patient groups more often than others.
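The functional ICCs above operate on full glucose curves, which requires functional data analysis machinery, but the underlying idea mirrors the classical one-way random-effects ICC. A simplified sketch for per-day summary values, assuming each subject contributes the same number of monitored days (this is not the study's actual estimator):

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1) for an (n_subjects, k_days) array of
    per-day glucose summaries. High values mean days within a subject agree
    more than subjects agree with each other."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    subject_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Between- and within-subject mean squares from one-way ANOVA.
    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Large stable between-subject differences drive the ICC toward 1; day-to-day fluctuation within subjects drives it toward 0, matching the inter-day reproducibility interpretation above.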
Proactive identification of chemicals with skin sensitizing properties is a key toxicological endpoint within chemical safety assessment, as required by legislation for registration of chemicals. To meet demands for improved animal welfare and to increase testing efficiency, also in nonregulatory settings, considerable efforts have been made to develop nonanimal approaches to replace current animal testing. Genomic Allergen Rapid Detection (GARD™) is a state-of-the-art technology platform, the most advanced application of which is the assay for assessment of skin sensitizing chemicals, GARD™skin. The methodology is based on a dendritic cell (DC)-like cell line, thus mimicking the mechanistic events leading to initiation and modulation of downstream immunological responses. Induced transcriptional changes are measured following exposure to test chemicals, providing a detailed evaluation of cell activation. These changes are associated with the immunological decision-making role of DCs in vivo and include, among other phenotypic modifications, up-regulation of co-stimulatory molecules and induction of cellular and oxidative stress pathways and xenobiotic responses, providing a holistic readout of substance-induced DC activation. Here, results from an inter-laboratory ring trial of GARD™skin, conducted in compliance with OECD guidance documents and comprising a blinded chemical test set of 28 chemicals, are summarized. The assay was found to be transferable to naïve laboratories, with an inter-laboratory reproducibility of 92.0%. The within-laboratory reproducibility ranged between 82.1% and 88.9%, whereas the cumulative predictive accuracy across the 3 laboratories was 93.8%. It was concluded that GARD™skin is a robust and reliable method for the identification of skin sensitizing chemicals and suitable for stand-alone use or as a constituent of integrated testing. These data form the basis for the regulatory validation of GARD™skin.
Repeated runs of the same program can generate different molecular phylogenies from identical data sets under the same analytical conditions. This lack of reproducibility of inferred phylogenies casts a long shadow on downstream research employing these phylogenies in areas such as comparative genomics, systematics, and functional biology. We have assessed the relative accuracies and log-likelihoods of alternative phylogenies generated for computer-simulated and empirical data sets. Our findings indicate that these alternative phylogenies reconstruct evolutionary relationships with comparable accuracy. They also have similar log-likelihoods that are not inferior to the log-likelihoods of the true tree. We determined that the direct relationship between irreproducibility and inaccuracy is due to their common dependence on the amount of phylogenetic information in the data. While computational reproducibility can be enhanced through more extensive heuristic searches for the maximum likelihood tree, this does not lead to higher accuracy. We conclude that computational irreproducibility plays a minor role in molecular phylogenetics.
Proactive identification and characterization of hazards attributable to chemicals are central aspects of risk assessments. Current legislation and trends in predictive toxicology advocate a transition from in vivo methods to nonanimal alternatives. For skin sensitization assessment, several OECD validated alternatives exist for hazard identification, but nonanimal methods capable of accurately characterizing the risks associated with sensitizing potency are still lacking. The GARD (Genomic Allergen Rapid Detection) platform utilizes exposure-induced gene expression profiles of a dendritic-like cell line in combination with machine learning to provide hazard classifications for different immunotoxicity endpoints. Recently, a novel genomic biomarker signature displaying promising potency-associated discrimination between weak and strong skin sensitizers was proposed. Here, we present the adaptation of the defined biomarker signature on a gene expression analysis platform suited for routine acquisition, confirm the validity of the proposed biomarkers, and define the GARDpotency assay for prediction of skin sensitizer potency. The performance of GARDpotency was validated in a blinded ring trial, in accordance with OECD guidance documents. The cumulative accuracy was estimated at 88.0% across 3 laboratories and 9 independent experiments. The within-laboratory reproducibility measures ranged between 62.5% and 88.9%, and the between-laboratory reproducibility was estimated at 61.1%. Currently, no direct or systematic cause for the observed inconsistencies between the laboratories has been identified. Further investigations into the sources of introduced variability will potentially allow for increased reproducibility. In conclusion, the in vitro GARDpotency assay constitutes a step forward for development of nonanimal alternatives for hazard characterization of skin sensitizers.
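Reproducibility figures like those reported in the two ring trials above can be read as the proportion of test chemicals receiving concordant classifications. A hedged sketch of one plausible between-laboratory measure (the validation studies may define it differently, e.g. per OECD performance standards):

```python
import numpy as np

def between_lab_reproducibility(calls):
    """Fraction of test chemicals given the same classification by all
    laboratories. `calls` is an (n_chemicals, n_labs) array of class labels.
    One plausible reading of the reproducibility measures quoted above."""
    calls = np.asarray(calls)
    # A chemical counts as concordant if every lab matches the first lab's call.
    agree = (calls == calls[:, [0]]).all(axis=1)
    return float(agree.mean())
```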
With the development of novel assay technologies, biomedical experiments and analyses have gone through substantial evolution. Today, a typical experiment can simultaneously measure hundreds to thousands of individual features (e.g. genes) in dozens of biological conditions, resulting in gigabytes of data that need to be processed and analyzed. Because of the multiple steps involved in the data generation and analysis and the lack of details provided, it can be difficult for independent researchers to reproduce a published study. With the recent outrage following the halt of a cancer clinical trial due to the lack of reproducibility of the published study, researchers are now facing heavy pressure to ensure that their results are reproducible. Despite the global demand, too many published studies remain non-reproducible, mainly due to the lack of availability of experimental protocols, data and/or computer code. Scientific discovery is an iterative process, where a published study generates new knowledge and data, resulting in new follow-up studies or clinical trials based on these results. As such, it is important for the results of a study to be quickly confirmed or discarded to avoid wasting time and money on novel projects. The availability of high-quality, reproducible data will also lead to more powerful analyses (or meta-analyses) where multiple data sets are combined to generate new knowledge. In this article, we review some of the recent developments regarding biomedical reproducibility and comparability and discuss some of the areas where the overall field could be improved.
Dynamic MR biomarkers (T2*-weighted or susceptibility-based and T1-weighted or relaxivity-enhanced) have been applied to assess tumor perfusion and its response to therapies. A significant challenge in the development of reliable biomarkers is a rigorous assessment and optimization of reproducibility. The purpose of this study was to determine the measurement reproducibility of T1-weighted dynamic contrast-enhanced (DCE)-MRI and T2*-weighted dynamic susceptibility contrast (DSC)-MRI with two contrast agents (CA) of different molecular weight (MW): gadopentetate (Gd-DTPA, 0.5 kDa) and Gadomelitol (P792, 6.5 kDa). Each contrast agent was tested with eight mice that had subcutaneous MDA-MB-231 breast xenograft tumors. Each mouse was imaged with a combined DSC-DCE protocol three times within one week to achieve measures of reproducibility. DSC-MRI results were evaluated with a contrast to noise ratio (CNR) efficiency threshold. There was a clear signal drop (>95% probability threshold) in the DSC of normal tissue, while signal changes were minimal or non-existent (<95% probability threshold) in tumors. Mean within-subject coefficient of variation (wCV) of relative blood volume (rBV) in normal tissue was 11.78% for Gd-DTPA and 6.64% for P792. The intra-class correlation coefficient (ICC) of rBV in normal tissue was 0.940 for Gd-DTPA and 0.978 for P792. The inter-subject correlation coefficient was 0.092. Calculated K(trans) from DCE-MRI showed comparable reproducibility (mean wCV, 5.13% for Gd-DTPA, 8.06% for P792). ICC of K(trans) showed high intra-subject reproducibility (ICC = 0.999/0.995) and inter-subject heterogeneity (ICC = 0.774). Histograms of K(trans) distributions for three measurements had high degrees of overlap (sum of difference of the normalized histograms <0.01). 
These results reflect homogeneous intra-subject measurements and the heterogeneous inter-subject character of a biological population, suggesting that perfusion MRI could serve as an imaging biomarker to monitor or predict disease response.
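The within-subject coefficient of variation (wCV) reported above can be estimated from the repeated scans of each mouse. A minimal sketch using the common root-mean-square formulation (the study may have used a variant):

```python
import numpy as np

def within_subject_cv(measurements):
    """Within-subject coefficient of variation (wCV) for repeated perfusion
    measurements: rows = subjects, columns = the repeated scans.
    Returned as a fraction; multiply by 100 for percent."""
    m = np.asarray(measurements, dtype=float)
    subj_means = m.mean(axis=1)
    subj_vars = m.var(axis=1, ddof=1)
    # Root mean square of the per-subject variance-to-mean-squared ratios.
    return float(np.sqrt(np.mean(subj_vars / subj_means**2)))
```

A low wCV across the three repeated scans of each mouse is what supports the claim of homogeneous intra-subject measurement.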
Repeatability of study setups and reproducibility of research results by underlying data are major requirements in science. Until now, abstract models for describing the structural logic of studies in environmental sciences have been lacking, and tools for data management are insufficient. Mandatory for repeatability and reproducibility is the use of sophisticated data management solutions that go beyond data file sharing. In particular, this implies the maintenance of coherent data along workflows. Design data concern elements from the elementary operation domains of transformation, measurement and transaction. Operation design elements and method information are specified for each consecutive workflow segment from field to laboratory campaigns. The strict linkage of operation design element values, operation values and objects is essential. For enabling coherence of corresponding objects along consecutive workflow segments, the assignment of unique identifiers and the specification of their relations are mandatory. The abstract model presented here addresses these aspects, and the software DiversityDescriptions (DWB-DD) facilitates the management of digital data objects and structures connected in this way. DWB-DD allows for an individual specification of operation design elements and their linking to objects. Two workflow design use cases, one for DNA barcoding and another for cultivation of fungal isolates, are given. To publish those structured data, standard schema mapping and XML-provision of digital objects are essential. Schemas useful for this mapping include the Ecological Metadata Language, the Schema for Meta-omics Data of Collection Objects and the Standard for Structured Descriptive Data. Data pipelines with DWB-DD include the mapping and conversion between schemas and functions for data publishing and archiving according to the Open Archival Information System standard.
The setting allows for repeatability of study setups, reproducibility of study results and for supporting work groups to structure and maintain their data from the beginning of a study. The theory of 'FAIR++' digital objects is introduced.
Reproducibility of scientific results is a key element of science and credibility. The lack of reproducibility across many scientific fields has emerged as an important concern. In this piece, we assess mathematical model reproducibility and propose a scorecard for improving reproducibility in this field.
Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Out of 100 papers we initially surveyed, fourteen were excluded because they did not present the common types of quantitative result from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases for which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons included incomprehensible or absent variable labels, a DFA performed on an unspecified subset of the data, or an incomplete dataset. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets.
While our results show that a majority of studies are reproducible, they highlight the fact that many studies are still not the carefully curated research that the scientific community and the public expect.
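The three summary statistics targeted in the study above map directly onto the outputs of a standard linear discriminant analysis. A sketch using scikit-learn on the classic iris data as a stand-in dataset (illustrative only; the original papers used various statistics packages):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Fit a DFA/LDA and recompute the three summary statistics checked above.
X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(solver="eigen").fit(X, y)

pct_variance_df1 = 100 * lda.explained_variance_ratio_[0]  # percent variance explained by DF1
pct_correct = 100 * lda.score(X, y)                        # percentage of cases correctly assigned
largest_coef = np.abs(lda.scalings_[:, 0]).max()           # largest DF1 coefficient (unstandardized)
```

Because scaling conventions for discriminant coefficients differ between packages, the coefficient is the hardest of the three to match, consistent with the lower success rate reported above.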
An External Quality Assessment (EQA) program was developed to investigate the status of estrogen receptor (ER), progesterone receptor (PR), and Ki-67 immunohistochemical (IHC) detection in breast cancer and to evaluate the reproducibility of staining and interpretation in 44 pathology laboratories in China.
We describe a project-based introduction to reproducible and collaborative neuroimaging analysis. Traditional teaching on neuroimaging usually consists of a series of lectures that emphasize the big picture rather than the foundations on which the techniques are based. The lectures are often paired with practical workshops in which students run imaging analyses using the graphical interface of specific neuroimaging software packages. Our experience suggests that this combination leaves the student with a superficial understanding of the underlying ideas, and an informal, inefficient, and inaccurate approach to analysis. To address these problems, we based our course around a substantial open-ended group project. This allowed us to teach: (a) computational tools to ensure computationally reproducible work, such as the Unix command line, structured code, version control, automated testing, and code review and (b) a clear understanding of the statistical techniques used for a basic analysis of a single run in an MR scanner. The emphasis we put on the group project showed the importance of standard computational tools for accuracy, efficiency, and collaboration. The projects were broadly successful in engaging students in working reproducibly on real scientific questions. We propose that a course on this model should be the foundation for future programs in neuroimaging. We believe it will also serve as a model for teaching efficient and reproducible research in other fields of computational science.
Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.
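The Dice coefficients quoted above compare pairs of segmentations obtained on different operating systems. A minimal sketch for binary masks (for multi-label subcortical classifications, the coefficient would be computed per structure):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means identical masks, 0.0 means disjoint."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Values near 0.9 indicate close but not identical classifications; the 0.59 observed above signals substantial divergence between operating systems.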
Cerebrovascular reactivity (CVR) is defined as the ratio of the cerebral blood flow (CBF) response to an increase in a vasoactive stimulus. We used changes in blood oxygenation level-dependent (BOLD) MRI as surrogates for changes of CBF, and standardized quantitative changes in arterial partial pressure of carbon dioxide as the stimulus. Despite uniform stimulus and test conditions, differences in voxel-wise BOLD changes between testing sites may remain, attributable to physiologic and machine variability. We generated a reference atlas of normal CVR metrics (voxel-wise mean and SD) for each of two sites. We hypothesized that there would be no significant differences in CVR between the two atlases, enabling each atlas to be used at either site. A total of 69 healthy subjects were tested to create site-specific atlases, with 20 of those individuals tested at both sites. A total of 38 subjects were scanned at Site 1 (17F, 37.5 ± 16.8 y) and 51 subjects were tested at Site 2 (22F, 40.9 ± 17.4 y). MRI platforms were: Site 1, 3T Magnetom Skyra Siemens scanner with 20-channel head and neck coil; and Site 2, 3T HDx Signa GE scanner with 8-channel head coil. To construct the atlases, test results of individual subjects were co-registered into a standard space and voxel-wise mean and SD CVR metrics were calculated. Map comparisons of z scores found no significant differences between white matter or gray matter in the 20 subjects scanned at both sites when analyzed with either atlas. We conclude that individual CVR testing and atlas generation are compatible across sites provided that standardized respiratory stimuli and BOLD MRI scan parameters are used. This enables the use of a single atlas to score the normality of CVR metrics across multiple sites.
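Scoring a subject against the site atlas described above reduces to a voxel-wise z score using the atlas mean and SD maps. A minimal sketch (the names are assumptions, not the authors' code):

```python
import numpy as np

def cvr_z_map(subject_cvr, atlas_mean, atlas_sd):
    """Voxel-wise z scores of a subject's CVR map against a reference atlas
    (mean and SD maps), as used above to score the normality of CVR metrics.
    All three inputs are arrays of the same shape in a common standard space."""
    subject_cvr = np.asarray(subject_cvr, dtype=float)
    return (subject_cvr - np.asarray(atlas_mean)) / np.asarray(atlas_sd)
```

Voxels with large |z| flag CVR values outside the normal range defined by the atlas, which is why cross-site agreement of the atlases matters.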
The aim of the study was to investigate the reproducibility of the peritoneal equilibration test (PET) for evaluating changes in peritoneal function in a rat model of peritoneal dialysis. A PET with 4.25% Dianeal was performed twice within 48 hours under similar conditions after catheter insertion (n = 10). No significant differences were found between PET 1 and PET 2 in the D/P ratios for urea nitrogen, creatinine, and total protein, or in the D/D0 ratio for glucose. The results for PET 1 and PET 2 showed highly significant correlations. This study indicates that, when carried out under similar conditions, the PET is a highly reproducible test and can be used to evaluate the function of the peritoneum during peritoneal dialysis in rats.