Software is as integral to the full understanding and dissemination of research as a research paper, monograph, or dataset. This article provides broadly applicable guidance on software citation for the communities and institutions publishing academic journals and conference proceedings. We expect those communities and institutions to produce versions of this document with software examples and citation styles appropriate for their intended audiences. This article (and those community-specific versions) is aimed at authors citing software, including software developed by the authors or by others. We also include brief instructions on how software can be made citable, directing readers to more comprehensive guidance published elsewhere. The guidance presented here supports proper attribution and credit, reproducibility, collaboration and reuse, and encourages building on the work of others to further research.
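One established way to make software citable, mentioned in passing above, is to ship machine-readable citation metadata with the code. A minimal sketch in the Citation File Format (CFF); all values here are hypothetical placeholders, not from any real project:

```yaml
# CITATION.cff — machine-readable citation metadata (hypothetical values)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "ExampleTool"
version: "1.0.0"
date-released: "2024-01-15"
doi: "10.5281/zenodo.0000000"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```

Hosting platforms and reference managers that understand CFF can then generate a formatted citation for the software automatically.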
Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the inherent challenges and the overall status of available software for bioimage informatics, focusing on open-source options.
Since its start in 1998, Software Carpentry has evolved from a week-long training course at the US national laboratories into a worldwide volunteer effort to improve researchers' computing skills. This paper explains what we have learned along the way, the challenges we now face, and our plans for the future.
The vast majority of studies into visual processing are conducted using computer display technology. The current paper describes a new free suite of software tools designed to make this task easier, using the latest advances in hardware and software. PsychoPy is a platform-independent experimental control system written in the Python interpreted language using entirely free libraries. PsychoPy scripts are designed to be extremely easy to read and write, while retaining complete power for the user to customize the stimuli and environment. Tools are provided within the package to allow everything from stimulus presentation and response collection (from a wide range of devices) to simple data analysis such as psychometric function fitting. Most importantly, PsychoPy is highly extensible and the whole system can evolve via user contributions. If a user wants to add support for a particular stimulus, analysis or hardware device they can look at the code for existing examples, modify them and submit the modifications back into the package so that the whole community benefits.
The small program Illustrate generates non-photorealistic images of biological molecules for use in dissemination, outreach, and education. The method has been used as part of the "Molecule of the Month," an ongoing educational column at the RCSB Protein Data Bank (http://rcsb.org). Insights from 20 years of application of the program are presented, and the program has been released both as open-source Fortran at GitHub and through an interactive web-based interface.
This paper provides a brief overview of software currently available for the genetic analysis of quantitative traits in humans. Programs that implement variance components, Markov Chain Monte Carlo (MCMC), Haseman-Elston (H-E) and penetrance model-based linkage analyses are discussed, as are programs for measured genotype association analyses and quantitative trait transmission disequilibrium tests. The software compared includes LINKAGE, FASTLINK, PAP, SOLAR, SEGPATH, ACT, Mx, MERLIN, GENEHUNTER, Loki, Mendel, SAGE, QTDT and FBAT. Where possible, the paper provides URLs for acquiring these programs through the internet, details of the platforms for which the software is available and the types of analyses performed.
Next-generation sequencing projects have underappreciated information management tasks that require detailed attention to the specimen curation, nucleic acid sample preparation and sequence production methods needed for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.
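Interfacing metadata management with executables such as Bowtie2 usually means assembling a command line from curated sample records. A minimal sketch of that idea, using standard Bowtie2 flags; the function name and file paths are hypothetical, not part of the OMMS API:

```python
import subprocess

def build_bowtie2_command(index_prefix, reads_fastq, out_sam, threads=4):
    """Assemble a Bowtie2 invocation from curated sample metadata.

    Uses standard Bowtie2 flags: -x (index prefix), -U (unpaired reads),
    -S (SAM output), -p (alignment threads).
    """
    return [
        "bowtie2",
        "-x", index_prefix,
        "-U", reads_fastq,
        "-S", out_sam,
        "-p", str(threads),
    ]

cmd = build_bowtie2_command("refs/genome", "sample_001.fastq", "sample_001.sam")
# A pipeline layer could then execute it, e.g. subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Keeping the command builder separate from execution makes the mapping from metadata to invocation easy to test without running the aligner itself.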
Caret software is widely used for analyzing and visualizing many types of fMRI data, often in conjunction with experimental data from other modalities. This article places Caret's development in a historical context that spans three decades of brain mapping, from the early days of manually generated flat maps to the nascent field of human connectomics. It also highlights some of Caret's distinctive capabilities. These include the ease of visualizing data on surfaces and/or volumes, and on atlases as well as individual subjects. Caret can display many types of experimental data using various combinations of overlays (e.g., fMRI activation maps, cortical parcellations, areal boundaries), and it has other features that facilitate the analysis and visualization of complex neuroimaging datasets.
Genotype imputation for single nucleotide polymorphisms (SNPs) has been shown to be a powerful means to include genetic markers in exploratory genetic association studies without having to genotype them, and is becoming a standard procedure. A number of different software programs are available. In our experience, user-friendliness is often the deciding factor in the choice of software to solve a particular task. We therefore evaluated the usability of three publicly available imputation programs: BEAGLE, IMPUTE and MACH. We found all three programs to perform well with HapMap reference data, with little effort needed for data preparation and subsequent association analysis. Each of them has different strengths and weaknesses, however, and none is optimal for all situations.
Research software is often developed with expedience as a core development objective because experimental results, but not the software, are specified and resourced as a project output. While such code can help find answers to specific research questions, it may lack longevity and flexibility to make it reusable. We reimplemented BoneJ, our software for skeletal biology image analysis, to address design limitations that put it at risk of becoming unusable. We improved the quality of BoneJ code by following contemporary best programming practices. These include separation of concerns, dependency management, thorough testing, continuous integration and deployment, source code management, code reviews, issue and task ticketing, and user and developer documentation. The resulting BoneJ2 represents a generational shift in development technology and integrates with the ImageJ2 plugin ecosystem.
High-quality clinical research is dependent on adequate design, methodology, and data collection. The utilization of electronic data capture (EDC) systems is recommended to optimize research data through proper management. This paper's objective is to present the procedures of REDCap (Research Electronic Data Capture), which supports research development, and to promote the utilization of this software among the scientific community.
Computational biology provides software tools for testing and making inferences about biological data. In the face of increasing volumes of data, heuristic methods that trade software speed for accuracy may be employed. We have studied these trade-offs using the results of a large number of independent software benchmarks, and evaluated whether external factors, including speed, author reputation, journal impact, recency and developer efforts, are indicative of accurate software.
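A natural way to test whether an external factor such as speed is indicative of accuracy is a rank correlation across per-tool benchmark results. A minimal sketch with made-up numbers (the data are illustrative, not from the study):

```python
def rank(values):
    """1-based average ranks; ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative per-tool benchmarks: runtime (s) versus accuracy score.
runtimes = [12.0, 45.0, 3.0, 160.0, 30.0]
accuracy = [0.81, 0.90, 0.70, 0.93, 0.85]
rho = spearman(runtimes, accuracy)
```

In this toy data slower tools are uniformly more accurate (rho = 1); in real benchmark compilations the interesting question is how far the correlation departs from that.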
The precision of Hologic Apex v2.0 analysis software is significantly improved from Hologic Delphi v11.2 software and is comparable to GE Lunar Prodigy v7.5 software. Apex and Delphi precisions were, respectively, 1.0% vs. 1.2% (L1-L4 spine), 1.1% vs. 1.3% (total femur), 1.6% vs. 1.9% (femoral neck), and 0.7% vs. 0.9% (dual total femur).
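Precision values like these are typically used to derive the least significant change (LSC), the smallest change in a patient's measurement that exceeds measurement error at 95% confidence, via the standard relation LSC = 2.77 × precision error. A quick check, assuming the Apex figures quoted above:

```python
def lsc_95(precision_pct):
    """Least significant change (95% confidence) from a precision CV%.
    Standard densitometry relation: LSC = 2.77 x precision error."""
    return 2.77 * precision_pct

# Hologic Apex v2.0 precision (CV%) per site, from the text above.
apex = {"L1-L4 spine": 1.0, "total femur": 1.1,
        "femoral neck": 1.6, "dual total femur": 0.7}
lsc = {site: round(lsc_95(cv), 2) for site, cv in apex.items()}
# A spine BMD change must exceed about 2.77% to be considered significant.
```

The 2.77 factor is 1.96 × √2, accounting for the error in both the baseline and follow-up measurements.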
Elimination of cancer cells by T cells is a critical mechanism of anti-tumor immunity and cancer immunotherapy response. T cells recognize cancer cells by engagement of T cell receptors with peptide epitopes presented by major histocompatibility complex molecules on the cancer cell surface. Peptide epitopes can be derived from antigen proteins coded for by multiple genomic sources. Bioinformatics tools used to identify tumor-specific epitopes via analysis of DNA and RNA-sequencing data have largely focused on epitopes derived from somatic variants, though a smaller number have evaluated potential antigens from other genomic sources.
Most bioinformatics tools available today were not written by professional software developers, but by people who wanted to solve their own problems using computational solutions, spending the minimum time and effort possible, since these were just the means to an end. Consequently, a vast number of software applications are currently available, hindering the task of identifying the utility and quality of each. At the same time, this situation has hindered regular adoption of these tools in clinical practice. Typically, they are not sufficiently developed to be used by most clinical researchers and practitioners. To address these issues, it is necessary to re-think how biomedical applications are built and to adopt new strategies that ensure quality, efficiency, robustness, correctness and reusability of software components. We also need to engage end-users during the development process to ensure that applications fit their needs. In this review, we present a set of guidelines to support biomedical software development, with an explanation of how they can be implemented and what kind of open-source tools can be used for each specific topic.
Woods and Himle developed a standardized tic suppression paradigm (TSP) for the experimental setting, to quantify the effects of intentional tic suppression in Tourette syndrome. The present article describes a Java program that automates record keeping and reward dispensing during the several experimental conditions of the TSP. The software can optionally be connected to a commercial reward token dispenser to further automate reward delivery to the participant. The timing of all tics, 10-second tic-free intervals, and dispensed rewards is recorded in plain text files for later analysis. Expected applications include research on Tourette syndrome and related disorders.
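The reward logic described above, one token per 10-second tic-free interval, with the interval timer restarting after every tic, can be sketched as follows. This is an illustrative reconstruction in Python, not the actual Java implementation, and the function and its inputs are hypothetical:

```python
def reward_times(tic_times, session_length, interval=10.0):
    """Reward timestamps: one per `interval` seconds without a tic.
    The interval timer restarts after every tic and after every reward."""
    tics = sorted(tic_times)
    rewards = []
    next_mark = interval  # earliest possible reward time
    ti = 0
    while next_mark <= session_length:
        if ti < len(tics) and tics[ti] < next_mark:
            # A tic occurred before the mark: restart the tic-free window.
            next_mark = tics[ti] + interval
            ti += 1
        else:
            rewards.append(next_mark)
            next_mark += interval
    return rewards

# Tics at 3 s and 26 s in a 60-s session: the window restarts at 3 and 26,
# so rewards fall at 13, 23, 36, 46, and 56 seconds.
print(reward_times([3.0, 26.0], 60.0))
```

Logging these timestamps alongside the raw tic times reproduces the kind of plain-text record the paper describes for later analysis.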
BioArray Software Environment (BASE) is a web-based software package for storing, searching, and analyzing locally generated microarray data and information surrounding microarray production. The workflow begins in sample management and, optionally, microtiter plate tracking and ends in visualization and analysis of entire experiments. The relative ease with which new analysis plug-ins can be added has given rise to a plethora of third-party tools, and the licensing terms (GNU GPL) encourage local modifications of the software. This introduction to BASE describes the basics of working with the software, both in general and in more detail for the various parts. It also provides some hints about more advanced usage and a section on what is needed to set up your own BASE server. The information is current as of BASE version 1.2.17b, which was released on November 6, 2005.
Residual Dipolar Couplings (RDCs) have emerged in the past two decades as an informative source of experimental restraints for the study of structure and dynamics of biological macromolecules and complexes. The REDCAT software package was previously introduced for the analysis of molecular structures using RDC data. Here we report additional features that have been included in this software package in order to expand the scope of its analyses. We first discuss the features that enhance REDCAT's user-friendly nature, such as the integration of a number of analyses into one single operation and enabling convenient examination of a structural ensemble in order to identify the most suitable structure. We then describe the new features which expand the scope of RDC analyses, performing exercises that utilize both synthetic and experimental data to illustrate and evaluate different features with regard to structure refinement and structure validation.
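For context, RDC analysis tools of this kind fit measured couplings to the standard orientational form of the RDC, written here in the principal alignment frame with conventional notation (D_a is the axial component of the alignment tensor, R its rhombicity, and theta, phi the polar angles of the internuclear vector); this is the textbook expression, not a formula specific to REDCAT:

```latex
D(\theta,\phi) = D_a \left[ \left( 3\cos^2\theta - 1 \right)
  + \tfrac{3}{2}\, R \, \sin^2\theta \, \cos 2\phi \right]
```

Fitting the five independent parameters of the alignment tensor to a set of measured RDCs is what allows the structure refinement and validation exercises described above.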
Analysis of calcium sparks in cardiomyocytes can provide valuable information about functional changes of calcium handling in health and disease. As part of calcium spark analysis, spark detection and characterization are necessary. Here, we describe a new open-source platform for automatic calcium spark detection from line scan confocal images. The developed software is tailored for detecting only calcium sparks, allowing us to design a graphical user interface specifically for this task. The software enables detecting sparks automatically as well as adding, removing, or adjusting regions of interest marking each spark. The results of the analysis are stored in an SQL database, allowing simple integration with statistical tools. We have analyzed the performance of the algorithm using a large set of synthetic images with varying spark sizes and noise levels, and also compared the analysis results with results obtained by software established in the field. The use of our software is illustrated by an analysis of the effect of isoprenaline (ISO) on spark frequency, amplitude, and spatial and temporal characteristics. For that, cardiomyocytes from C57BL/6 mice were used. We demonstrated an increase in spark frequency, a tendency toward larger spark amplitudes, sparks with a longer duration, and occurrence of multiple sparks from the same site in the presence of ISO. We also show that the duration and the width of sparks with the same amplitude were similar in the absence and presence of ISO. The software was released as an open-source repository and is available for free use and collaborative development.
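A common baseline for automatic spark detection is thresholding the normalized fluorescence (dF/F0) a few standard deviations above the baseline noise and grouping contiguous suprathreshold samples into candidate events. A minimal sketch of that idea on a single fluorescence trace; this is not the authors' algorithm, and the threshold factor and data are illustrative:

```python
def detect_sparks(trace, k=3.0):
    """Flag candidate sparks: contiguous runs where dF/F0 exceeds
    k standard deviations of the baseline noise (relative to F0).
    Returns (start, end, peak) index tuples. Illustrative only."""
    n = len(trace)
    baseline = sorted(trace)[: max(1, n // 2)]  # quietest half as baseline
    f0 = sum(baseline) / len(baseline)
    var = sum((v - f0) ** 2 for v in baseline) / len(baseline)
    thr = k * (var ** 0.5) / f0          # threshold in dF/F0 units
    dff = [(v - f0) / f0 for v in trace]

    sparks, start = [], None
    for i, v in enumerate(dff + [0.0]):  # sentinel closes a trailing run
        if v > thr and start is None:
            start = i
        elif v <= thr and start is not None:
            peak = max(range(start, i), key=lambda j: dff[j])
            sparks.append((start, i - 1, peak))
            start = None
    return sparks

# Flat baseline near 100 with one transient: one event spanning
# indices 4-8, peaking at index 6.
trace = [100, 101, 99, 100, 140, 180, 220, 160, 110, 100, 101, 99]
print(detect_sparks(trace))
```

Extending this to 2-D line-scan images amounts to running the same thresholding over space and time and merging suprathreshold pixels into connected regions, from which amplitude, duration, and width are then measured.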