Searching across hundreds of databases


SciCrunch Registry is a curated repository of scientific resources, with a focus on biomedical resources, including tools, databases, and core facilities - visit SciCrunch to register your resource.

Showing results 1–20 of 1,423.
  • RRID:SCR_008531

    This resource has 1+ mentions.

http://neurogenetics.nia.nih.gov

A suite of web-based open source software programs for clinical and genetic study. The aims of this software development in the Laboratory of Neurogenetics, NIA, NIH are to: * build a retrievable clinical data repository * set up a genetic data bank * eliminate redundant data entries * alleviate experimental error due to sample mix-ups and genotyping errors * facilitate clinical and genetic data integration * automate data analysis pipelines * facilitate data mining for genetic as well as environmental factors associated with a disease * provide a uniform data acquisition framework, regardless of the type of disease * accommodate the heterogeneity of different studies * manage data flow, storage, and access * ensure patient privacy and data confidentiality/security. The GERON suite consists of several self-contained yet extensible modules. Currently implemented modules are GERON Clinical, Genotyping, and Tracking. More modules are planned to keep up with the dynamics of the research field. Each module can be used separately or combined with others into a seamless pipeline. Special attention has been given to keeping each module free and open to academic/government users.

Proper citation: GERON (RRID:SCR_008531)


https://www.nitrc.org/projects/gmac_2012/

Open-source software toolbox implementing multivariate spectral Granger causality analysis for studying brain connectivity using fMRI data. Available features: fMRI data importing, network node definition, time series preprocessing, multivariate autoregressive modeling, spectral Granger causality index estimation, statistical significance assessment using surrogate data, network analysis, and visualization of connectivity results. All functions are integrated into a graphical user interface developed in the MATLAB environment. Dependencies: MATLAB, BIOSIG, SPM, MarsBaR.

Proper citation: GMAC: A Matlab toolbox for spectral Granger causality analysis of fMRI data (RRID:SCR_009581)


http://www.thomaskoenig.ch/Lester/ibaspm.htm

The aim of this work is to present a toolbox for structure segmentation of structural MRI images. All programs were developed in MATLAB on top of the widely used fMRI/MRI software package SPM (SPM99, SPM2, SPM5; Wellcome Department of Cognitive Neurology, London, UK). Previous work has developed a similar strategy for segmenting an individual MRI image into anatomical structures using a standardized atlas, notably the approach introduced by the Montreal Neurological Institute (MNI), which merges information from ANIMAL (an algorithm that nonlinearly registers one image to a previously labelled one) and INSECT (cerebral tissue classification) to obtain a suitable gross cortical structure segmentation (Collins et al., 1999). Here, both the nonlinear registration and the gray matter segmentation are performed through SPM99/SPM2/SPM5 subroutines. Three principal elements are used in the labeling process: the gray matter segmentation, the normalization transform matrix (which maps voxels from individual space to standardized space), and the MaxPro MNI atlas. All three are combined to yield good performance in segmenting gross cortical structures. The programs can be used with any standardized atlas and any MRI image modality. System Requirements: 1. The IBASPM graphical user interface (GUI) runs only under MATLAB 7.0 or higher; the non-graphical version runs under MATLAB 6.5 or higher. 2. Statistical Parametric Mapping software SPM2 or SPM5. Main Functions: * Atlasing: Main function (this file contains the spm_select script from the SPM5 toolbox and the uigetdir script from MATLAB 7.0). * Auto_Labeling: Computes an individual atlas. * Create_SPAMs: Constructs Statistical Probability Anatomy Maps (SPAMs). * Create_MaxProb: Creates a Maximum Probability Atlas (MaxPro) using the previously computed SPAMs. * All_Brain_Vol: Computes whole brain volume by masking the brain using the segmentation files (if the segmentation files do not exist, it segments first). * Struct_Vol: Computes the volume of different structures based on the individual atlas previously obtained by the atlasing process. * Vols_Stats: Computes the mean and standard deviation for each structure in a group of individual atlases.

Proper citation: IBASPM: Individual Brain Atlases using Statistical Parametric Mapping Software (RRID:SCR_007110)


  • RRID:SCR_008682

    This resource has 1+ mentions.

http://www.fetk.org/

The Finite Element ToolKit (FETK) is a collaboratively developed, evolving collection of adaptive finite element method (AFEM) software libraries and tools for solving coupled systems of nonlinear geometric partial differential equations (PDE). The FETK libraries and tools are written in an object-oriented form of ANSI-C and in C++, and include a common portability layer (MALOC) for all of FETK, a collection of standard numerical libraries (PUNC), a stand-alone high-quality surface and volume simplex mesh generator (GAMer), a stand-alone networked polygon display tool (SG), a general nonlinear finite element modeling kernel (MC), and a MATLAB toolkit (MCLite) for prototyping finite element methods and examining simplex meshes in MATLAB. The entire FETK suite of tools is highly portable (from iPhone to Blue Gene/L), thanks to the small abstraction layer (MALOC) and heavy use of the GNU Autoconf infrastructure. The primary FETK ANSI-C software libraries are: :- MALOC, a Minimal Abstraction Layer for Object-oriented C/C++ programs. :- PUNC, a Portable Understructure for Numerical Computing (requires MALOC). :- GAMer, a Geometry-preserving Adaptive MeshER (requires MALOC). :- SG, a Socket Graphics tool for displaying polygons (requires MALOC). :- MC, a 2D/3D AFEM code for nonlinear geometric PDE (requires MALOC; optionally uses PUNC, GAMer, SG). Application-specific software designed for use with the FETK libraries: :- GPDE, a Geometric Partial Differential Equation solver (requires MALOC, PUNC, MC; optionally uses GAMer, SG). :- APBS, an Adaptive Poisson-Boltzmann equation Solver (requires MALOC, PUNC, MC; optionally uses GAMer, SG). :- SMOL, a Smoluchowski equation solver (requires MALOC, PUNC, MC; optionally uses GAMer, SG). MATLAB toolkits designed for use with MC and SG or as standalone packages: :- MCLite, a simple 2D MATLAB version of MC designed for teaching. :- FEtkLAB, a sophisticated 2D MATLAB adaptive PDE solver built on top of MCLite. Related packages developed and maintained by FETK developers (included in PUNC above): :- PMG, a Parallel algebraic MultiGrid code for general semilinear elliptic equations. :- CgCode, a package of Conjugate gradient Codes for large sparse linear systems. Sponsors: This resource is developed and supported by the MCP Research Group at the UCSD Center for Computational Mathematics.

Proper citation: Finite Element Toolkit (RRID:SCR_008682)


  • RRID:SCR_006417

    This resource has 50+ mentions.

http://www.bcgsc.ca/platform/bioinfo/software/alea

A computational software toolbox for allele-specific (AS) epigenomics analysis. It incorporates allelic variation data within existing resources, allowing for the identification of significant associations between epigenetic modifications and specific allelic variants in human and mouse cells. It provides a customizable pipeline of command line tools for AS analysis of next-generation sequencing data (ChIP-seq, RNA-seq, etc.) that takes the raw sequencing data and produces separate allelic tracks ready to be viewed on genome browsers. ALEA takes advantage of the available genomic resources for human (The 1000 Genomes Project Consortium) and mouse (The Mouse Genome Project) to reconstruct diploid in-silico genomes for human or hybrid mice under study. Then, for each accompanying ChIP-seq or RNA-seq dataset, it generates two Wiggle track format (WIG) files from short reads aligned differentially to each haplotype.

Proper citation: ALEA (RRID:SCR_006417)
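The allelic-track idea above hinges on deciding which haplotype a read came from. A minimal sketch of that decision, assuming a gapless alignment and a single heterozygous SNP (function and variable names are ours for illustration, not ALEA's code):

```python
# Toy sketch (not ALEA's implementation): assign a read to a haplotype by
# checking which allele it carries at a known heterozygous SNP.

def assign_haplotype(read_seq, read_start, snp_pos, allele_h1, allele_h2):
    """Return 'h1', 'h2', or 'ambiguous' for a read covering snp_pos.

    read_start and snp_pos are 0-based reference coordinates; the read is
    assumed to align without gaps (a simplification of real CIGAR handling).
    """
    offset = snp_pos - read_start
    if not (0 <= offset < len(read_seq)):
        return "ambiguous"          # read does not cover the SNP
    base = read_seq[offset]
    if base == allele_h1:
        return "h1"
    if base == allele_h2:
        return "h2"
    return "ambiguous"              # sequencing error or a third allele

# A read carrying 'G' at reference position 104, where h1=G and h2=A:
print(assign_haplotype("ACGTG", 100, 104, "G", "A"))  # h1
```

Reads binned this way per haplotype would then be counted into the two per-haplotype coverage tracks the description mentions.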


  • RRID:SCR_009489

    This resource has 10+ mentions.

http://www.nitrc.org/projects/gppi/

An automated toolbox for a generalized form of psychophysiological interactions (PPI) for SPM and FSFAST. The toolbox can: (a1) produce results identical to the current PPI implementation in SPM; (a2) use the current SPM implementation but with the regional mean instead of the eigenvariate; (a3) use a generalized form that allows a PPI for each task to be included in the same model, using either the regional mean or the eigenvariate; (b) create the model using the output of one of the (a) options and the first-level design; (c) estimate the model (/results directory); (d) compute the specified contrasts.

Proper citation: Generalized PPI Toolbox (RRID:SCR_009489)
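The interaction term at the heart of any PPI analysis can be sketched in a few lines: the PPI regressor is the product of a task regressor and the seed-region timecourse. This is an illustration of the idea only; real gPPI works with the deconvolved neural signal and HRF convolution, which this sketch omits:

```python
# Illustrative PPI sketch (not the toolbox's code): multiply a task
# regressor by the mean-centered seed timecourse. gPPI builds one such
# product per task condition so all conditions sit in the same model.

def ppi_regressor(task, seed):
    """Elementwise product of a task regressor and a centered seed signal."""
    mean_seed = sum(seed) / len(seed)
    centered = [s - mean_seed for s in seed]
    return [t * s for t, s in zip(task, centered)]

task_a = [0, 0, 1, 1, 0, 0]                   # boxcar for condition A
seed   = [1.0, 1.2, 2.0, 2.2, 1.1, 0.9]       # seed-region timecourse
print(ppi_regressor(task_a, seed))            # nonzero only where A is on
```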


  • RRID:SCR_007068

    This resource has 1+ mentions.

http://www.vivo.colostate.edu/molkit/index.html

The Molecular Toolkit is a group of programs for analysis and manipulation of nucleic acid and protein sequence data. The programs are written in Java (1.0) and require a browser that supports this language; a screen resolution of at least 800x600 is also recommended. Nucleic Acid Analysis and Manipulation Programs: * Dot Plots - Examine the similarity of two DNA (or RNA) sequences via a similarity matrix displayed as a dot plot. * Manipulate and Display Sequences - Perform simple manipulations on a DNA sequence (inverse, complement, inverse-complement, double-stranded, etc.). * Restriction Maps - Generate graphical and text-based maps of restriction endonuclease cleavage of DNA. * Translate - Translate a DNA or RNA sequence and obtain graphical and text depictions of the resulting protein sequences. Protein Analysis Programs: * Reverse Translate - Reverse translate a protein sequence into DNA. * Protein Composition - Obtain the amino acid composition of a protein. * Hydrophobicity Plots - Plot hydrophobic and hydrophilic domains of a protein.

Proper citation: Molecular Toolkit (RRID:SCR_007068)
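Two of the manipulations listed above, reverse-complementing a DNA sequence and computing amino acid composition, are simple enough to sketch directly (a hypothetical stdlib re-implementation, not the toolkit's Java code):

```python
# Sketches of two sequence manipulations from the list above.
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(dna):
    """Return the reverse complement of an uppercase DNA string."""
    return dna.translate(COMPLEMENT)[::-1]

def aa_composition(protein):
    """Count each amino acid residue in a protein sequence."""
    return Counter(protein)

print(reverse_complement("ATGC"))   # GCAT
print(aa_composition("MKKV"))       # Counter({'K': 2, 'M': 1, 'V': 1})
```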


http://www.phenote.org/

Phenote is both a complete piece of software and a software toolkit designed to facilitate the annotation of biological phenotypes using ontologies. It provides an interface and infrastructure to record genotype-phenotype pairs, together with the provenance for the annotation. Typical users of Phenote include literature curators, laboratory researchers, and clinicians looking for a method to record data in a user-friendly and computable way. Features of Phenote include the use of any OBO-format ontology, ontology navigation and term information display, bulk sort, copy, edit, and delete of phenotype-genotype character entries, and a variety of export formats. Phenote is a project of the Berkeley Bioinformatics Open-Source Projects (BBOP).

Proper citation: Phenote: A Phenotype Annotation Tool using Ontologies (RRID:SCR_008334)


  • RRID:SCR_006152

    This resource has 10+ mentions.

https://github.com/jiantao/Tangram

A C/C++ command-line toolbox for structural variation (SV) detection that reports mobile element insertions (MEI). It takes advantage of both read-pair and split-read algorithms and is extremely fast and memory-efficient. Powered by the BamTools API, it can call SV events on multiple BAM files (a population) simultaneously to increase sensitivity on low-coverage datasets.

Proper citation: Tangram (RRID:SCR_006152)


http://www.cgat.org/~andreas/documentation/cgat/cgat.html

THIS RESOURCE IS NO LONGER IN SERVICE. Documented on January 3, 2023. A collection of tools for the computational genomicist, written in Python, to assist in the analysis of genome-scale data from a range of standard file formats. The toolkit enables filtering, comparison, conversion, summarization, and annotation of genomic intervals, gene sets, and sequences. The tools can be run from the Unix command line or installed into visual workflow builders such as Galaxy. Note that the tools are part of a larger code base that also includes genomics and NGS pipelines. Everyone who uses parts of the CGAT code collection is encouraged to contribute. Contributions can take many forms: bug reports, bug fixes, new scripts and pipelines, documentation, tests, etc. All contributions are welcome.

Proper citation: Computational Genomics Analysis Tools (RRID:SCR_006390)


  • RRID:SCR_007360

    This resource has 1+ mentions.

http://dicom.offis.de/dcmtk.php.en

Software collection of libraries and applications implementing large parts of the DICOM standard for medical image communication. It includes software for examining, constructing, and converting DICOM image files, handling offline media, and sending and receiving images over a network connection, as well as demonstrative image storage and worklist servers.

Proper citation: DCMTK: DICOM Toolkit (RRID:SCR_007360)


  • RRID:SCR_006826

    This resource has 10+ mentions.

Ratings or validation data are available for this resource

http://cmic.cs.ucl.ac.uk/mig/index.php?n=Tutorial.NODDImatlab

This MATLAB toolbox implements a data fitting routine for Neurite Orientation Dispersion and Density Imaging (NODDI). NODDI is a diffusion MRI technique for imaging brain tissue microstructure. Compared to DTI, it has the advantage of providing measures of tissue microstructure that are much more direct and hence more specific. It achieves this by adopting a model-based strategy that relates the signals from diffusion MRI to geometric models of tissue microstructure. In contrast to typical model-based techniques, NODDI is much more clinically feasible and can be acquired on standard MR scanners with an imaging time comparable to DTI.

Proper citation: NODDI Matlab Toolbox (RRID:SCR_006826)


  • RRID:SCR_011879

    This resource has 50+ mentions.

http://hadoop.apache.org/

Software library providing a framework for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. The project includes these modules: * Hadoop Common: The common utilities that support the other Hadoop modules. * Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data. * Hadoop YARN: A framework for job scheduling and cluster resource management. * Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

Proper citation: Apache Hadoop (RRID:SCR_011879)
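The MapReduce model that the last module implements can be illustrated in a single process: map emits key/value pairs, a shuffle groups them by key, and reduce folds each group. Hadoop's contribution is distributing these phases reliably across a cluster, which this sketch deliberately omits:

```python
# Single-process word count illustrating the MapReduce model
# (illustration only; real Hadoop runs these phases across machines).
from collections import defaultdict

def map_phase(doc):
    # map: emit (word, 1) for every word in a document
    for word in doc.split():
        yield word, 1

def shuffle(pairs):
    # shuffle: group emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # reduce: fold each group into a single count
    return key, sum(values)

docs = ["big data big clusters", "big models"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())
print(counts)  # {'big': 3, 'data': 1, 'clusters': 1, 'models': 1}
```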


  • RRID:SCR_014102

http://www.nitrc.org/projects/dti-denoising/

A Matlab package which contains six denoising filters and a noise estimation method for 4D DWI. The package includes nonlocal means, local PCA and Oracle DCT methods. Based on image redundancy and/or sparsity, the proposed filters provide efficient denoising while preserving fine structures.

Proper citation: DTI denoising (RRID:SCR_014102)


  • RRID:SCR_014427

    This resource has 1+ mentions.

https://github.com/missy139/PreSurgMapp

A MATLAB toolbox for mapping the functional areas of the brain from multi-modal fMRI data for pre-surgical planning. It offers three types of individual-level ICA analysis. Traditional ICA (task) can be used for task fMRI. Either Traditional ICA (rest) or ICA with DICI (rest) can be used for rs-fMRI. Traditional ICA (rest) is designed for users who already have a hypothesis about the pattern of the target component and want to set components manually. ICA with DICI (rest) is fully automatic, provided the user supplies a template (one is included). The software uses an automatic component identification method based on a discriminatory index; all components from multiple ICA runs with multiple component settings are ranked and compiled.

Proper citation: PreSurgMapp (RRID:SCR_014427)


http://www.rdkit.org/

An open-source cheminformatics and machine-learning toolkit usable from Java or Python. It includes a collection of standard cheminformatics functionality for molecule I/O, substructure searching, chemical reactions, coordinate generation (2D or 3D), fingerprinting, etc., as well as a high-performance database cartridge for working with molecules in the PostgreSQL database. Documentation is available on the main website.

Proper citation: RDKit: Open-Source Cheminformatics Software (RRID:SCR_014274)


http://nemo.nic.uoregon.edu/wiki/NEMO_Analysis_Toolkit

THIS RESOURCE IS NO LONGER IN SERVICE. The NIH tombstone webpage lists a project period of 2009-2013. The NEMO ERP Analysis Toolkit includes tools for EEG/ERP and MEG data decomposition, and for ontology-based mark-up, annotation, and labeling of patterns in EEG and MEG data. These tools were implemented in MATLAB by Robert Frank, a mathematician and data analyst for NEMO. The NEMO analysis pipeline was designed to support cross-lab, cross-experiment meta-analysis of EEG and MEG data. The proposed processing pipeline consists of the following steps: * Step 1: Decomposing ERP data (continuous data are transformed into discrete patterns for analysis) via PCA/ICA/microstate analysis. * Step 2: Marking up the analysis results; each pattern is annotated with labels that relate pattern attributes to NEMO ontology concepts. * Step 3: Clustering the observed patterns within and across experimental groups. * Step 4: Labeling the cross-experiment clusters. Each step in the pipeline is associated with a set of MATLAB scripts in the NEMO ERP Analysis Toolkit.

Proper citation: NEMO Analysis Toolkit (RRID:SCR_013624)


  • RRID:SCR_014558

    This resource has 100+ mentions.

http://prospector.ucsf.edu

A package of over twenty mass spectrometry-based tools primarily geared toward proteomic data analysis and database mining. It can be run from the command line, but is primarily used through a web browser, and there is a public website that allows anyone to use the software without local installation. Tandem mass spectrometry analysis tools are used for database searching and identification of peptides, including post-translationally modified peptides and cross-linked peptides. Support for isotope and label-free quantification from this type of data is provided. MS-Viewer allows sharing and displaying of annotated spectra from many different tandem mass spectrometry data analysis packages. Other tools include software for analyzing peptide mass fingerprinting data (MS-Fit); prediction of theoretical fragmentation of peptides (MS-Product); theoretical chemical or enzymatic digestion of proteins (MS-Digest); and theoretical modeling of the isotope distribution of any chemical, including peptides (MS-Isotope). Searches using amino acid sequence can identify homologous peptides in a database (MS-Pattern), and a combination of amino acid sequence and masses can be used for homologous peptide and protein identification with MS-Homology. Tandem mass spectrometry peak list files can be filtered for the presence of certain peaks or neutral losses using MS-Filter. Given a list of proteins, MS-Bridge can report all potential cross-linked peptide combinations of a specified mass. Given a precursor peptide mass and information about known amino acid presence, absence, or modifications, MS-Comp can report all amino acid combinations that could lead to the observed mass.

Proper citation: Protein Prospector (RRID:SCR_014558)
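Tools such as MS-Digest and MS-Isotope rest on a simple calculation: a peptide's monoisotopic mass is the sum of its residue masses plus one water. A sketch with a deliberately partial residue table (values are standard monoisotopic masses, rounded; this is not Protein Prospector's code):

```python
# Monoisotopic peptide mass = sum of residue masses + one water.
WATER = 18.01056
RESIDUE_MASS = {  # partial table of monoisotopic residue masses, Da
    "G": 57.02146, "A": 71.03711, "S": 87.03203,
    "V": 99.06841, "L": 113.08406, "K": 128.09496,
}

def peptide_mass(seq):
    """Monoisotopic mass of an unmodified peptide (neutral [M])."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

print(round(peptide_mass("GAS"), 5))  # ≈ 233.10116
```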


  • RRID:SCR_013505

    This resource has 5000+ mentions.

https://CRAN.R-project.org/package=cluster

An R package providing methods for cluster analysis. It performs a variety of cluster analyses and other processing on large datasets, such as microarray data.

Proper citation: Cluster (RRID:SCR_013505)
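As a generic illustration of what cluster analysis does, here is a toy 1-D k-means (not the algorithms this R package implements, which include methods such as pam and agnes):

```python
# Toy 1-D k-means (Lloyd's algorithm) as a generic cluster-analysis sketch.

def kmeans_1d(xs, centers, iters=20):
    """Cluster scalar data around the given initial centers."""
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            groups[nearest].append(x)
        # update step: each center moves to its group's mean
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

print(kmeans_1d([1.0, 1.5, 0.5, 9.0, 9.5, 8.5], [0.0, 10.0]))  # [1.0, 9.0]
```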


  • RRID:SCR_015506

    This resource has 1+ mentions.

https://github.com/MicrosoftGenomics/FaST-LMM

FaST-LMM (Factored Spectrally Transformed Linear Mixed Models) is a set of tools for efficiently performing genome-wide association studies (GWAS), prediction, and heritability estimation on large data sets.

Proper citation: FaST LMM (RRID:SCR_015506)



Can't find your tool?

We recommend checking the search tips next to the search bar and refining your search first. Alternatively, register your tool with the SciCrunch Registry by filling in a short web form. Logging in lets you create a provisional RRID, but it is not required to submit.

Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how the search combines words
    3. Add "-" before a term to exclude results containing it (e.g. Cerebellum -CA1)
    4. Add "+" before a term to require that it appear in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
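The quote/plus/minus semantics in the tips above can be made concrete with a small matcher (an illustration of the intended behavior, not SciCrunch's actual search engine; AND/OR handling is omitted):

```python
# Minimal sketch of quote / "+" / "-" query semantics.
import re

def matches(query, text):
    """True if text satisfies the quoted, +required, and -excluded terms."""
    text_lower = text.lower()
    # Quoted phrases must appear verbatim (case-insensitive).
    phrases = re.findall(r'"([^"]+)"', query)
    rest = re.sub(r'"[^"]+"', " ", query)
    if any(p.lower() not in text_lower for p in phrases):
        return False
    for term in rest.split():
        if term.startswith("-") and term[1:].lower() in text_lower:
            return False            # excluded term present
        if term.startswith("+") and term[1:].lower() not in text_lower:
            return False            # required term missing
        # bare terms are ignored in this sketch
    return True

print(matches('"granger causality" +fmri -eeg',
              "A toolbox for Granger causality analysis of fMRI data"))  # True
```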