Calibration of a microarray scanner is critical for accurate interpretation of microarray results. Shi et al. (BMC Bioinformatics, 2005, 6, Art. No. S11 Suppl. 2.) reported using a Full Moon BioSystems slide for calibration. Inspired by their work, we calibrated microarray scanners in our previous research. We were puzzled, however, that most of the signal intensities from a biological sample fell below the sensitivity threshold determined by the calibration slide. This conundrum led us to re-investigate the quality of calibration provided by the Full Moon BioSystems slide, as well as the accuracy of the analysis performed by Shi et al.
In order to determine camera parameters, a calibration procedure involving camera recordings of a checkerboard is usually performed. In this paper, we propose an alternative approach that uses Gray-code patterns displayed on an LCD screen. Gray-code patterns allow us to decode the 3D location of a point on the LCD screen at every pixel in the camera image. This is in contrast to checkerboard patterns, where the number of corresponding locations is limited to the number of checkerboard corners. We show that, for a UEye CMOS camera, focal-length estimation is 1.5 times more precise with this approach than with a standard checkerboard calibration.
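As background to the pattern encoding, a reflected-binary Gray code changes exactly one bit between consecutive indices, which limits decoding errors at stripe boundaries to one screen pixel. A minimal sketch (function names are illustrative, not from the paper):

```python
def gray_encode(n: int) -> int:
    """Reflected-binary Gray code of an integer index."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Recover the original index by prefix-XOR of the Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each LCD column (or row) index is displayed as one stripe pattern per
# bit; a camera pixel decodes the bits it observes into a screen index.
assert gray_decode(gray_encode(389)) == 389
# Consecutive indices differ in exactly one bit, so a thresholding error
# at a stripe edge shifts the decoded location by at most one pixel.
assert bin(gray_encode(389) ^ gray_encode(390)).count("1") == 1
```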
Applications, advantages, and limitations of the traditional external standard calibration, matrix-matched calibration, internal standardization, and standard additions, as well as the non-traditional interference standard method, standard dilution analysis, multi-isotope calibration, and multispecies calibration methods are discussed.
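As an illustration of one of the traditional techniques, the method of standard additions spikes the sample with known analyte amounts, fits the response linearly, and extrapolates back to zero signal. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical data: instrument signal measured after spiking the sample
# with known analyte additions (concentration units are arbitrary).
added = np.array([0.0, 1.0, 2.0, 3.0])    # spiked concentration
signal = np.array([2.0, 3.5, 5.0, 6.5])   # measured response

slope, intercept = np.polyfit(added, signal, 1)
# Extrapolating the fitted line to zero signal gives the concentration
# already present in the unspiked sample.
c_sample = intercept / slope
```

Because the calibration line is built in the sample's own matrix, matrix effects act equally on sample and standards, which is the method's main advantage over external standards.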
Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm that addresses the spectral and spatial effects and derives cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI) traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications.
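The %RMSE figures plausibly follow the usual definition of root mean squared relative difference; a sketch under that assumption (the exact normalisation used in the study may differ):

```python
import numpy as np

def percent_rmse(reference, measured):
    """Root mean squared relative difference of measured vs reference,
    expressed in percent."""
    rel = (measured - reference) / reference
    return 100.0 * np.sqrt(np.mean(rel ** 2))

# Illustrative check: a uniform 4% offset between two sensors' radiances
# yields a %RMSE of 4, matching the stated calibration requirement.
radiance_a = np.array([100.0, 200.0, 50.0])
radiance_b = radiance_a * 1.04
```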
The sound fed to a loudspeaker may significantly differ from that reaching the ear of the listener. The transformation from one to the other consists of spectral distortions with strong dependence on the relative locations of the speaker and the listener as well as on the geometry of the environment. With the increased importance of research in awake, freely-moving animals in large arenas, it becomes important to understand how animal location influences the corresponding spectral distortions.
Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m/1.05° and 0.18 m/2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
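The MCMC uncertainty step can be illustrated with a one-dimensional toy stand-in for the 6D offset problem; the random-walk Metropolis sketch below uses a made-up Gaussian likelihood and is not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy observations of an unknown scalar offset with
# known measurement noise (all values invented for illustration).
true_offset, noise_sd = 0.12, 0.02
observations = rng.normal(loc=true_offset, scale=noise_sd, size=50)

def log_likelihood(offset):
    return -0.5 * np.sum(((observations - offset) / noise_sd) ** 2)

# Random-walk Metropolis: propose a small perturbation and accept it
# with probability min(1, likelihood ratio).
samples, current = [], 0.0
for _ in range(5000):
    proposal = current + rng.normal(scale=0.01)
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(current):
        current = proposal
    samples.append(current)

posterior = np.array(samples[1000:])              # discard burn-in
estimate, uncertainty = posterior.mean(), posterior.std()
```

The spread of the retained samples plays the role of the reported pose uncertainty.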
Bisulfite amplicon sequencing has become the primary choice for single-base methylation quantification of multiple targets in parallel. The main limitation of this technology is the preferential amplification of an allele and strand in the PCR due to methylation state. This effect, known as 'PCR bias', causes inaccurate estimation of methylation levels, and calibration methods based on standard controls have been proposed to correct for it. Here, we present a Bayesian calibration tool, MethylCal, which can jointly analyse all CpGs within a CpG island (CGI) or a Differentially Methylated Region (DMR), avoiding 'one-at-a-time' CpG calibration. This enables more precise modeling of the methylation levels observed in the standard controls. It also provides accurate predictions of methylation levels not considered in the controlled experiment, a feature that is paramount in the derivation of the corrected methylation degree. We tested the proposed method on eight independent assays (two CpG islands and six imprinting DMRs) and demonstrated its benefits, including the ability to detect outliers. We also evaluated MethylCal's calibration in two practical cases: a clinical diagnostic test on 18 patients potentially affected by Beckwith-Wiedemann syndrome, and 17 individuals with celiac disease. The calibration of methylation levels obtained by MethylCal allows a clearer identification of patients undergoing loss or gain of methylation in borderline cases and could influence further clinical or treatment decisions.
Proper calibration of the eye movement signal registered by an eye tracker seems to be one of the main challenges in popularizing eye trackers as yet another user-input device. Classic calibration methods, which take time and impose unnatural behavior on the eyes, must be replaced by intelligent methods that are able to calibrate the signal without conscious cooperation from the user. Such an implicit calibration requires some knowledge about the stimulus a user is looking at and takes this information into account to predict probable gaze targets. This paper describes a possible method to perform implicit calibration: it starts by finding probable fixation targets (PFTs), then uses these targets to build a mapping from the raw signal to a probable gaze path. Various algorithms that may be used for finding PFTs and mappings are presented in the paper, and errors are calculated using two datasets registered with two different types of eye trackers. The results show that although, for now, implicit calibration provides results worse than classic calibration, it may be comparable with it and sufficient for some applications.
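Whether the targets come from explicit calibration points or from PFTs, the mapping itself is often a low-order polynomial regression from raw eye-tracker coordinates to screen coordinates. A sketch with invented data (the actual mapping used in the paper may differ):

```python
import numpy as np

# Hypothetical raw eye-tracker coordinates and the screen targets the
# user is assumed to be looking at; in implicit calibration the targets
# would come from probable fixation targets rather than shown markers.
raw = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.7],
                [0.9, 0.9], [0.2, 0.8], [0.6, 0.5]])
screen = np.array([[120, 180], [900, 260], [470, 650],
                   [1010, 840], [230, 760], [680, 470]])

# Second-order polynomial feature expansion, a common gaze mapping.
x, y = raw[:, 0], raw[:, 1]
A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

# Least-squares fit of mapping coefficients for both screen axes.
coeffs, *_ = np.linalg.lstsq(A, screen, rcond=None)
predicted = A @ coeffs    # mapped gaze points, in screen pixels
```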
Ordinary differential equation models are nowadays widely used for the mechanistic description of biological processes and their temporal evolution. These models typically have many unknown and nonmeasurable parameters, which have to be determined by fitting the model to experimental data. In order to perform this task, known as parameter estimation or model calibration, the modeller faces challenges such as poor parameter identifiability, lack of sufficiently informative experimental data and the existence of local minima in the objective function landscape. These issues tend to worsen with larger model sizes, increasing the computational complexity and the number of unknown parameters. An incorrectly calibrated model is problematic because it may result in inaccurate predictions and misleading conclusions. For nonexpert users, there are a large number of potential pitfalls. Here, we provide a protocol that guides the user through all the steps involved in the calibration of dynamic models. We illustrate the methodology with two models and provide all the code required to reproduce the results and perform the same analysis on new models. Our protocol provides practitioners and researchers in biological modelling with a one-stop guide that is at the same time compact and sufficiently comprehensive to cover all aspects of the problem.
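The core fitting step can be illustrated with a one-parameter toy model; real calibration workflows use gradient-based or global optimisers rather than the brute-force grid search below, which is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: dy/dt = -k*y, with closed-form solution y(t) = y0*exp(-k*t).
k_true, y0 = 0.7, 5.0
t = np.linspace(0.0, 4.0, 25)
# Synthetic "experimental" data: the true trajectory with
# multiplicative lognormal noise.
data = y0 * np.exp(-k_true * t) * rng.lognormal(sigma=0.05, size=t.size)

# Model calibration: choose the rate constant k minimising the sum of
# squared errors between model output and data.
k_grid = np.linspace(0.1, 2.0, 1901)
sse = [np.sum((y0 * np.exp(-k * t) - data) ** 2) for k in k_grid]
k_hat = k_grid[int(np.argmin(sse))]
```

Even in this one-dimensional case, a flat error landscape or sparse, noisy data would make `k_hat` poorly identifiable, which is the failure mode the protocol is designed to diagnose.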
With the development of MEMS sensors, the magnetometer has increasingly become a part of various wearable devices. The magnetometer measures the intensity of the magnetic field along all three axes, yielding a 3D vector with direction and magnitude. Calibration must be performed before using a magnetometer, especially in wearable electronics, due to the low quality of the sensor and its high proximity to other electromagnetic emission sources. Several magnetometer calibration algorithms exist in the literature, most of them requiring multi-sided rotation. However, such calibration is highly impractical when the sensor is mounted on larger objects, e.g., vehicles, which cannot easily be rotated. Vehicles contain a large amount of ferromagnetic soft and hard material that affects the measured magnetic field. A magnetometer can be useful for an inertial navigation system (INS) in a car as long as it does not drift over time. This article describes how to calibrate a magnetometer using the GNSS motion vector. The calibration is performed using data from the initial section of the vehicle's trajectory. The quality of the calibration is then validated on the remaining section of the trajectory by comparing the deviation between the azimuth obtained by GNSS and that obtained by the calibrated magnetometer. Based on the azimuth and speed of the vehicle, we predicted the position of the vehicle and plotted the prediction on the map. The experiment showed that such calibration is functional. The uncalibrated data were unusable due to the strong effect of ferromagnetic soft and hard materials in the vehicle.
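A common first step in magnetometer calibration, however the reference heading is obtained, is estimating the hard-iron offset as the centre of the sphere the readings should lie on. A least-squares sketch with synthetic data (soft-iron distortion would additionally require an ellipsoid fit, omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic magnetometer readings: points on a sphere of radius 45 uT
# shifted by an assumed hard-iron offset (values are illustrative).
offset = np.array([12.0, -7.0, 3.0])
directions = rng.normal(size=(200, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
readings = 45.0 * directions + offset

# Sphere fit: |m - c|^2 = r^2 rearranges to the linear system
# 2*m.c + (r^2 - |c|^2) = |m|^2, solvable by least squares.
A = np.column_stack([2.0 * readings, np.ones(len(readings))])
b = np.sum(readings ** 2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
radius = np.sqrt(sol[3] + center @ center)
calibrated = readings - center   # hard-iron-free field vectors
```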
The calibrated BOLD (blood oxygen level dependent) technique was developed to quantify the BOLD signal in terms of changes in oxygen metabolism. In order to achieve this, a calibration experiment must be performed, which typically requires a hypercapnic gas mixture to be administered to the participant. However, an emerging technique seeks to perform this calibration without administering gases, using a refocussing based calibration. Whilst hypercapnia calibration seeks to emulate the physical removal of deoxyhaemoglobin from the blood, the aim of refocussing based calibration is to refocus the dephasing effect of deoxyhaemoglobin on the MR signal using a spin echo. However, it is not possible to refocus all of the effects that contribute to the BOLD signal, and a scale factor is required to estimate the BOLD scaling parameter M. In this study, the feasibility of a refocussing based calibration was investigated. The scale factor relating the refocussing calibration to M was predicted by simulations to be approximately linear and empirically measured to be 0.88±0.36 for the visual cortex and 0.93±0.32 for a grey matter region of interest (mean±standard deviation). Refocussing based calibration is a promising approach for greatly simplifying the calibrated BOLD methodology by eliminating the need for the subject to breathe special gas mixtures, and potentially provides the basis for a wider implementation of quantitative functional MRI.
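For context, the scaling parameter M enters the widely used Davis model of the calibrated BOLD signal, which in one standard form reads:

```latex
\frac{\Delta S}{S_0} \;=\; M\left[\,1-\left(\frac{\mathrm{CBF}}{\mathrm{CBF}_0}\right)^{\alpha-\beta}\left(\frac{\mathrm{CMRO}_2}{\mathrm{CMRO}_{2,0}}\right)^{\beta}\right]
```

Here the subscript 0 denotes baseline values, and α and β are empirically chosen exponents. Any calibration, whether by hypercapnia or by refocussing, amounts to estimating M so that measured signal changes can be converted into changes in oxygen metabolism.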
The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function.
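One standard form of these constraints is the Wegscheider (detailed-balance) condition: around any closed reaction cycle, the forward and reverse rate constants must satisfy

```latex
\prod_{i \in \text{cycle}} k_i^{+} \;=\; \prod_{i \in \text{cycle}} k_i^{-}
\qquad\Longleftrightarrow\qquad
\prod_{i \in \text{cycle}} K_i = 1, \quad K_i = k_i^{+}/k_i^{-}
```

that is, the product of the equilibrium constants around the cycle equals one (stated here for cycles traversed with unit stoichiometry). Independently fitted rate constants that violate this relation imply a perpetual net flux around the cycle at equilibrium, which is thermodynamically impossible.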
The use of action cameras for photogrammetry is not widespread because, until recently, the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of an action camera, a careful and reliable self-calibration must be applied prior to any photogrammetric procedure, a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
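The dominant effect such a self-calibration must capture for a wide-angle lens is radial distortion. A numpy sketch of the Brown-Conrady radial model and its inversion by fixed-point iteration (the coefficients are illustrative, not calibrated GoPro values):

```python
import numpy as np

def apply_radial_distortion(points, k1, k2):
    """Brown-Conrady radial model applied to normalised image
    coordinates: each point is scaled by (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(points ** 2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(points, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    divide the distorted point by the factor implied by the current
    estimate of the undistorted radius."""
    p = points.copy()
    for _ in range(iters):
        r2 = np.sum(p ** 2, axis=1, keepdims=True)
        p = points / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return p

# Illustrative wide-angle (barrel) coefficients and test points.
k1, k2 = -0.3, 0.1
pts = np.array([[0.3, 0.4], [-0.5, 0.2]])
distorted = apply_radial_distortion(pts, k1, k2)
recovered = undistort(distorted, k1, k2)
```

OpenCV's calibration estimates these coefficients (plus tangential terms) from many views of a pattern; the model above is only the radial core of that pipeline.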
Machine-generated data expansion is a global phenomenon in recent Internet services. The proliferation of mobile communication and smart devices has significantly increased the utilization of machine-generated data. One of the most promising applications of machine-generated data is the estimation of the location of smart devices. The motion sensors integrated into smart devices generate continuous data that can be used to estimate the location of pedestrians in an indoor environment. We focus on accurately estimating the location of smart devices by appropriately determining landmarks for location-error calibration. In motion sensor-based location estimation, the proposed threshold control method determines valid landmarks in real time to avoid the accumulation of errors. A statistical method analyzes the acquired motion sensor data and proposes a valid landmark for each movement of the smart device. The motion sensor data used in the testbed were collected from actual measurements taken throughout a commercial building to demonstrate the practical usefulness of the proposed method.
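The landmark-detection idea can be sketched as thresholding a motion-sensor magnitude trace; the fixed threshold below is a deliberate simplification of the paper's adaptive threshold control:

```python
import numpy as np

def detect_landmarks(magnitude, threshold):
    """Return indices where the signal first crosses above the
    threshold (rising edges), each a candidate calibration landmark."""
    above = magnitude > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Hypothetical accelerometer-magnitude trace (m/s^2) with two events,
# e.g. a pedestrian passing two distinctive points in a building.
trace = np.array([9.8, 9.9, 11.5, 9.8, 9.7, 12.1, 11.9, 9.8])
landmarks = detect_landmarks(trace, 11.0)   # rising edges at 2 and 5
```

At each detected landmark, the accumulated position error of the dead-reckoning estimate would be reset against the landmark's known location.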
The interaction between matter and electromagnetic radiation provides a rich understanding of what the matter is composed of and how it can be quantified using spectrometers. In many cases, however, the calibration of the spectrometer changes as a function of time (such as in electron spectrometers), or the absolute calibration may be different between different instruments. Calibration differences cause difficulties in comparing the absolute position of measured emission or absorption peaks between different instruments and even different measurements taken at different times on the same instrument. Present methods of avoiding this issue involve manual feature extraction of the original signal or qualitative analysis. Here we propose automated feature extraction using deep convolutional neural networks to determine the class of compound given only the shape of the spectrum. We classify three unique electronic environments of manganese (being relevant to many battery materials applications) in electron energy loss spectroscopy using 2001 spectra we collected in addition to testing on spectra from different instruments. We test a variety of commonly used neural network architectures found in the literature and propose a new fully convolutional architecture with improved translation-invariance which is immune to calibration differences.
Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately in lab conditions. For fixed multi-camera setups, the extrinsic calibration can only be done in situ. Usually, some markers are used, like checkerboards, requiring some level of overlap between cameras. In this work, we propose a method for cases with little or no overlap. Laser lines are projected on a plane (e.g., floor or wall) using a laser line projector. The pose of the plane and cameras is then optimized using bundle adjustment to match the lines seen by the cameras. To find the extrinsic calibration, only a partial overlap between the laser lines and the field of view of the cameras is needed. Real-world experiments were conducted both with and without overlapping fields of view, resulting in rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method can also be used in large-scale applications and can be fully automated.
Electrochemical aptamer-based (EAB) sensors support the real-time, high frequency measurement of pharmaceuticals and metabolites in-situ in the living body, rendering them a potentially powerful technology for both research and clinical applications. Here we explore quantification using EAB sensors, examining the impact of media selection and temperature on measurement performance. Using freshly-collected, undiluted whole blood at body temperature as both our calibration and measurement conditions, we demonstrate accuracy of better than ± 10% for the measurement of our test bed drug, vancomycin. Comparing titrations collected at room and body temperature, we find that matching the temperature of calibration curve collection to the temperature used during measurements improves quantification by reducing differences in sensor gain and binding curve midpoint. We likewise find that, because blood age impacts the sensor response, calibrating in freshly collected blood can improve quantification. Finally, we demonstrate the use of non-blood proxy media to achieve calibration without the need to collect fresh whole blood.
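EAB calibration curves are commonly described by a Langmuir-type binding isotherm, with the gain and midpoint the abstract mentions corresponding to the signal span and dissociation constant. A sketch of converting signal to concentration under assumed parameter values (all numbers are illustrative, not measured):

```python
def signal_from_conc(c, s_min, s_max, kd):
    """Langmuir binding curve: sensor signal as a function of target
    concentration c, with dissociation constant kd (curve midpoint)."""
    return s_min + (s_max - s_min) * c / (kd + c)

def conc_from_signal(s, s_min, s_max, kd):
    """Invert the binding curve to recover concentration from signal."""
    return kd * (s - s_min) / (s_max - s)

# Hypothetical calibration parameters for a vancomycin-style sensor:
# baseline signal, saturated signal, and Kd in micromolar (assumed).
s_min, s_max, kd = 1.0, 3.0, 50.0
c = 25.0
s = signal_from_conc(c, s_min, s_max, kd)
assert abs(conc_from_signal(s, s_min, s_max, kd) - c) < 1e-9
```

Temperature and media effects of the kind the study examines would show up as shifts in `s_min`, `s_max`, and `kd`, which is why the calibration and measurement conditions must match.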
The current process of calibrating radiation thermometers, including thermal imagers, relies on measurement comparison with the temperature of a black body at a set distance. Over time, errors have been detected in calibrations of some radiation thermometers, which were correlated with moisture levels. In this study, the effects of atmospheric air on thermal transmission were evaluated by means of simulations using the best available datasets. Sources of spectral transmissivity of air were listed, and transmissivity data were obtained from the HITRAN molecular absorption database. Transmissivity data of molecular species were compiled for the usual atmospheric composition, including naturally occurring isotopologs. The final influence of spectral transmissivity was evaluated for the spectral sensitivities of radiation thermometers in use, and the total transmissivity and expected errors were presented for variable humidity and measured temperature. The results reveal that the spectral range of measurements greatly influences the susceptibility of instruments to atmospheric interference. In particular, a great influence on measurements is evident for the high-temperature radiation pyrometer operating in the spectral range of 2-2.7 µm, which is in use in our laboratory as a traceable reference for high-temperature calibrations. Regarding the calibration process, a requirement arose for matching the humidity parameters during the temperature reference transfer to the lower tiers in the chain of traceability. Narrowing the permitted humidity range during calibration, together with monitoring and listing atmospheric parameters in calibration certificates, is necessary for at least this thermometer and possibly for other thermometers as well.
Whole-genome sequencing (WGS) is the gold standard for fully characterizing genetic variation but is still prohibitively expensive for large samples. To reduce costs, many studies sequence only a subset of individuals or genomic regions, and genotype imputation is used to infer genotypes for the remaining individuals or regions without sequencing data. However, not all variants can be well imputed, and the current state-of-the-art imputation quality metric, denoted standard Rsq, is poorly calibrated for lower-frequency variants. Here, we propose MagicalRsq, a machine-learning-based method that integrates variant-level imputation and population genetics statistics to provide a better-calibrated imputation quality metric. Leveraging WGS data from the Cystic Fibrosis Genome Project (CFGP) and whole-exome sequence data from UK Biobank (UKB), we performed comprehensive experiments to evaluate the performance of MagicalRsq compared to standard Rsq for partially sequenced studies. We found that MagicalRsq aligns better with true R2 than standard Rsq in almost every situation evaluated, for both European and African ancestry samples. For example, when applying models trained on 1,992 CFGP sequenced samples to an independent 3,103 samples with no sequencing but TOPMed imputation from array genotypes, MagicalRsq achieved net gains over standard Rsq of 1.4 million rare, 117k low-frequency, and 18k common variants, where the net gains are the numbers of additional variants correctly distinguished by MagicalRsq relative to standard Rsq. MagicalRsq can serve as an improved post-imputation quality metric and will benefit downstream analysis by better distinguishing well-imputed variants from poorly imputed ones. MagicalRsq is freely available on GitHub.