The development of automated high-intensity macromolecular crystallography (MX) beamlines at synchrotron facilities has resulted in a remarkable increase in sample throughput. Developments in X-ray detector technology now mean that complete X-ray diffraction datasets can be collected in less than one minute. Such high-speed collection, and the volumes of data that it produces, often make it difficult for even the most experienced users to cope with the deluge. However, the careful reduction of data during experimental sessions is often necessary for the success of a particular project or as an aid in decision making for subsequent experiments. Automated data reduction pipelines provide a fast and reliable alternative to user-initiated processing at the beamline. In order to provide such a pipeline for the MX user community of the European Synchrotron Radiation Facility (ESRF), a system for the rapid automatic processing of MX diffraction data from single and multiple positions on a single or multiple crystals has been developed. Standard integration and data analysis programs have been incorporated into the ESRF data collection, storage and computing environment, with the final results stored and displayed in an intuitive manner in the ISPyB (information system for protein crystallography beamlines) database, from which they are also available for download. In some cases, experimental phase information can be automatically determined from the processed data. Here, the system is described in detail.
Next-Generation Sequencing (NGS) is highly resource intensive. NGS tasks related to data processing, management, and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network-attached storage device expandable up to 40 TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.
Music perception builds on expectancy in harmony, melody, and rhythm. Neural responses to the violations of such expectations are observed in event-related potentials (ERPs) measured using electroencephalography. Most previous ERP studies demonstrating sensitivity to musical violations used stimuli that were temporally regular and musically structured, with less-frequent deviant events that differed from a specific expectation in some feature such as pitch, harmony, or rhythm. Here, we asked whether expectancies about Western musical scale are strong enough to elicit ERP deviance components. Specifically, we explored whether pitches inconsistent with an established scale context elicit deviant components even though equally rare pitches that fit into the established context do not, and even when their timing is unpredictable. We used Markov chains to create temporally irregular pseudo-random sequences of notes chosen from one of two diatonic scales. The Markov pitch-transition probabilities resulted in sequences that favored notes within the scale, but that lacked clear melodic, harmonic, or rhythmic structure. At the random positions, the sequence contained probe tones that were either within the established scale or were out of key. Our subjects ignored the note sequences, watching a self-selected silent movie with subtitles. Compared to the in-key probes, the out-of-key probes elicited a significantly larger P2 ERP component. Results show that random note sequences establish expectations of the "first-order" statistical property of musical key, even in listeners not actively monitoring the sequences.
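The scale-biased sequence generation described above can be sketched as a first-order Markov draw over pitches. The pitch sets, probabilities, and seed below are illustrative assumptions, not the study's actual stimulus parameters; in this simplified sketch the transition probability depends only on whether the candidate note is in key.

```python
import random

# Hypothetical pitch sets (MIDI numbers): C-major scale tones vs. out-of-key tones
IN_SCALE = [60, 62, 64, 65, 67, 69, 71]   # C D E F G A B
OUT_OF_KEY = [61, 63, 66, 68, 70]

def make_sequence(length=200, p_out=0.1, seed=0):
    """First-order Markov walk: from any current note, the next note is
    drawn in-scale with probability 1 - p_out, out of key otherwise."""
    rng = random.Random(seed)
    pitches = IN_SCALE + OUT_OF_KEY
    # Transition weights depend only on whether the target note is in key,
    # mirroring a 'first-order' statistical bias toward the established scale.
    weights = [(1 - p_out) / len(IN_SCALE) if p in IN_SCALE
               else p_out / len(OUT_OF_KEY) for p in pitches]
    seq = [rng.choice(IN_SCALE)]
    for _ in range(length - 1):
        seq.append(rng.choices(pitches, weights=weights)[0])
    return seq

seq = make_sequence()
```

Because the out-of-key probes carry only a small fraction of the transition mass, they remain rare relative to in-scale notes, which is what lets the sequence establish a key expectation despite lacking melodic or rhythmic structure.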
Semantic processing of visually presented words can be identified both on behavioral and neurophysiological evidence. One of the major discoveries of the last decades is the demonstration that these signatures of semantic processing, initially observed for consciously perceived words, can also be detected for masked words inaccessible to conscious reports. In this context, the distinction between conscious and unconscious verbal semantic processing constitutes a challenging scientific issue. A prominent view considered that while conscious representations are subject to executive control, unconscious ones would operate automatically in a modular way, independent from control and top-down influences. Recent findings challenged this view by revealing that endogenous attention and task-setting can have a strong influence on unconscious processing. However, one of the major arguments supporting the automaticity of unconscious semantic processing still stands, stemming from a seminal observation reported by Marcel in 1980 about polysemous words. In the present study we reexamined this evidence. We present a combination of behavioral and event-related potential (ERP) results that refute this view by showing that the current conscious semantic context has a major and similar influence on the semantic processing of both visible and masked polysemous words. In a classical lexical decision task, a polysemous word was preceded by a word that defined the current semantic context. Crucially, this context was associated with only one of the two meanings of the polysemous word, and the polysemous word was followed by a word/pseudo-word target. Behavioral and electrophysiological evidence of semantic priming of target words by masked polysemous words was strongly dependent on the conscious context.
Moreover, we describe a new type of influence related to the response code used to respond to target words in the lexical decision task: unconscious semantic priming constrained by the conscious context was present, both in behavior and ERPs, exclusively when right-handed subjects were instructed to respond to words with their right hand. The strong influences of both conscious context and response code on the semantic processing of masked polysemous words demonstrate that unconscious verbal semantic representations are not automatic.
Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to shape, color, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses, which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioral level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localized several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction (PPI) analyses revealed that several functionally localized areas showed significant positive interactions, dependent on block type, with occipito-temporal and frontal areas. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.
Successfully navigating the world requires avoiding boundaries and obstacles in one's immediately-visible environment, as well as finding one's way to distant places in the broader environment. Recent neuroimaging studies suggest that these two navigational processes involve distinct cortical scene processing systems, with the occipital place area (OPA) supporting navigation through the local visual environment, and the retrosplenial complex (RSC) supporting navigation through the broader spatial environment. Here we hypothesized that these systems are distinguished not only by the scene information they represent (i.e., the local visual versus broader spatial environment), but also based on the automaticity of the process they involve, with navigation through the broader environment (including RSC) operating deliberately, and navigation through the local visual environment (including OPA) operating automatically. We tested this hypothesis using fMRI and a maze-navigation paradigm, where participants navigated two maze structures (complex or simple, testing representation of the broader spatial environment) under two conditions (active or passive, testing deliberate versus automatic processing). Consistent with the hypothesis that RSC supports deliberate navigation through the broader environment, RSC responded significantly more to complex than simple mazes during active, but not passive navigation. By contrast, consistent with the hypothesis that OPA supports automatic navigation through the local visual environment, OPA responded strongly even during passive navigation, and did not differentiate between active versus passive conditions. Taken together, these findings suggest the novel hypothesis that navigation through the broader spatial environment is deliberate, whereas navigation through the local visual environment is automatic, shedding new light on the dissociable functions of these systems.
Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. Newly implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool.
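The node-and-flow design described above can be illustrated with a minimal sketch. This is not MDP's actual API; the class and method names below are simplified stand-ins showing how trainable units chain into a feed-forward sequence.

```python
class Node:
    """Minimal stand-in for an MDP-style processing unit (hypothetical API)."""
    def train(self, data):
        pass                      # optional learning phase; default: nothing
    def execute(self, data):
        return data               # default: identity transform

class Center(Node):
    """Learn the mean during training, subtract it during execution."""
    def train(self, data):
        self.mean = sum(data) / len(data)
    def execute(self, data):
        return [x - self.mean for x in data]

class Scale(Node):
    """Stateless unit: multiply every sample by a fixed factor."""
    def __init__(self, factor):
        self.factor = factor
    def execute(self, data):
        return [x * self.factor for x in data]

class Flow:
    """Chain nodes into a feed-forward sequence, training them in order."""
    def __init__(self, nodes):
        self.nodes = nodes
    def train(self, data):
        for node in self.nodes:
            node.train(data)          # train this unit...
            data = node.execute(data) # ...then feed its output downstream
    def execute(self, data):
        for node in self.nodes:
            data = node.execute(data)
        return data

flow = Flow([Center(), Scale(2.0)])
flow.train([1.0, 2.0, 3.0])
result = flow.execute([1.0, 2.0, 3.0])  # -> [-2.0, 0.0, 2.0]
```

The key design point mirrored here is that a newly written unit only needs `train`/`execute` methods to plug into the existing chaining machinery, which is what makes new algorithms automatically integrate with the rest of the library.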
Bone regeneration is a critical area of research impacting the treatment of conditions such as osteoporosis and age-related bone decline, as well as the integration of orthopaedic implants. A crucial question in bone regeneration is that of bone architectural quality: how structurally "good" is the regenerated bone tissue? Current methods address typical long bone architecture; however, there remains a need for an improved ability to quantify structurally relevant parameters of bone in non-standard bone shapes. Here we present a new analysis approach based on open-source, semi-automatic methods combining image processing, solid modeling, and numerical calculations to analyze bone tissue at a more granular level, using μCT image data from a mouse digit model of bone regeneration. Examining interior architecture, growth patterning, spatial mineral content, and mineral density distribution, these methods are applied to two types of 6-month-old mouse digits: 1) those prior to amputation injury (unamputated) and 2) those 42 days after amputation, when bone has regenerated. Results show that regenerated digits exhibit increased inner void fraction, decreased patterning, different patterns of spatial mineral distribution, and increased mineral density values when compared to unamputated bone. Our approach demonstrates the utility of this new analysis technique for assessing non-standard bone models, such as the regenerated bone of the digit, and aims to bring a deeper level of analysis with an open-source, integrative platform to the greater bone community.
Functional MRI resting state and connectivity studies of the brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from the lungs and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open-source physiological signal processing program, called PhysioNoise, in the Python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
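The core step in this kind of physiological waveform processing is locating cardiac peaks so that beat timing can be turned into noise regressors. The sketch below is a deliberately minimal illustration on a synthetic pulse train, not PhysioNoise's actual algorithm; the threshold and waveform shape are assumptions.

```python
import math

def find_peaks(signal, threshold=0.5):
    """Return indices of local maxima above a threshold -- a simplified
    version of cardiac-trigger detection on a pulsatile waveform."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic 'cardiac' waveform: sharp 1 Hz pulses sampled at 100 Hz for 5 s
fs, f = 100, 1.0
sig = [math.sin(2 * math.pi * f * n / fs) ** 21 for n in range(5 * fs)]

beats = find_peaks(sig)                      # one index per heartbeat
rate_bpm = 60 * len(beats) / (len(sig) / fs) # average heart rate estimate
```

Real physiological recordings need extra care (baseline drift, missed or spurious triggers, visual review), which is why an interactive visualization step like the one described above matters in practice.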
Recent EEG-fMRI studies have shown that different stages of sleep are associated with changes in both brain activity and functional connectivity. These results raise the concern that lack of vigilance measures in resting state experiments may introduce confounds and contamination due to subjects falling asleep inside the scanner. In this study we present a method to perform automatic sleep staging using only fMRI functional connectivity data, thus providing vigilance information while circumventing the technical demands of simultaneous recording of EEG, the gold standard for sleep scoring. The features to classify are the linear correlation values between 20 cortical regions identified using independent component analysis and two regions in the bilateral thalamus. The method is based on the construction of binary support vector machine classifiers discriminating between all pairs of sleep stages and the subsequent combination of them into multiclass classifiers. Different multiclass schemes and kernels are explored. After parameter optimization through 5-fold cross validation we achieve accuracies over 0.8 in the binary problem with functional connectivities obtained for epochs as short as 60s. The multiclass classifier generalizes well to two independent datasets (accuracies over 0.8 in both sets) and can be efficiently applied to any dataset using a sliding window procedure. Modeling vigilance states in resting state analysis will avoid confounded inferences and facilitate the study of vigilance states themselves. We thus consider the method introduced in this study a novel and practical contribution for monitoring vigilance levels inside an MRI scanner without the need of extra recordings other than fMRI BOLD signals.
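The pairwise-binary-to-multiclass scheme described above can be sketched with a toy stand-in for the SVM: a midpoint threshold on a single hypothetical correlation feature, with one binary classifier per pair of stages and majority voting at prediction time. Stage labels and feature values are illustrative, not the study's data.

```python
from itertools import combinations
from collections import Counter

class MidpointClassifier:
    """Toy binary classifier: threshold at the midpoint of the two class means."""
    def fit(self, xs, ys):
        mean = lambda c: sum(x for x, y in zip(xs, ys) if y == c) / ys.count(c)
        lo, hi = sorted(set(ys), key=mean)        # order classes by their mean
        self.classes, self.cut = (lo, hi), (mean(lo) + mean(hi)) / 2
        return self
    def predict(self, x):
        return self.classes[0] if x < self.cut else self.classes[1]

def one_vs_one(xs, ys):
    """Train one binary classifier per pair of stages; predict by voting."""
    models = []
    for a, b in combinations(sorted(set(ys)), 2):
        sub = [(x, y) for x, y in zip(xs, ys) if y in (a, b)]
        models.append(MidpointClassifier().fit([x for x, _ in sub],
                                               [y for _, y in sub]))
    def predict(x):
        votes = Counter(m.predict(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict

# One hypothetical feature (e.g., a thalamocortical correlation) per 60 s epoch
xs = [0.10, 0.15, 0.50, 0.55, 0.90, 0.95]
ys = ["wake", "wake", "N1", "N1", "N2", "N2"]
clf = one_vs_one(xs, ys)
```

In the actual method each epoch is described by many pairwise correlation values rather than one number, and the binary classifiers are SVMs with tuned kernels, but the one-vs-one combination logic is the same.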
Alexithymia is a personality trait characterized by difficulties identifying and describing feelings, an externally oriented style of thinking, and a reduced inclination to imagination. Previous research has shown deficits in the recognition of emotional facial expressions in alexithymia and reductions of brain responsivity to emotional stimuli. Using an affective priming paradigm, we investigated automatic perception of facial emotions as a function of alexithymia at the behavioral and neural level. In addition to self-report scales, we applied an interview to assess alexithymic tendencies.
Automatic speech processing (ASP) has recently been applied to very large datasets of naturalistically collected, daylong recordings of child speech via an audio recorder worn by young children. The system developed by the LENA Research Foundation analyzes children's speech for research and clinical purposes, with a special focus on identifying and tagging family speech dynamics and the at-home acoustic environment from the auditory perspective of the child. A primary issue for researchers, clinicians, and families using the Language ENvironment Analysis (LENA) system is to what degree the segment labels are valid. This classification study evaluates the performance of the computer ASP output against 23 trained human judges who made about 53,000 classification judgments on segments tagged by the LENA ASP. Results indicate performance consistent with modern ASP systems, such as those using HMM methods, with acoustic characteristics of fundamental frequency and segment duration most important for both human and machine classifications. Results are likely to be important for interpreting and improving ASP output.
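Comparing machine labels against human judges of this kind typically involves a chance-corrected agreement statistic. The sketch below computes Cohen's kappa between two sets of segment tags; the tag names and label sequences are hypothetical examples, not the study's data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters (e.g., human vs. ASP)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Expected agreement if both raters labeled independently at their base rates
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical segment tags: child vocalization (CHN), adult (ADU), noise (NOI)
human   = ["CHN", "CHN", "ADU", "NOI", "CHN", "ADU", "NOI", "NOI"]
machine = ["CHN", "ADU", "ADU", "NOI", "CHN", "ADU", "CHN", "NOI"]
kappa = cohens_kappa(human, machine)
```

Raw percent agreement overstates performance when some tags dominate the recordings, which is why chance correction matters for daylong naturalistic audio.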
How functional magnetic resonance imaging (fMRI) data are analyzed depends on the researcher and the toolbox used. It is not uncommon that the processing pipeline is rewritten for each new dataset. Consequently, code transparency, quality control and objective analysis pipelines are important for improving reproducibility in neuroimaging studies. Toolboxes, such as Nipype and fMRIPrep, have documented the need for and interest in automated pre-processing analysis pipelines. Recent developments in data-driven models combined with high-resolution neuroimaging datasets have strengthened the need not only for a standardized preprocessing workflow, but also for a reliable and comparable statistical pipeline. Here, we introduce fMRIflows: a consortium of fully automatic neuroimaging pipelines for fMRI analysis, which performs standard preprocessing, as well as 1st- and 2nd-level univariate and multivariate analyses. In addition to the standardized pre-processing pipelines, fMRIflows provides flexible temporal and spatial filtering to account for datasets with increasingly high temporal resolution and to help appropriately prepare data for advanced machine learning analyses, improving signal decoding accuracy and reliability. This paper first describes fMRIflows' structure and functionality, then explains its infrastructure and access, and lastly validates the toolbox by comparing it to other neuroimaging processing pipelines such as fMRIPrep, FSL and SPM. This validation was performed on three datasets with varying temporal sampling and acquisition parameters to prove its flexibility and robustness. fMRIflows is a fully automatic fMRI processing pipeline which uniquely offers univariate and multivariate single-subject and group analyses as well as pre-processing.
Image segmentation of medical images is a challenging problem with several still not totally solved issues, such as noise interference and image artifacts. Region-based and histogram-based segmentation methods have been widely used in image segmentation. Problems arise when we use these methods, such as the selection of a suitable threshold value for the histogram-based method and the over-segmentation followed by time-consuming merge processing in the region-based algorithm. To provide an efficient approach that not only produces better results but also maintains low computational complexity, a new region-dividing technique is developed for image segmentation, which combines the advantages of both region-based and histogram-based methods. The proposed method is applied to a challenging application: gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) segmentation in brain MR images. The method is evaluated on both simulated and real data, and compared with other segmentation techniques. The obtained results have demonstrated its improved performance and robustness.
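Histogram-based threshold selection, one of the two ingredients the method combines, can be illustrated with Otsu's classic criterion: choose the intensity cut that maximizes between-class variance. This is a generic sketch, not the paper's algorithm, and the toy "image" below is an assumption for illustration.

```python
def otsu_threshold(pixels, levels=256):
    """Pick the intensity cut that maximizes between-class variance --
    a classic histogram-based criterion for threshold selection."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]                       # pixels at or below t
        if w0 == 0 or w0 == total:
            continue                        # both classes must be non-empty
        sum0 += t * hist[t]
        m0 = sum0 / w0                      # mean of the lower class
        m1 = (total_sum - sum0) / (total - w0)  # mean of the upper class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy 'image': dark background around 30, bright tissue around 200
img = [28, 30, 32, 29, 31, 198, 200, 202, 199, 201]
t = otsu_threshold(img)
mask = [p > t for p in img]
```

Brain MR intensity histograms for GM/WM/CSF overlap far more than this toy example, which is exactly the weakness of pure thresholding that motivates combining it with region-based information.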
The brain activity associated with processing numerical end values has received limited research attention. The present study explored the neural correlates associated with processing semantic end values under conditions of automatic number processing. Event-related potentials (ERPs) were recorded while participants performed the numerical Stroop task, in which they were asked to compare the physical size of pairs of numbers, while ignoring their numerical values. The smallest end value in the set, which is a task irrelevant factor, was manipulated between participant groups. We focused on the processing of the lower end values of 0 and 1 because these numbers were found to be automatically tagged as the "smallest." Behavioral results showed that the size congruity effect was modulated by the presence of the smallest end value in the pair. ERP data revealed a spatially extended centro-parieto-occipital P3 that was enhanced for congruent versus incongruent trials. Importantly, over centro-parietal sites, the P3 congruity effect (congruent minus incongruent) was larger for pairs containing the smallest end value than for pairs containing non-smallest values. These differences in the congruency effect were localized to the precuneus. The presence of an end value within the pair also modulated P3 latency. Our results provide the first neural evidence for the encoding of numerical end values. They further demonstrate that the use of end values as anchors is a primary aspect of processing symbolic numerical information.
Determining new protein structures from X-ray diffraction data at low resolution or with a weak anomalous signal is a difficult and often impossible task. Here we propose a multivariate algorithm that simultaneously combines the structure determination steps. In tests on over 140 real data sets from the Protein Data Bank, we show that this combined approach can automatically build models where current algorithms fail, including an anisotropically diffracting 3.88 Å RNA polymerase II data set. The method seamlessly automates the process, is ideal for non-specialists and provides a mathematical framework for successfully combining various sources of information in image processing.
The oculomotor nerve (OCN) is the main motor nerve innervating the eye muscles and can be involved in multiple inflammatory, compressive, or other pathologies. Diffusion magnetic resonance imaging (dMRI) tractography is now widely used to describe the trajectory of the OCN. However, the complex cranial structure leads to difficulties in fiber orientation distribution (FOD) modeling, fiber tracking, and region-of-interest (ROI) selection. Currently, identification of the OCN relies on expert manual operation, which carries high clinical, time, and labor costs. Thus, we propose a method that can automatically identify the OCN from dMRI tractography. First, we choose the multi-shell multi-tissue constrained spherical deconvolution (MSMT-CSD) FOD estimation model and deterministic tractography to describe the 3D trajectory of the OCN. Then, we rely on a well-established computational pipeline and anatomical expertise to create a data-driven OCN tractography atlas from 40 Human Connectome Project (HCP) datasets. We identify six clusters belonging to the OCN from the atlas, covering three kinds of positional relationships (passing between, passing through, and going around) with the red nucleus and two kinds of positional relationships with the medial longitudinal fasciculus. Finally, we apply the proposed OCN atlas to automatically identify the OCN in 40 new HCP subjects and two patients with brainstem cavernous malformation. In terms of spatial overlap and visualization, experimental results show that the automatically and manually identified OCN fibers are consistent. Our proposed OCN atlas provides an effective tool for identifying the OCN while avoiding the traditional ROI selection strategy.
Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for everything from simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
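The dependency-tracking idea behind such a processing engine can be sketched in a few lines: compute an execution order from a map of upstream dependencies, and compute the downstream closure of any module that changed to decide what must be redone. This is a generic illustration, not aa's actual API; the module names are hypothetical.

```python
# Hypothetical module dependency map: module -> upstream modules it needs
DEPS = {
    "realign":    [],
    "coregister": ["realign"],
    "normalise":  ["coregister"],
    "smooth":     ["normalise"],
    "firstlevel": ["smooth"],
    "grouplevel": ["firstlevel"],
}

def execution_order(deps):
    """Topologically sort modules so every upstream task runs first."""
    order, seen = [], set()
    def visit(m):
        if m in seen:
            return
        seen.add(m)
        for up in deps[m]:
            visit(up)          # ensure dependencies are ordered before m
        order.append(m)
    for m in deps:
        visit(m)
    return order

def needs_rerun(deps, changed):
    """Everything downstream of a changed module must be recomputed."""
    downstream = {m: [d for d in deps if m in deps[d]] for m in deps}
    stale, stack = set(), [changed]
    while stack:
        m = stack.pop()
        if m not in stale:
            stale.add(m)
            stack.extend(downstream[m])
    return stale

order = execution_order(DEPS)
stale = needs_rerun(DEPS, "normalise")
```

The same dependency map also identifies which modules are independent of each other and can therefore be dispatched in parallel to a cluster or cloud backend.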