This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 5, showing papers 81-100 of 39,141.

MOLGENIS/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks.

  • Chao Pang‎ et al.
  • Bioinformatics (Oxford, England)‎
  • 2016‎

While the size and number of biobanks, patient registries and other data collections are increasing, biomedical researchers still often need to pool data for statistical power, a task that requires time-intensive retrospective integration.


Osprey: Open-source processing, reconstruction & estimation of magnetic resonance spectroscopy data.

  • Georg Oeltzschner‎ et al.
  • Journal of neuroscience methods‎
  • 2020‎

Processing and quantitative analysis of magnetic resonance spectroscopy (MRS) data are far from standardized and require interfacing with third-party software. Here, we present Osprey, a fully integrated open-source data analysis pipeline for MRS data, with seamless integration of pre-processing, linear-combination modelling, quantification, and data visualization.


Automatic auditory processing features in distinct subtypes of patients at clinical high risk for psychosis: Forecasting remission with mismatch negativity.

  • GuiSen Wu‎ et al.
  • Human brain mapping‎
  • 2022‎

Individuals at clinical high risk (CHR) for psychosis exhibit a compromised mismatch negativity (MMN) response, which indicates dysfunction of pre-attentive deviance processing. Event-related potential and time-frequency (TF) information, in combination with clinical and cognitive profiles, may provide insight into the pathophysiology and psychopathology of the CHR stage and predict the prognosis of CHR individuals. A total of 92 individuals with CHR were recruited and followed up regularly for up to 3 years. Individuals with CHR were classified into three clinical subtypes demonstrated previously, specifically 28 from Cluster 1 (characterized by extensive negative symptoms and cognitive deficits), 31 from Cluster 2 (characterized by thought and behavioral disorganization, with moderate cognitive impairment), and 33 from Cluster 3 (characterized by the mildest symptoms and cognitive deficits). Auditory MMN to frequency and duration deviants was assessed. The event-related spectral perturbation (ERSP) and inter-trial coherence (ITC) were acquired using TF analysis. Predictive indices for remission were identified using logistic regression analyses. As expected, reduced frequency MMN (fMMN) and duration MMN (dMMN) responses were noted in Cluster 1 relative to the other two clusters. In the TF analysis, Cluster 1 showed decreased theta and alpha ITC in response to deviant stimuli. The regression analyses revealed that dMMN latency and alpha ERSP to duration deviants, theta ITC to frequency deviants and alpha ERSP to frequency deviants, and fMMN latency were significant MMN predictors of remission for the three clusters. MMN variables outperformed behavioral variables in predicting remission of Clusters 1 and 2. Our findings indicate relatively disrupted automatic auditory processing in a certain CHR subtype and a close affinity between these electrophysiological indexes and clinical profiles within different clusters. Furthermore, MMN indexes may serve as predictors of subsequent remission from the CHR state. These findings suggest that the auditory MMN response is a potential neurophysiological marker for distinct clinical subtypes of CHR.
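
As an illustration of the remission-prediction step mentioned above (logistic regression on MMN-derived features), here is a minimal, hypothetical Python sketch; the feature columns, labels, and data are placeholders, not the study's actual variables or model specification.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 92  # number of CHR individuals recruited in the study
    # Hypothetical predictors: dMMN latency, alpha ERSP to duration deviants, theta ITC to frequency deviants
    X = rng.normal(size=(n, 3))
    y = rng.integers(0, 2, size=n)  # placeholder labels: 1 = remission, 0 = no remission

    model = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())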


Automatic segmentation of trabecular and cortical compartments in HR-pQCT images using an embedding-predicting U-Net and morphological post-processing.

  • Nathan J Neeteson‎ et al.
  • Scientific reports‎
  • 2023‎

High-resolution peripheral quantitative computed tomography (HR-pQCT) is an emerging in vivo imaging modality for quantification of bone microarchitecture. However, extraction of quantitative microarchitectural parameters from HR-pQCT images requires an accurate segmentation of the image. The current standard protocol using semi-automated contouring for HR-pQCT image segmentation is laborious, introduces inter-operator biases into research data, and poses a barrier to streamlined clinical implementation. In this work, we propose and validate a fully automated algorithm for segmentation of HR-pQCT radius and tibia images. A multi-slice 2D U-Net produces initial segmentation predictions, which are post-processed via a sequence of traditional morphological image filters. The U-Net was trained on a large dataset containing 1822 images from 896 unique participants. Predicted segmentations were compared to reference segmentations on a disjoint dataset containing 386 images from 190 unique participants, and 156 pairs of repeated images were used to compare the precision of the novel and current protocols. The agreement of morphological parameters obtained using the predicted segmentation relative to the reference standard was excellent (R2 between 0.938 and > 0.999). Precision was significantly improved for several outputs, most notably cortical porosity. This novel and robust algorithm for automated segmentation will increase the feasibility of using HR-pQCT in research and clinical settings.
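
A minimal sketch of morphological post-processing of a binary segmentation mask (for example, a U-Net prediction) is shown below; the particular filters and parameters are illustrative assumptions, not the authors' published sequence.

    import numpy as np
    from scipy.ndimage import binary_fill_holes
    from skimage import morphology

    def clean_mask(mask):
        """Remove small islands, close small gaps, and fill holes in a 2D binary mask."""
        mask = morphology.remove_small_objects(mask.astype(bool), min_size=64)
        mask = morphology.binary_closing(mask, morphology.disk(3))
        mask = binary_fill_holes(mask)
        return mask

    cleaned = clean_mask(np.random.rand(128, 128) > 0.5)  # toy binary image
    print(cleaned.shape, cleaned.sum())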


Transferability Based on Drug Structure Similarity in the Automatic Classification of Noncompliant Drug Use on Social Media: Natural Language Processing Approach.

  • Tomohiro Nishiyama‎ et al.
  • Journal of medical Internet research‎
  • 2023‎

Medication noncompliance is a critical issue because of the increased number of drugs sold on the web. Web-based drug distribution is difficult to control, causing problems such as drug noncompliance and abuse. The existing medication compliance surveys lack completeness because it is impossible to cover patients who do not go to the hospital or provide accurate information to their doctors, so a social media-based approach is being explored to collect information about drug use. Social media data, which includes information on drug usage by users, can be used to detect drug abuse and medication compliance in patients.


Subthalamic nucleus gamma oscillations mediate a switch from automatic to controlled processing: a study of random number generation in Parkinson's disease.

  • Anam Anzak‎ et al.
  • NeuroImage‎
  • 2013‎

In paced random number generation (RNG) participants are asked to generate numbers between 1 and 9 in a random fashion, in synchrony with a pacing stimulus. Successful task performance can be achieved through control of the main biases known to exist in human RNG compared to a computer generated series: seriation, cycling through a set of available numbers, and repetition avoidance. A role in response inhibition and switching from automatic to controlled processing has previously been ascribed to the subthalamic nucleus (STN). We sought evidence of frequency-specific changes in STN oscillatory activity which could be directly related to use of such strategies during RNG. Local field potentials (LFPs) were recorded from depth electrodes implanted in the STN of 7 patients (14 sides) with Parkinson's disease (PD), when patients were on dopaminergic medication. Patients were instructed to (1) generate a series of 100 numbers between 1 and 9 in a random fashion, and (2) undertake a control serial counting task, both in synchrony with a 0.5 Hz pacing stimulus. Significant increases in LFP power (p ≤ 0.05) across a narrow gamma frequency band (45-60 Hz) during RNG, compared to the control counting task, were observed. Further, the number of 'repeated pairs' (a decline in which reflects repetition avoidance bias in human RNG) was positively correlated with these gamma increases. We therefore suggest that STN gamma activity is relevant for controlled processing, in particular the active selection and repetition of the same number on successive trials. These results are consistent with a frequency-specific role of the STN in executive processes such as suppression of habitual responses and 'switching-on' of more controlled processing strategies.
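
A hedged sketch of the kind of band-power analysis described above: estimate 45-60 Hz gamma power with Welch's method and correlate it with a per-side behavioural count. The data, sampling rate, and correlation choice here are illustrative assumptions.

    import numpy as np
    from scipy.signal import welch
    from scipy.stats import spearmanr

    fs = 1000  # Hz, assumed LFP sampling rate
    rng = np.random.default_rng(1)
    lfp = rng.normal(size=(14, 10 * fs))            # 14 STN sides, 10 s of synthetic LFP each
    repeated_pairs = rng.integers(0, 20, size=14)   # synthetic behavioural measure per side

    def gamma_power(x, fs, band=(45, 60)):
        f, pxx = welch(x, fs=fs, nperseg=fs)
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()

    gamma = np.array([gamma_power(trial, fs) for trial in lfp])
    rho, p = spearmanr(gamma, repeated_pairs)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")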


Affective Flattening in Patients with Schizophrenia: Differential Association with Amygdala Response to Threat-Related Facial Expression under Automatic and Controlled Processing Conditions.

  • Christian Lindner‎ et al.
  • Psychiatry investigation‎
  • 2016‎

Early neuroimaging studies have demonstrated amygdala hypoactivation in schizophrenia but more recent research based on paradigms with minimal cognitive loads or examining automatic processing has observed amygdala hyperactivation. Hyperactivation was found to be related to affective flattening. In this study, amygdala responsivity to threat-related facial expression was investigated in patients as a function of automatic versus controlled processing and patients' flat affect.


Benefits of merging paired-end reads before pre-processing environmental metagenomics data.

  • Midhuna Immaculate Joseph Maran‎ et al.
  • Marine genomics‎
  • 2022‎

High-throughput sequencing of environmental DNA has applications in biodiversity monitoring, taxa abundance estimation, understanding the dynamics of community ecology, and marine species studies and conservation. Environmental DNA, especially marine eDNA, has a fast degradation rate. Aside from the good quality reads, the data could have a significant number of reads that fall slightly below the default PHRED quality threshold of 30 on sequencing. For quality control, trimming methods are employed, which generally precede the merging of the read pairs. However, in the case of eDNA, a significant percentage of reads within the acceptable quality score range are also dropped.
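
To illustrate the PHRED-30 threshold mentioned above, here is a small Biopython sketch that counts FASTQ reads whose mean base quality falls below Q30; the file name is a placeholder, and the actual read merging would be done with a dedicated tool.

    from Bio import SeqIO

    def fraction_below_q30(fastq_path):
        """Return the fraction of reads whose mean PHRED quality is below 30."""
        total = below = 0
        for record in SeqIO.parse(fastq_path, "fastq"):
            quals = record.letter_annotations["phred_quality"]
            total += 1
            if sum(quals) / len(quals) < 30:
                below += 1
        return below / total if total else 0.0

    print(fraction_below_q30("merged_reads.fastq"))  # placeholder path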


XCP-D: A Robust Pipeline for the post-processing of fMRI data.

  • Kahini Mehta‎ et al.
  • bioRxiv : the preprint server for biology‎
  • 2023‎

Functional neuroimaging is an essential tool for neuroscience research. Pre-processing pipelines produce standardized, minimally pre-processed data to support a range of potential analyses. However, post-processing is not similarly standardized. While several options for post-processing exist, they tend not to support output from disparate pre-processing pipelines, may have limited documentation, and may not follow BIDS best practices. Here we present XCP-D, which provides a solution to these issues. XCP-D is a collaborative effort between PennLINC at the University of Pennsylvania and the DCAN lab at the University of Minnesota. XCP-D uses an open development model on GitHub and incorporates continuous integration testing; it is distributed as a Docker container or Singularity image. XCP-D generates denoised BOLD images and functional derivatives from resting-state data in either NIfTI or CIFTI files, following pre-processing with fMRIPrep, HCP, and ABCD-BIDS pipelines. Even prior to its official release, XCP-D has been downloaded >3,000 times from DockerHub. Together, XCP-D facilitates robust, scalable, and reproducible post-processing of fMRI data.


Data processing workflow for large-scale immune monitoring studies by mass cytometry.

  • Paulina Rybakowska‎ et al.
  • Computational and structural biotechnology journal‎
  • 2021‎

Mass cytometry is a powerful tool for deep immune monitoring studies. To ensure maximal data quality, a careful experimental and analytical design is required. However, even in well-controlled experiments, variability caused by either operator or instrument can introduce artifacts that need to be corrected or removed from the data. Here we present a data processing pipeline which ensures the minimization of experimental artifacts and batch effects, while improving data quality. Data preprocessing and quality controls are carried out using an R pipeline and packages like CATALYST for bead-normalization and debarcoding, flowAI and flowCut for signal anomaly cleaning, AOF for file quality control, flowClean and flowDensity for gating, CytoNorm for batch normalization and FlowSOM and UMAP for data exploration. As proper experimental design is key in obtaining good quality events, we also include the sample processing protocol used to generate the data. Both the analysis and experimental pipelines are easy to scale up; thus, the workflow presented here is particularly suitable for large-scale, multicenter, multibatch and retrospective studies.


Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python.

  • Krzysztof Gorgolewski‎ et al.
  • Frontiers in neuroinformatics‎
  • 2011‎

Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.
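
A minimal Nipype workflow in the style described above, wrapping two FSL interfaces as Nodes and connecting them into a Workflow; the file paths are placeholders and FSL must be installed for the interfaces to execute.

    from nipype import Node, Workflow
    from nipype.interfaces import fsl

    skullstrip = Node(fsl.BET(in_file="sub-01_T1w.nii.gz", frac=0.5), name="skullstrip")
    smooth = Node(fsl.IsotropicSmooth(fwhm=4.0), name="smooth")

    wf = Workflow(name="preproc", base_dir="nipype_work")
    wf.connect(skullstrip, "out_file", smooth, "in_file")
    wf.run()  # runs locally; execution plugins support multi-core machines and clusters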


Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

  • Panteleimon Chriskos‎ et al.
  • Frontiers in human neuroscience‎
  • 2018‎

Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper, a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
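
As a sketch of one bivariate feature named above, the following computes a Relative Wavelet Entropy between two EEG channels from their wavelet energy distributions; the wavelet family, decomposition level, and synthetic signals are assumptions, not the paper's exact settings.

    import numpy as np
    import pywt

    def wavelet_energy_distribution(x, wavelet="db4", level=5):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    def relative_wavelet_entropy(x, y):
        """Kullback-Leibler-style divergence between two wavelet energy distributions."""
        p, q = wavelet_energy_distribution(x), wavelet_energy_distribution(y)
        return float(np.sum(p * np.log(p / q)))

    rng = np.random.default_rng(2)
    chan_a, chan_b = rng.normal(size=3000), rng.normal(size=3000)
    print("RWE:", relative_wavelet_entropy(chan_a, chan_b))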


Objective Evaluation of Multiple Sclerosis Lesion Segmentation using a Data Management and Processing Infrastructure.

  • Olivier Commowick‎ et al.
  • Scientific reports‎
  • 2018‎

We present a study of multiple sclerosis segmentation algorithms conducted at the international MICCAI 2016 challenge. This challenge was operated using a new open-science computing infrastructure. This allowed for the automatic and independent evaluation of a large range of algorithms in a fair and completely automatic manner. This computing infrastructure was used to evaluate thirteen methods of MS lesions segmentation, exploring a broad range of state-of-the-art algorithms, against a high-quality database of 53 MS cases coming from four centers following a common definition of the acquisition protocol. Each case was annotated manually by an unprecedented number of seven different experts. Results of the challenge highlighted that automatic algorithms, including the recent machine learning methods (random forests, deep learning, …), are still trailing human expertise on both detection and delineation criteria. In addition, we demonstrate that computing a statistically robust consensus of the algorithms performs closer to human expertise on one score (segmentation) although still trailing on detection scores.
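
For reference, a common delineation metric when comparing an automatic segmentation to an expert consensus is the Dice similarity coefficient; the sketch below computes it on synthetic binary masks and is not the challenge's exact scoring code.

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.random.rand(64, 64, 64) > 0.7   # synthetic predicted lesion mask
    ref = np.random.rand(64, 64, 64) > 0.7    # synthetic reference mask
    print(f"Dice = {dice(pred, ref):.3f}")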


Automatic Classification of Sub-Techniques in Classical Cross-Country Skiing Using a Machine Learning Algorithm on Micro-Sensor Data.

  • Ole Marius Hoel Rindal‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2017‎

The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs) that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier's arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier's arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into the categories training-, validation- and test-data. Overall, we achieved a classification accuracy of 93.9% on the test-data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers.
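
A minimal sketch of the cycle-detection idea described above, finding individual movement cycles as peaks in an arm gyroscope trace; the synthetic signal, sampling rate, and peak-picking thresholds are placeholders.

    import numpy as np
    from scipy.signal import find_peaks

    fs = 100  # Hz, assumed IMU sampling rate
    t = np.arange(0, 30, 1 / fs)
    gyro = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)  # ~1.2 Hz arm swing plus noise

    peaks, _ = find_peaks(gyro, height=0.5, distance=int(0.5 * fs))
    cycles = list(zip(peaks[:-1], peaks[1:]))  # each consecutive pair of peaks delimits one cycle
    print(f"detected {len(cycles)} cycles in {t[-1]:.0f} s")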


The Mediation Role of Dynamic Multisensory Processing Using Molecular Genetic Data in Dyslexia.

  • Sara Mascheretti‎ et al.
  • Brain sciences‎
  • 2020‎

Although substantial heritability has been reported and candidate genes have been identified, we are far from understanding the etiopathogenetic pathways underlying developmental dyslexia (DD). Reading-related endophenotypes (EPs) have been established. Until now it was unknown whether they mediated the pathway from gene to reading (dis)ability. Thus, in a sample of 223 siblings from nuclear families with DD and 79 unrelated typical readers, we tested four EPs (i.e., rapid auditory processing, rapid automatized naming, multisensory nonspatial attention and visual motion processing) and 20 markers spanning five DD-candidate genes (i.e., DYX1C1, DCDC2, KIAA0319, ROBO1 and GRIN2B) using a multiple-predictor/multiple-mediator framework. Our results show that rapid auditory and visual motion processing are mediators in the pathway from ROBO1-rs9853895 to reading. Specifically, the T/T genotype group predicts impairments in rapid auditory and visual motion processing which, in turn, predict poorer reading skills. Our results suggest that ROBO1 is related to reading via multisensory temporal processing. These findings support the use of EPs as an effective approach to disentangling the complex pathways between candidate genes and behavior.


Automatic Detection of Faults in Race Walking: A Comparative Analysis of Machine-Learning Algorithms Fed with Inertial Sensor Data.

  • Juri Taborri‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2019‎

The validity of results in race walking is often questioned due to subjective decisions in the detection of faults. This study aims to compare machine-learning algorithms fed with data gathered from inertial sensors placed on lower-limb segments to define the best-performing classifiers for the automatic detection of illegal steps. Eight race walkers were enrolled and linear accelerations and angular velocities related to pelvis, thighs, shanks, and feet were acquired by seven inertial sensors. The experimental protocol consisted of two repetitions of three laps of 250 m, one performed with regular race walking, one with loss-of-contact faults, and one with knee-bent faults. The performance of 108 classifiers was evaluated in terms of accuracy, recall, precision, F1-score, and goodness index. Generally, linear accelerations revealed themselves as more characteristic with respect to the angular velocities. Among classifiers, those based on the support vector machine (SVM) were the most accurate. In particular, the quadratic SVM fed with shank linear accelerations was the best-performing classifier, with an F1-score and a goodness index equal to 0.89 and 0.11, respectively. The results open the possibility of using a wearable device for automatic detection of faults in race walking competition.
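
A hedged sketch of the best-performing configuration named above, an SVM with a quadratic (degree-2 polynomial) kernel evaluated with the F1-score; the feature matrix and labels are random placeholders rather than real shank-acceleration features.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 24))      # hypothetical per-step feature vectors
    y = rng.integers(0, 3, size=600)    # 0 = regular, 1 = loss of contact, 2 = knee bent

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="poly", degree=2).fit(X_tr, y_tr)
    print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))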


A Web-Based Tool for Automatic Data Collection, Curation, and Visualization of Complex Healthcare Survey Studies including Social Network Analysis.

  • José Alberto Benítez‎ et al.
  • Computational and mathematical methods in medicine‎
  • 2017‎

There is a great concern nowadays regarding alcohol consumption and drug abuse, especially in young people. Analyzing the social environment where these adolescents are immersed, as well as a series of measures determining the alcohol abuse risk or personal situation and perception using a number of questionnaires like AUDIT, FAS, KIDSCREEN, and others, it is possible to gain insight into the current situation of a given individual regarding his/her consumption behavior. But this analysis, in order to be achieved, requires the use of tools that can ease the process of questionnaire creation, data gathering, curation and representation, and later analysis and visualization to the user. This research presents the design and construction of a web-based platform able to facilitate each of the mentioned processes by integrating the different phases into an intuitive system with a graphical user interface that hides the complexity underlying each of the questionnaires and techniques used and presenting the results in a flexible and visual way, avoiding any manual handling of data during the process. Advantages of this approach are shown and compared to the previous situation where some of the tasks were accomplished by time consuming and error prone manipulations of data.


Automatic Classification of the Korean Triage Acuity Scale in Simulated Emergency Rooms Using Speech Recognition and Natural Language Processing: a Proof of Concept Study.

  • Dongkyun Kim‎ et al.
  • Journal of Korean medical science‎
  • 2021‎

Rapid triage reduces the patients' stay time at an emergency department (ED). The Korean Triage Acuity Scale (KTAS) is mandatorily applied at EDs in South Korea. For rapid triage, we studied machine learning-based triage systems composed of a speech recognition model and natural language processing-based classification.


A bio-inspired convolution neural network architecture for automatic breast cancer detection and classification using RNA-Seq gene expression data.

  • Tehnan I A Mohamed‎ et al.
  • Scientific reports‎
  • 2023‎

Breast cancer is considered one of the significant health challenges and ranks among the most prevalent and dangerous cancer types affecting women globally. Early breast cancer detection and diagnosis are crucial for effective treatment and personalized therapy. Early detection and diagnosis can help patients and physicians discover new treatment options, provide a more suitable quality of life, and ensure increased survival rates. Breast cancer detection using gene expression involves many complexities, such as the issue of dimensionality and the complicatedness of the gene expression data. This paper proposes a bio-inspired CNN model for breast cancer detection using gene expression data downloaded from the cancer genome atlas (TCGA). The data contains 1208 clinical samples of 19,948 genes with 113 normal and 1095 cancerous samples. In the proposed model, Array-Array Intensity Correlation (AAIC) is used at the pre-processing stage for outlier removal, followed by a normalization process to avoid biases in the expression measures. Filtration is used for gene reduction using a threshold value of 0.25. Thereafter the pre-processed gene expression dataset was converted into images which were later converted to grayscale to meet the requirements of the model. The model also uses a hybrid model of CNN architecture with a metaheuristic algorithm, namely the Ebola Optimization Search Algorithm (EOSA), to enhance the detection of breast cancer. The traditional CNN and five hybrid algorithms were compared with the classification result of the proposed model. The competing hybrid algorithms include the Whale Optimization Algorithm (WOA-CNN), the Genetic Algorithm (GA-CNN), the Satin Bowerbird Optimization (SBO-CNN), the Life Choice-Based Optimization (LCBO-CNN), and the Multi-Verse Optimizer (MVO-CNN). The results show that the proposed model determined the classes with high-performance measurements with an accuracy of 98.3%, a precision of 99%, a recall of 99%, an f1-score of 99%, a kappa of 90.3%, a specificity of 92.8%, and a sensitivity of 98.9% for the cancerous class. The results suggest that the proposed method has the potential to be a reliable and precise approach to breast cancer detection, which is crucial for early diagnosis and personalized therapy.
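
As a rough illustration of the image-conversion idea mentioned above, the sketch below min-max scales one expression vector and reshapes it into a square grayscale image; the padding and scaling scheme are assumptions, not the paper's exact recipe.

    import numpy as np

    def expression_to_image(expr):
        """Scale a gene expression vector to 0-255 and reshape it into a square grayscale image."""
        expr = (expr - expr.min()) / (expr.max() - expr.min() + 1e-12)
        side = int(np.ceil(np.sqrt(expr.size)))
        padded = np.zeros(side * side)
        padded[: expr.size] = expr
        return (padded.reshape(side, side) * 255).astype(np.uint8)

    sample = np.random.rand(19948)  # one sample with 19,948 gene expression values
    print(expression_to_image(sample).shape)  # (142, 142)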


RFI Artefacts Detection in Sentinel-1 Level-1 SLC Data Based On Image Processing Techniques.

  • Agnieszka Chojka‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2020‎

Interferometric Synthetic Aperture Radar (InSAR) data are often contaminated by Radio-Frequency Interference (RFI) artefacts that make processing them more challenging. Therefore, easy to implement techniques for artefacts recognition have the potential to support the automatic Permanent Scatterers InSAR (PSInSAR) processing workflow during which faulty input data can lead to misinterpretation of the final outcomes. To address this issue, an efficient methodology was developed to mark images with RFI artefacts and as a consequence remove them from the stack of Synthetic Aperture Radar (SAR) images required in the PSInSAR processing workflow to calculate the ground displacements. Techniques presented in this paper for the purpose of RFI detection are based on image processing methods with the use of feature extraction involving pixel convolution, thresholding and nearest neighbor structure filtering. As the reference classifier, a convolutional neural network was used.
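
A small sketch of the feature-extraction steps named above, pixel convolution followed by thresholding, applied to a synthetic SAR amplitude image; the kernel and threshold are illustrative assumptions rather than the published parameters.

    import numpy as np
    from scipy.ndimage import convolve

    amplitude = np.abs(np.random.randn(256, 256))        # placeholder SLC amplitude image
    high_pass = np.array([[-1, -1, -1],
                          [-1,  8, -1],
                          [-1, -1, -1]], dtype=float)    # simple high-pass kernel

    response = convolve(amplitude, high_pass, mode="reflect")
    rfi_candidates = response > response.mean() + 3 * response.std()  # threshold strong responses
    print("flagged pixels:", int(rfi_candidates.sum()))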


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against it. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to get additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    Here is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (a combined example follows these tips):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
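
    For instance, a hypothetical combined query such as "sleep staging" +EEG -fMRI would match only records containing the exact phrase "sleep staging" and the term EEG, while excluding any record that mentions fMRI.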
  5. Save Your Search

    From here, you can save any search you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year (interactive chart of result counts by year; axes: Year, Count)