Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1–20 of 2,468.

Music in Noise: Neural Correlates Underlying Noise Tolerance in Music-Induced Emotion.

  • Shota Murai‎ et al.
  • Cerebral cortex communications‎
  • 2021‎

Music can be experienced in various acoustic qualities. In this study, we investigated how the acoustic quality of the music can influence strong emotional experiences, such as musical chills, and the neural activity. The music's acoustic quality was controlled by adding noise to musical pieces. Participants listened to clear and noisy musical pieces and pressed a button when they experienced chills. We estimated neural activity in response to chills under both clear and noisy conditions using functional magnetic resonance imaging (fMRI). The behavioral data revealed that compared with the clear condition, the noisy condition dramatically decreased the number of chills and duration of chills. The fMRI results showed that under both noisy and clear conditions the supplementary motor area, insula, and superior temporal gyrus were similarly activated when participants experienced chills. The involvement of these brain regions may be crucial for music-induced emotional processes under the noisy as well as the clear condition. In addition, we found a decrease in the activation of the right superior temporal sulcus when experiencing chills under the noisy condition, which suggests that music-induced emotional processing is sensitive to acoustic quality.


Music Therapy and Other Music-Based Interventions in Pediatric Health Care: An Overview.

  • Thomas Stegemann‎ et al.
  • Medicines (Basel, Switzerland)‎
  • 2019‎

Background: In pediatric health care, non-pharmacological interventions such as music therapy have promising potential to complement traditional medical treatment options in order to facilitate recovery and well-being. Music therapy and other music-based interventions are increasingly applied in the clinical treatment of children and adolescents in many countries world-wide. The purpose of this overview is to examine the evidence regarding the effectiveness of music therapy and other music-based interventions as applied in pediatric health care. Methods: Surveying recent literature and summarizing findings from systematic reviews, this overview covers selected fields of application in pediatric health care (autism spectrum disorder; disability; epilepsy; mental health; neonatal care; neurorehabilitation; pain, anxiety and stress in medical procedures; pediatric oncology and palliative care) and discusses the effectiveness of music interventions in these areas. Results: Findings show that there is a growing body of evidence regarding the beneficial effects of music therapy, music medicine, and other music-based interventions for children and adolescents, although more rigorous research is still needed. The highest quality of evidence for the positive effects of music therapy is available in the fields of autism spectrum disorder and neonatal care. Conclusions: Music therapy can be considered a safe and generally well-accepted intervention in pediatric health care to alleviate symptoms and improve quality of life. As an individualized intervention that is typically provided in a person-centered way, music therapy is usually easy to implement into clinical practices. However, it is important to note that to exploit the potential of music therapy in an optimal way, specialized academic and clinical training and careful selection of intervention techniques to fit the needs of the client are essential.


Mind as music.

  • Dan Lloyd‎
  • Frontiers in psychology‎
  • 2011‎

Cognitive neuroscience typically develops hypotheses to explain phenomena that are localized in space and time. Specific regions of the brain execute characteristic functions, whose causes and effects are prompt; determining these functions in spatial and temporal isolation is generally regarded as the first step toward understanding the coherent operation of the whole brain over time. In other words, if the task of cognitive neuroscience is to interpret the neural code, then the first step has been semantic, searching for the meanings (functions) of localized elements, prior to exploring neural syntax, the mutual constraints among elements synchronically and diachronically. While neuroscience has made great strides in discovering the functions of regions of the brain, less is known about the dynamic patterns of brain activity over time, in particular, whether regions activate in sequences that could be characterized syntactically. Researchers generally assume that neural semantics is a precondition for determining neural syntax. Furthermore, it is often assumed that the syntax of the brain is too complex for our present technology and understanding. A corollary of this view holds that functional MRI (fMRI) lacks the spatial and temporal resolution needed to identify the dynamic syntax of neural computation. This paper examines these assumptions with a novel analysis of fMRI image series, resting on the conjecture that any computational code will exhibit aggregate features that can be detected even if the meaning of the code is unknown. Specifically, computational codes will be sparse or dense in different degrees. A sparse code is one that uses only a few of the many possible patterns of activity (in the brain) or symbols (in a human-made code). Considering sparseness at different scales and as measured by different techniques, this approach clearly distinguishes two conventional coding systems, namely, language and music. Based on an analysis of 99 subjects in three different fMRI protocols, in comparison with 194 musical examples and 700 language passages, it is observed that fMRI activity is more similar to music than it is to language, as measured over single symbols, as well as symbol combinations in pairs and triples. Tools from cognitive musicology may therefore be useful in characterizing the brain as a dynamical system.


Scaling behaviour in music and cortical dynamics interplay to mediate music listening pleasure.

  • Ana Filipa Teixeira Borges‎ et al.
  • Scientific reports‎
  • 2019‎

The pleasure of music listening regulates daily behaviour and promotes rehabilitation in healthcare. Human behaviour emerges from the modulation of spontaneous timely coordinated neuronal networks. Too little is known about the physical properties and neurophysiological underpinnings of music to understand its perception, its health benefit and to deploy personalized or standardized music-therapy. Prior studies revealed how macroscopic neuronal and music patterns scale with frequency according to a 1/fα relationship, where α is the scaling exponent. Here, we examine how this hallmark in music and neuronal dynamics relates to pleasure. Using electroencephalography, electrocardiography and behavioural data in healthy subjects, we show that music listening decreases the scaling exponent of neuronal activity and, in temporal areas, this change is linked to pleasure. Default-state scaling exponents of the most pleased individuals were higher and approached those found in music loudness fluctuations. Furthermore, the scaling in selective regions and timescales and the average heart rate were largely proportional to the scaling of the melody. The scaling behaviours of heartbeat and neuronal fluctuations were associated during music listening. Our results point to a 1/f resonance between brain and music and a temporal rescaling of neuronal activity in the temporal cortex as mechanisms underlying music appreciation.


NIH Music-Based Intervention Toolkit: Music-Based Interventions for Brain Disorders of Aging.

  • Emmeline Edwards‎ et al.
  • Neurology‎
  • 2023‎

Music-based interventions (MBIs) show promise for managing symptoms of various brain disorders. To fully realize the potential of MBIs and dispel the outdated misconception that MBIs are rooted in soft science, the NIH is promoting rigorously designed, well-powered MBI clinical trials. The pressing need for guidelines for scientifically rigorous studies with enhanced data collection brought together the Renée Fleming Foundation, the Foundation for the NIH, the Trans-NIH Music and Health Working Group, and an interdisciplinary scientific expert panel to create the NIH MBI Toolkit for research on music and health across the lifespan. The Toolkit defines the building blocks of MBIs, including a consolidated set of common data elements for MBI protocols, and core datasets of outcome measures and biomarkers for brain disorders of aging that researchers may select for their studies. Utilization of the guiding principles in this Toolkit will be strongly recommended for NIH-funded studies of MBIs.


Gender and Music Composition: A Study of Music, and the Gendering of Meanings.

  • Desmond C Sergeant‎ et al.
  • Frontiers in psychology‎
  • 2016‎

In this study, claims that music communicates gendered meanings are considered, and relevant literature is reviewed. We first discuss the nature of meaning in music, and how it is constructed and construed. Examples of statements of gendering in the literature are cited, and the problems identified by writers who have questioned their validity are considered. We examine the concepts underlying terminology that has been used in inconsistent and contradictory ways. Three hypotheses are posed and tested by means of two listening tasks. Results are presented that indicate that gendering is not inherent in musical structures, but is contributed to the perceptual event by the listener.


Everyday music in infancy.

  • Jennifer K Mendoza‎ et al.
  • Developmental science‎
  • 2021‎

Infants enculturate to their soundscape over the first year of life, yet theories of how they do so rarely make contact with details about the sounds available in everyday life. Here, we report on properties of a ubiquitous early ecology in which foundational skills get built: music. We captured daylong recordings from 35 infants ages 6-12 months at home and fully double-coded 467 h of everyday sounds for music and its features, tunes, and voices. Analyses of this first-of-its-kind corpus revealed two distributional properties of infants' everyday musical ecology. First, infants encountered vocal music in over half, and instrumental in over three-quarters, of everyday music. Live sources generated one-third, and recorded sources three-quarters, of everyday music. Second, infants did not encounter each individual tune and voice in their day equally often. Instead, the most available identity cumulated to many more seconds of the day than would be expected under a uniform distribution. These properties of everyday music in human infancy are different from what is discoverable in environments highly constrained by context (e.g., laboratories) and time (e.g., minutes rather than hours). Together with recent insights about the everyday motor, language, and visual ecologies of infancy, these findings reinforce an emerging priority to build theories of development that address the opportunities and challenges of real input encountered by real learners.


Cognitive interference can be mitigated by consonant music and facilitated by dissonant music.

  • Nobuo Masataka‎ et al.
  • Scientific reports‎
  • 2013‎

Debates on the origins of consonance and dissonance in music have a long history. While some scientists argue that consonance judgments are an acquired competence based on exposure to the musical-system-specific knowledge of a particular culture, others favor a biological explanation for the observed preference for consonance. Here we provide experimental confirmation that this preference plays an adaptive role in human cognition: it reduces cognitive interference. The results of our experiment reveal that exposure to a Mozart minuet mitigates interference, whereas, conversely, when the music is modified to consist of mostly dissonant intervals the interference effect is intensified.


Exit Music: The Experience of Music Therapy within Medical Assistance in Dying.

  • SarahRose Black‎ et al.
  • Healthcare (Basel, Switzerland)‎
  • 2020‎

Since the 2015 Canadian legalization of medical assistance in dying (MAiD), many Canadian music therapists have become involved in the care of those requesting this procedure. This qualitative study, the first of its kind, examines the experience of music therapy within MAiD, exploring lived experience from three perspectives: the patient, their primary caregiver, and the music therapist/researcher. Overall thematic findings of a hermeneutic phenomenological analysis of ten MAiD cases demonstrate therapeutically beneficial outcomes in terms of quality of life, symptom management, and life review. Further research is merited to continue an exploration of the role of music therapy in the context of assisted dying.


Investigating country-specific music preferences and music recommendation algorithms with the LFM-1b dataset.

  • Markus Schedl‎
  • International journal of multimedia information retrieval‎
  • 2017‎

Recently, the LFM-1b dataset has been proposed to foster research and evaluation in music retrieval and music recommender systems, Schedl (Proceedings of the ACM International Conference on Multimedia Retrieval (ICMR). New York, 2016). It contains more than one billion music listening events created by more than 120,000 users of Last.fm. Each listening event is characterized by artist, album, and track name, and further includes a timestamp. Basic demographic information and a selection of more elaborate listener-specific descriptors are included as well, for anonymized users. In this article, we reveal information about LFM-1b's acquisition and content and we compare it to existing datasets. We furthermore provide an extensive statistical analysis of the dataset, including basic properties of the item sets, demographic coverage, distribution of listening events (e.g., over artists and users), and aspects related to music preference and consumption behavior (e.g., temporal features and mainstreaminess of listeners). Exploiting country information of users and genre tags of artists, we also create taste profiles for populations and determine similar and dissimilar countries in terms of their populations' music preferences. Finally, we illustrate the dataset's usage in a simple artist recommendation task, whose results are intended to serve as baseline against which more elaborate techniques can be assessed.


The language of music: Common neural codes for structured sequences in music and natural language.

  • Jeffrey N Chiang‎ et al.
  • Brain and language‎
  • 2018‎

The ability to process structured sequences is a central feature of natural language but also characterizes many other domains of human cognition. In this fMRI study, we measured brain metabolic response in musicians as they generated structured and non-structured sequences in language and music. We employed a univariate and multivariate cross-classification approach to provide evidence that a common neural code underlies the production of structured sequences across the two domains. Crucially, the common substrate includes Broca's area, a region well known for processing structured sequences in language. These findings have several implications. First, they directly support the hypothesis that language and music share syntactic integration mechanisms. Second, they show that Broca's area is capable of operating supramodally across these two domains. Finally, these results dismiss the recent hypothesis that domain general processes of neighboring neural substrates explain the previously observed "overlap" between neuroimaging activations across the two domains.


Decomposing Complexity Preferences for Music.

  • Yaǧmur Güçlütürk‎ et al.
  • Frontiers in psychology‎
  • 2019‎

Recently, we demonstrated complexity as a major factor for explaining individual differences in visual preferences for abstract digital art. We have shown that participants could best be separated into two groups based on their liking ratings for abstract digital art comprising geometric patterns: one group with a preference for complex visual patterns and another group with a preference for simple visual patterns. In the present study, building on these results, we extended our investigations of complexity preferences from highly controlled visual stimuli to ecologically valid stimuli in the auditory modality. Similar to visual preferences, we showed that music preferences are highly influenced by stimulus complexity. We demonstrated this by clustering a large number of participants based on their liking ratings for song excerpts from various musical genres. Our results show that, based on their liking ratings, participants can best be separated into two groups: one group with a preference for more complex songs and another group with a preference for simpler songs. Finally, we considered various demographic and personal characteristics to explore differences between the groups, and found that, at least for the current data set, age and gender were significant factors separating the two groups.


Protein music of enhanced musicality by music style guided exploration of diverse amino acid properties.

  • Nicole WanNi Tay‎ et al.
  • Heliyon‎
  • 2021‎

Inspired by the traceable analogies between protein sequences and music notes, protein music has been composed from amino acid sequences for popularizing science and sourcing melodies. Despite the continuous development of protein-to-music algorithms, the musicality of protein music lags far behind human music. Musicality may be enhanced by fine-tuned protein-to-music mapping to the features of a specific music style. We analyzed the features of a music style (Fantasy-Impromptu style), and used the quantized musical features to guide broad exploration of diverse amino acid properties (104 properties, sequence patterns and variations) for developing a novel protein-to-music algorithm of enhanced musicality. This algorithm was applied to 18 proteins of various biological functions. The derived music pieces consistently exhibited enhanced musicality with respect to existing protein music. Music style guided exploration of diverse amino acid properties enables protein music composition of enhanced musicality, which may be further developed and applied to a wider variety of music styles.


Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users.

  • Alberte B Seeberg‎ et al.
  • Trends in hearing‎
  • 2023‎

Cochlear implants (CIs) are optimized for speech perception but poor in conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm, presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high already shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.


How Live Music Moves Us: Head Movement Differences in Audiences to Live Versus Recorded Music.

  • Dana Swarbrick‎ et al.
  • Frontiers in psychology‎
  • 2018‎

A live music concert is a pleasurable social event that is among the most visceral and memorable forms of musical engagement. But what inspires listeners to attend concerts, sometimes at great expense, when they could listen to recordings at home? An iconic aspect of popular concerts is engaging with other audience members through moving to the music. Head movements, in particular, reflect emotion and have social consequences when experienced with others. Previous studies have explored the affiliative social engagement experienced among people moving together to music. But live concerts have other features that might also be important, such as that during a live performance the music unfolds in a unique and not predetermined way, potentially increasing anticipation and feelings of involvement for the audience. Being in the same space as the musicians might also be exciting. Here we controlled for simply being in an audience to examine whether factors inherent to live performance contribute to the concert experience. We used motion capture to compare head movement responses at a live album release concert featuring Canadian rock star Ian Fletcher Thornley, and at a concert without the performers where the same songs were played from the recorded album. We also examined effects of a prior connection with the performers by comparing fans and neutral-listeners, while controlling for familiarity with the songs, as the album had not yet been released. Head movements were faster during the live concert than the album-playback concert. Self-reported fans moved faster and exhibited greater levels of rhythmic entrainment than neutral-listeners. These results indicate that live music engages listeners to a greater extent than pre-recorded music and that a pre-existing admiration for the performers also leads to higher engagement.


Comparative Efficacy of Active Group Music Intervention versus Group Music Listening in Alzheimer's Disease.

  • María Gómez-Gallego‎ et al.
  • International journal of environmental research and public health‎
  • 2021‎

Music interventions are promising therapies for the management of symptoms in Alzheimer's disease (AD). Globally, music interventions can be classified as active or receptive depending on the participation of the subjects. Active and receptive music tasks engage different brain areas that might result in distinctive clinical effects. This study aims to compare the clinical effects of two types of music interventions and a control activity.


Music Training and Education Slow the Deterioration of Music Perception Produced by Presbycusis in the Elderly.

  • Felipe N Moreno-Gómez‎ et al.
  • Frontiers in aging neuroscience‎
  • 2017‎

The perception of music depends on the normal function of the peripheral and central auditory system. Aged subjects without hearing loss have altered music perception, including pitch and temporal features. Presbycusis, or age-related hearing loss, is a frequent condition in elderly people, produced by neurodegenerative processes that affect the cochlear receptor cells and brain circuits involved in auditory perception. Clinically, presbycusis patients have bilateral high-frequency hearing loss and deteriorated speech intelligibility. Music impairments in presbycusis subjects can be attributed to the normal aging processes and to presbycusis neuropathological changes. However, whether presbycusis further impairs music perception remains controversial. Here, we developed a computerized version of the Montreal battery of evaluation of amusia (MBEA) and assessed music perception in 175 Chilean adults aged between 18 and 90 years without hearing complaints and in symptomatic presbycusis patients. We give normative data for MBEA performance in a Latin-American population, showing age and educational effects. In addition, we found that symptomatic presbycusis was the most relevant factor determining global MBEA accuracy in aged subjects. Moreover, we show that melodic impairments in presbycusis individuals were diminished by music training, while performance in temporal tasks was affected by the educational level and music training. We conclude that music training and education are important factors as they can slow the deterioration of music perception produced by age-related hearing loss.


Mentalising music in frontotemporal dementia.

  • Laura E Downey‎ et al.
  • Cortex; a journal devoted to the study of the nervous system and behavior‎
  • 2013‎

Despite considerable recent interest, the biological basis and clinical diagnosis of behavioural variant frontotemporal dementia (bvFTD) pose unresolved problems. Mentalising (the cognitive capacity to interpret the behaviour of oneself and others in terms of mental states) is impaired as a prominent feature of bvFTD, consistent with involvement of brain regions including ventro-medial prefrontal cortex (PFC), orbitofrontal cortex and anterior temporal lobes. Here, we investigated mentalising ability in a cohort of patients with bvFTD using a novel modality: music. We constructed a novel neuropsychological battery requiring attribution of affective mental or non-mental associations to musical stimuli. Mentalising performance of patients with bvFTD (n = 20) was assessed in relation to matched healthy control subjects (n = 20); patients also had a comprehensive assessment of behaviour and general neuropsychological functions. Neuroanatomical correlates of performance on the experimental tasks were investigated using voxel-based morphometry of patients' brain magnetic resonance imaging (MRI) scans. Compared to healthy control subjects, patients showed impaired ability to attribute mental states but not non-mental characteristics to music, and this deficit correlated with performance on a standard test of social inference and with carer ratings of patients' empathic capacity, but not with other potentially relevant measures of general neuropsychological function. Mentalising performance in the bvFTD group was associated with grey matter changes in anterior temporal lobe and ventro-medial PFC. These findings suggest that music can represent surrogate mental states and the ability to construct such mental representations is impaired in bvFTD, with potential implications for our understanding of the biology of bvFTD and human social cognition more broadly.


Creating Music With Fuzzy Logic.

  • Rodrigo F Cádiz‎
  • Frontiers in artificial intelligence‎
  • 2020‎

Fuzzy logic is an artificial intelligence technique that has applications in many areas, due to its importance in handling uncertain inputs. Despite the great recent success of other branches of AI, such as deep neural networks, fuzzy logic is still a very powerful machine learning technique, based on expert reasoning, that can be of help in many areas of musical creativity, such as composing music, synthesizing sounds, gestural mappings in electronic instruments, parametric control of sound synthesis, audiovisual content generation or sonification. We propose that fuzzy logic is a very suitable framework for thinking and operating not only with sound and acoustic signals but also with symbolic representations of music. In this article, we discuss the application of fuzzy logic ideas to music, introduce the Fuzzy Logic Control Toolkit, a set of tools to use fuzzy logic inside the MaxMSP real-time sound synthesis environment, and show how some fuzzy logic concepts can be used and incorporated into fields, such as algorithmic composition, sound synthesis and parametric control of computer music. Finally, we discuss the composition of Incerta, an acousmatic multichannel composition as a concrete example of the application of fuzzy concepts to musical creation.


Music Preferences and Personality in Brazilians.

  • Lucia Herrera‎ et al.
  • Frontiers in psychology‎
  • 2018‎

This article analyzes the relationship between musical preference and type of personality in a large group of Brazilian young and adult participants (N = 1050). The study included 25 of 27 states of Brazil and individuals aged between 16 and 71 years (M = 30.87; SD = 10.50). Of these, 500 were male (47.6%) and 550 were female (52.4%). A correlational study was carried out applying two online questionnaires with quality parameters (content-construct validity and reliability), one on musical preference and the other on personality. The results indicate four main findings: (1) the musical listening of the participants is limited to a reduced number of styles, mainly Pop music and others, typical of Brazilian culture; (2) the Brazilian context appears to be a determining factor in the low preference for non-Brazilian music; (3) there is a positive correlation between most personality types analyzed and the Latin, Brazilian, Classical and Ethnic musical styles. A negative correlation between these types of personality and the consumption of Rock music was also observed; (4) musical preferences are driven not only by personality but in some cases they are also driven by socio-demographic variables (i.e., age and gender). Likewise, this work shows how participants make use of music in personality aspects that may be of interest for the analysis of socio-affective behavior (personality) as well as according to different socio-demographic variables (e.g., age and gender). More cross-cultural research on musical preference and personality would need to be carried out from a global perspective, framed in the context of social psychology and studies of mass communication.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org then you can log in from here to get additional features in FDI Lab - SciCrunch.org such as Collections, Saved Searches, and managing Resources.

  4. Searching

    This is the search term currently being executed; you can type in anything you want to search for. Some tips to help with searching are listed below (a few example query strings combining these operators are sketched after this tutorial list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you with to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform here for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help retrieve the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs page to ask questions and see our tutorials. Click this button to view this tutorial again.
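
As a quick illustration of the search tips in item 4 above, here is a minimal sketch of a few example query strings that combine those operators. The queries are hypothetical and purely illustrative; none of them are guaranteed to match documents in this service.

    # Python sketch: hypothetical query strings illustrating the search operators above.
    # These only demonstrate the syntax; they are not guaranteed to return results.
    example_queries = [
        '"music therapy"',      # quotes match the exact phrase
        'music AND emotion',    # explicit AND requires both terms
        'music OR musicality',  # explicit OR matches either term
        'music -review',        # "-" excludes results containing "review"
        'music +fMRI',          # "+" requires the term to appear in the data
    ]

    for query in example_queries:
        print(query)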
