
This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–20 of 631.

Delay of reinforcement versus rate of reinforcement in Pavlovian conditioning.

  • Joseph M Austen‎ et al.
  • Journal of experimental psychology. Animal learning and cognition‎
  • 2019‎

Conditioned stimulus (CS) duration is a determinant of conditioned responding, with increases in duration leading to reductions in response rates. The CS duration effect has been proposed to reflect sensitivity to the reinforcement rate across cumulative exposure to the CS, suggesting that the delay of reinforcement from the onset of the cue is not crucial. Here, we compared the effects of delay and rate of reinforcement on Pavlovian appetitive conditioning in mice. In Experiment 1, the influence of reinforcement delay on the timing of responding was removed by making the duration of cues variable across trials. Mice trained with variable duration cues were sensitive to differences in the rate of reinforcement to a similar extent as mice trained with fixed duration cues. Experiments 2 and 3 tested the independent effects of delay and reinforcement rate. In Experiment 2, food was presented at either the termination of the CS or during the CS. In Experiment 3, food occurred during the CS for all cues. The latter experiment demonstrated an effect of delay, but not reinforcement rate. Experiment 4 ruled out the possibility that the lack of effect of reinforcement rate in Experiment 3 was due to mice failing to learn about the nonreinforced CS exposure after the presentation of food within a trial. These results demonstrate that although the CS duration effect is not simply a consequence of timing of conditioned responses, it is dependent on the delay of reinforcement. The results provide a challenge to current associative and nonassociative, time-accumulation models of learning.


Hippocampal pattern separation supports reinforcement learning.

  • Ian C Ballard‎ et al.
  • Nature communications‎
  • 2019‎

Animals rely on learned associations to make decisions. Associations can be based on relationships between object features (e.g., the three leaflets of poison ivy leaves) and outcomes (e.g., rash). More often, outcomes are linked to multidimensional states (e.g., poison ivy is green in summer but red in spring). Feature-based reinforcement learning fails when the values of individual features depend on the other features present. One solution is to assign value to multi-featural conjunctive representations. Here, we test whether the hippocampus forms separable conjunctive representations that enable the learning of response contingencies for stimuli of the form: AB+, B-, AC-, C+. Pattern analyses on functional MRI data show the hippocampus forms conjunctive representations that are dissociable from feature components and that these representations, along with those of cortex, influence striatal prediction errors. Our results establish a novel role for hippocampal pattern separation and conjunctive representation in reinforcement learning.
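
To make the abstract's learning problem concrete, here is a minimal sketch (illustrative, not the authors' analysis code) of why a feature-based delta-rule learner cannot solve AB+, B-, AC-, C+ while a conjunctive coding of whole compounds can; the encoding and parameters are assumptions:

```python
import numpy as np

# Task from the abstract: AB+ and C+ are rewarded; B- and AC- are not.
trials = [("AB", 1.0), ("B", 0.0), ("AC", 0.0), ("C", 1.0)]
features = {"A": 0, "B": 1, "C": 2}

def encode(stim, conjunctive):
    """Feature coding shares units across compounds; conjunctive coding
    gives each compound its own unit (a stand-in for pattern separation)."""
    if conjunctive:
        x = np.zeros(len(trials))
        x[[s for s, _ in trials].index(stim)] = 1.0
    else:
        x = np.zeros(len(features))
        for f in stim:
            x[features[f]] = 1.0
    return x

def train(conjunctive, alpha=0.1, epochs=2000):
    w = np.zeros(len(trials) if conjunctive else len(features))
    for _ in range(epochs):
        for stim, r in trials:
            x = encode(stim, conjunctive)
            w += alpha * (r - w @ x) * x          # delta rule
    return max(abs(r - w @ encode(s, conjunctive)) for s, r in trials)

print("feature-based worst-case error:", round(train(False), 2))  # stays ~0.5: no linear solution
print("conjunctive worst-case error:  ", round(train(True), 3))   # goes to ~0: solvable
```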


Orbitofrontal Circuits Control Multiple Reinforcement-Learning Processes.

  • Stephanie M Groman‎ et al.
  • Neuron‎
  • 2019‎

Adaptive decision making in dynamic environments requires multiple reinforcement-learning steps that may be implemented by dissociable neural circuits. Here, we used a novel directionally specific viral ablation approach to investigate the function of several anatomically defined orbitofrontal cortex (OFC) circuits during adaptive, flexible decision making in rats trained on a probabilistic reversal learning task. Ablation of OFC neurons projecting to the nucleus accumbens selectively disrupted performance following a reversal, by disrupting the use of negative outcomes to guide subsequent choices. Ablation of amygdala neurons projecting to the OFC also impaired reversal performance, but due to disruptions in the use of positive outcomes to guide subsequent choices. Ablation of OFC neurons projecting to the amygdala, by contrast, enhanced reversal performance by destabilizing action values. Our data are inconsistent with a unitary function of the OFC in decision making. Rather, distinct OFC-amygdala-striatal circuits mediate distinct components of the action-value updating and maintenance necessary for decision making.


Testing the reinforcement learning hypothesis of social conformity.

  • Marie Levorsen‎ et al.
  • Human brain mapping‎
  • 2021‎

Our preferences are influenced by the opinions of others. Past human neuroimaging studies on social conformity have identified a network of brain regions related to social conformity that includes the posterior medial frontal cortex (pMFC), anterior insula, and striatum. Since these brain regions are also known to play important roles in reinforcement learning (i.e., processing prediction error), it was previously hypothesized that social conformity and reinforcement learning have a common neural mechanism. However, although this view is currently widely accepted, these two processes have never been directly compared; therefore, the extent to which they share a common neural mechanism has remained unclear. This study aimed to formally test the hypothesis. The same group of participants (n = 25) performed social conformity and reinforcement learning tasks inside a functional magnetic resonance imaging (fMRI) scanner. Univariate fMRI data analyses revealed activation overlaps in the pMFC and bilateral insula between social conflict and unsigned prediction error and in the striatum between social conflict and signed prediction error. We further conducted multivoxel pattern analysis (MVPA) for more direct evidence of a shared neural mechanism. MVPA did not reveal any evidence to support the hypothesis in any of these regions but found that activation patterns between social conflict and prediction error in these regions were largely distinct. Taken together, the present study provides no clear evidence of a common neural mechanism between social conformity and reinforcement learning.


Generalization of value in reinforcement learning by humans.

  • G Elliott Wimmer‎ et al.
  • The European journal of neuroscience‎
  • 2012‎

Research in decision-making has focused on the role of dopamine and its striatal targets in guiding choices via learned stimulus-reward or stimulus-response associations, behavior that is well described by reinforcement learning theories. However, basic reinforcement learning is relatively limited in scope and does not explain how learning about stimulus regularities or relations may guide decision-making. A candidate mechanism for this type of learning comes from the domain of memory, which has highlighted a role for the hippocampus in learning of stimulus-stimulus relations, typically dissociated from the role of the striatum in stimulus-response learning. Here, we used functional magnetic resonance imaging and computational model-based analyses to examine the joint contributions of these mechanisms to reinforcement learning. Humans performed a reinforcement learning task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. On each trial participants chose one of four options, but the reward probabilities for pairs of options were correlated across trials. This (uninstructed) relationship between pairs of options potentially enabled an observer to learn about option values based on experience with the other options and to generalize across them. We observed blood oxygen level-dependent (BOLD) activity related to learning in the striatum and also in the hippocampus. By comparing a basic reinforcement learning model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of reinforcement learning and striatal BOLD, both choices and striatal BOLD activity were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants' choice. Our results thus point toward an interactive model in which striatal reinforcement learning systems may employ relational representations typically associated with the hippocampus.
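
A minimal sketch of the augmented model's key move, assuming (as one concrete instantiation) that paired options had anticorrelated reward probabilities; setting alpha_gen to zero recovers the basic reinforcement learning baseline. Names and parameters are illustrative:

```python
def generalizing_update(Q, choice, r, partner, alpha=0.2, alpha_gen=0.1):
    """One trial: update the chosen option from feedback, and generalize
    the complementary outcome to its correlated partner option."""
    Q[choice] += alpha * (r - Q[choice])               # standard delta rule
    Q[partner] += alpha_gen * ((1 - r) - Q[partner])   # inferred partner outcome
    return Q
```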


Positive and negative reinforcement activate human auditory cortex.

  • Tina Weis‎ et al.
  • Frontiers in human neuroscience‎
  • 2013‎

Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial, in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cent) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent on the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplemental motor area, here however neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices, as previously shown for gaining a reward is also seen during avoiding a loss.


Offline replay supports planning in human reinforcement learning.

  • Ida Momennejad‎ et al.
  • eLife‎
  • 2018‎

Making decisions in sequentially structured tasks requires integrating distally acquired information. The extensive computational cost of such integration challenges planning methods that integrate online, at decision time. Furthermore, it remains unclear whether 'offline' integration during replay supports planning, and if so which memories should be replayed. Inspired by machine learning, we propose that (a) offline replay of trajectories facilitates integrating representations that guide decisions, and (b) unsigned prediction errors (uncertainty) trigger such integrative replay. We designed a 2-step revaluation task for fMRI, whereby participants needed to integrate changes in rewards with past knowledge to optimally replan decisions. As predicted, we found that (a) multi-voxel pattern evidence for off-task replay predicts subsequent replanning; (b) neural sensitivity to uncertainty predicts subsequent replay and replanning; (c) off-task hippocampus and anterior cingulate activity increase when revaluation is required. These findings elucidate how the brain leverages offline mechanisms in planning and goal-directed behavior under uncertainty.
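
One way to cash out proposals (a) and (b) in code is a Dyna-style offline pass in which stored transitions are replayed only when their unsigned prediction error is large; this is a hedged sketch of the idea, not the authors' model code:

```python
import numpy as np

def offline_replay(Q, memory, alpha=0.1, gamma=0.95, threshold=0.1):
    """memory: list of (state, action, reward, next_state) tuples.
    Unsigned prediction error (uncertainty) gates which memories replay."""
    for s, a, r, s_next in memory:
        delta = r + gamma * Q[s_next].max() - Q[s, a]
        if abs(delta) > threshold:      # uncertainty triggers integrative replay
            Q[s, a] += alpha * delta    # integrate offline, before the next decision
    return Q
```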


Sex differences in sucrose reinforcement in Long-Evans rats.

  • Jeffrey W Grimm‎ et al.
  • Biology of sex differences‎
  • 2022‎

There are sex differences in addiction behaviors. To develop a pre-clinical animal model to investigate this, the present study examined sex differences in sucrose taking and seeking using Long-Evans rats.


Emergence of belief-like representations through reinforcement learning.

  • Jay A Hennig‎ et al.
  • PLoS computational biology‎
  • 2023‎

To behave adaptively, animals must learn to predict future reward, or value. To do this, animals are thought to learn reward predictions using reinforcement learning. However, in contrast to classical models, animals must learn to estimate value using only incomplete state information. Previous work suggests that animals estimate value in partially observable tasks by first forming "beliefs"-optimal Bayesian estimates of the hidden states in the task. Although this is one way to solve the problem of partial observability, it is not the only way, nor is it the most computationally scalable solution in complex, real-world environments. Here we show that a recurrent neural network (RNN) can learn to estimate value directly from observations, generating reward prediction errors that resemble those observed experimentally, without any explicit objective of estimating beliefs. We integrate statistical, functional, and dynamical systems perspectives on beliefs to show that the RNN's learned representation encodes belief information, but only when the RNN's capacity is sufficiently large. These results illustrate how animals can estimate value in tasks without explicitly estimating beliefs, yielding a representation useful for systems with limited capacity.
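
A minimal sketch of such a network, assuming a GRU and semi-gradient TD(0) training (the paper's exact architecture and objective may differ):

```python
import torch
import torch.nn as nn

class ValueRNN(nn.Module):
    """Estimates value directly from observations; no explicit belief state."""
    def __init__(self, obs_dim, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, 1)

    def forward(self, obs):                     # obs: (batch, time, obs_dim)
        h, _ = self.rnn(obs)
        return self.readout(h).squeeze(-1)      # value estimate per time step

def td_loss(model, obs, rewards, gamma=0.98):
    """Semi-gradient TD(0): the bootstrap target is detached, and the TD
    error plays the role of the reward prediction error."""
    v = model(obs)
    target = rewards[:, :-1] + gamma * v[:, 1:].detach()
    return ((v[:, :-1] - target) ** 2).mean()
```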


Instructed motivational states bias reinforcement learning and memory formation.

  • Alyssa H Sinclair‎ et al.
  • Proceedings of the National Academy of Sciences of the United States of America‎
  • 2023‎

Motivation influences goals, decisions, and memory formation. Imperative motivation links urgent goals to actions, narrowing the focus of attention and memory. Conversely, interrogative motivation integrates goals over time and space, supporting rich memory encoding for flexible future use. We manipulated motivational states via cover stories for a reinforcement learning task: The imperative group imagined executing a museum heist, whereas the interrogative group imagined planning a future heist. Participants repeatedly chose among four doors, representing different museum rooms, to sample trial-unique paintings with variable rewards (later converted to bonus payments). The next day, participants performed a surprise memory test. Crucially, only the cover stories differed between the imperative and interrogative groups; the reinforcement learning task was identical, and all participants had the same expectations about how and when bonus payments would be awarded. In an initial sample and a preregistered replication, we demonstrated that imperative motivation increased exploitation during reinforcement learning. Conversely, interrogative motivation increased directed (but not random) exploration, despite the cost to participants' earnings. At test, the interrogative group was more accurate at recognizing paintings and recalling associated values. In the interrogative group, higher value paintings were more likely to be remembered; imperative motivation disrupted this effect of reward modulating memory. Overall, we demonstrate that a prelearning motivational manipulation can bias learning and memory, bearing implications for education, behavior change, clinical interventions, and communication.
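
The directed-versus-random distinction can be operationalized as in the sketch below: an uncertainty bonus produces directed exploration, while the softmax temperature controls random exploration. The parameterization is illustrative, not the paper's fitted model:

```python
import numpy as np

def choose(Q, n_samples, beta=3.0, bonus=0.5, rng=np.random.default_rng()):
    """Softmax choice over values plus an information bonus.
    bonus > 0 -> directed exploration (prefer poorly sampled options);
    lower beta -> more random exploration."""
    uncertainty = 1.0 / np.sqrt(np.maximum(n_samples, 1))
    u = Q + bonus * uncertainty
    p = np.exp(beta * (u - u.max()))
    p /= p.sum()
    return rng.choice(len(Q), p=p)
```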


Asymmetric and adaptive reward coding via normalized reinforcement learning.

  • Kenway Louie‎
  • PLoS computational biology‎
  • 2022‎

Learning is widely modeled in psychology, neuroscience, and computer science by prediction error-guided reinforcement learning (RL) algorithms. While standard RL assumes linear reward functions, reward-related neural activity is a saturating, nonlinear function of reward; however, the computational and behavioral implications of nonlinear RL are unknown. Here, we show that nonlinear RL incorporating the canonical divisive normalization computation introduces an intrinsic and tunable asymmetry in prediction error coding. At the behavioral level, this asymmetry explains empirical variability in risk preferences typically attributed to asymmetric learning rates. At the neural level, diversity in asymmetries provides a computational mechanism for recently proposed theories of distributional RL, allowing the brain to learn the full probability distribution of future rewards. This behavioral and computational flexibility argues for an incorporation of biologically valid value functions in computational models of learning and decision-making.
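
A hedged sketch of the core computation, assuming one simple divisive-normalization form (the paper's exact model may differ): because the normalized reward saturates, equal-sized gains and losses yield prediction errors of unequal magnitude, and varying the normalization parameters tunes that asymmetry:

```python
def normalized_rl_update(V, r, r_context, alpha=0.1, sigma=1.0):
    """r_context: running estimate of recent rewards; sigma: semi-saturation
    constant. Reward is coded on a saturating, normalized scale."""
    r_norm = r / (sigma + r_context)    # divisively normalized reward
    delta = r_norm - V                  # prediction error on the normalized scale
    return V + alpha * delta
```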


Asymmetric reinforcement learning facilitates human inference of transitive relations.

  • Simon Ciranka‎ et al.
  • Nature human behaviour‎
  • 2022‎

Humans and other animals are capable of inferring never-experienced relations (for example, A > C) from other relational observations (for example, A > B and B > C). The processes behind such transitive inference are subject to intense research. Here we demonstrate a new aspect of relational learning, building on previous evidence that transitive inference can be accomplished through simple reinforcement learning mechanisms. We show in simulations that inference of novel relations benefits from an asymmetric learning policy, where observers update only their belief about the winner (or loser) in a pair. Across four experiments (n = 145), we find substantial empirical support for such asymmetries in inferential learning. The learning policy favoured by our simulations and experiments gives rise to a compression of values that is routinely observed in psychophysics and behavioural economics. In other words, a seemingly biased learning strategy that yields well-known cognitive distortions can be beneficial for transitive inferential judgements.
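
The asymmetric policy is simple to state in code; a minimal sketch with illustrative parameter values, and the symmetric alternative noted in the comment:

```python
def asymmetric_update(Q, winner, loser, alpha=0.2):
    """After observing winner > loser, update only the winner's value,
    treating a win as reward 1. A symmetric learner would also apply
    Q[loser] += alpha * (0 - Q[loser])."""
    Q[winner] += alpha * (1.0 - Q[winner])
    return Q
```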


Selective reinforcement of conflict processing in the Stroop task.

  • Arthur Prével‎ et al.
  • PloS one‎
  • 2021‎

Motivation signals have been shown to influence the engagement of cognitive control processes. However, most studies focus on the invigorating effect of reward prospect, rather than the reinforcing effect of reward feedback. The present study aimed to test whether people strategically adapt conflict processing when confronted with condition-specific congruency-reward contingencies in a manual Stroop task. Results show that the size of the Stroop effect can be affected by selectively rewarding responses following incongruent versus congruent trials. However, our findings also suggest important boundary conditions. Our first two experiments only show a modulation of the Stroop effect in the first half of the experimental blocks, possibly due to our adaptive threshold procedure demotivating adaptive behavior over time. The third experiment showed an overall modulation of the Stroop effect, but did not find evidence for a similar modulation on test items, leaving open whether this effect generalizes to the congruency conditions, or is stimulus-specific. More generally, our results are consistent with computational models of cognitive control and support contemporary learning perspectives on cognitive control. The findings also offer new guidelines and directions for future investigations on the selective reinforcement of cognitive control processes.


Synergy of Distinct Dopamine Projection Populations in Behavioral Reinforcement.

  • Gabriel Heymann‎ et al.
  • Neuron‎
  • 2020‎

Dopamine neurons of the ventral tegmental area (VTA) regulate reward association and motivation. It remains unclear whether there are distinct dopamine populations to mediate these functions. Using mouse genetics, we isolated two populations of dopamine-producing VTA neurons with divergent projections to the nucleus accumbens (NAc) core and shell. Inhibition of VTA-core-projecting neurons disrupted Pavlovian reward learning, and activation of these cells promoted the acquisition of an instrumental response. VTA-shell-projecting neurons did not regulate Pavlovian reward learning and could not facilitate acquisition of an instrumental response, but their activation could drive robust responding in a previously learned instrumental task. Both populations are activated simultaneously by cues, actions, and rewards, and this co-activation is required for robust reinforcement of behavior. Thus, there are functionally distinct dopamine populations in the VTA for promoting motivation and reward association, which operate on the same timescale to optimize behavioral reinforcement.


Human Choice Predicted by Obtained Reinforcers, Not by Reinforcement Predictors.

  • Jessica P Stagner‎ et al.
  • Frontiers in psychology‎
  • 2020‎

Macphail (1985) proposed that "intelligence" should not vary across vertebrate species when contextual variables are accounted for. Focusing on research involving choice behavior, the propensity for choosing an option that produces stimuli that predict the presence or absence of reinforcement but that also results in less food over time can be examined. This choice preference has been found multiple times in pigeons (Stagner and Zentall, 2010; Zentall and Stagner, 2011; Laude et al., 2014) and has been likened to gambling behavior demonstrated by humans (Zentall, 2014, 2016). The present experiments used a similarly structured task to examine adult human preferences for reinforcement predictors and compared findings to choice behavior demonstrated by children (Lalli et al., 2000), monkeys (Smith et al., 2017; Smith and Beran, 2020), dogs (Jackson et al., 2020), rats (Chow et al., 2017; Cunningham and Shahan, 2019; Jackson et al., 2020), and pigeons (Roper and Zentall, 1999; Stagner and Zentall, 2010). In Experiment 1, adult human participants showed no preference for reinforcement predictors. Results from Experiment 2 suggest that not only were reinforcement predictors not preferred, but that perhaps reinforcement predictors had no effect at all on choice behavior. Results from Experiments 1 and 2 were further assessed using a generalized matching equation, the findings from which indicate that adult human choice behavior in the present research was largely determined by reinforcement history. Overall, the present results obtained from human adult participants differ from those found in pigeons in particular, suggesting that further examination of Macphail's (1985) hypothesis is warranted.
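
For reference, the generalized matching equation (Baum, 1974) relates response ratios to obtained reinforcer ratios; a sensitivity near 1 with little bias indicates choice tracking obtained reinforcement, the pattern reported here for adults. A minimal fitting sketch, assumed to match the study's analysis only in general form:

```python
import numpy as np

def generalized_matching_fit(B1, B2, R1, R2):
    """Fit log(B1/B2) = a * log(R1/R2) + log(b).
    B1, B2: response counts; R1, R2: obtained reinforcers (per condition).
    Returns sensitivity a and bias b."""
    x = np.log(np.asarray(R1) / np.asarray(R2))
    y = np.log(np.asarray(B1) / np.asarray(B2))
    a, log_b = np.polyfit(x, y, 1)
    return a, np.exp(log_b)
```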


The relationship between reinforcement and explicit control during visuomotor adaptation.

  • Olivier Codol‎ et al.
  • Scientific reports‎
  • 2018‎

The motor system's ability to adapt to environmental changes is essential for maintaining accurate movements. Such adaptation recruits several distinct systems: cerebellar sensory-prediction error learning, success-based reinforcement, and explicit control. Although much work has focused on the relationship between cerebellar learning and explicit control, there is little research regarding how reinforcement and explicit control interact. To address this, participants first learnt a 20° visuomotor displacement. After reaching asymptotic performance, binary, hit-or-miss feedback (BF) was introduced either with or without visual feedback, the latter promoting reinforcement. Subsequently, retention was assessed using no-feedback trials, with half of the participants in each group being instructed to stop aiming off target. Although BF led to an increase in retention of the visuomotor displacement, instructing participants to stop re-aiming nullified this effect, suggesting explicit control is critical to BF-based reinforcement. In a second experiment, we prevented the expression or development of explicit control during BF performance, by either constraining participants to a short preparation time (expression) or by introducing the displacement gradually (development). Both manipulations strongly impaired BF performance, suggesting reinforcement requires both recruitment and expression of an explicit component. These results emphasise the pivotal role explicit control plays in reinforcement-based motor learning.


Dopamine regulates decision thresholds in human reinforcement learning in males.

  • Karima Chakroun‎ et al.
  • Nature communications‎
  • 2023‎

Dopamine fundamentally contributes to reinforcement learning, but recent accounts also suggest a contribution to specific action selection mechanisms and the regulation of response vigour. Here, we examine dopaminergic mechanisms underlying human reinforcement learning and action selection via a combined pharmacological neuroimaging approach in male human volunteers (n = 31, within-subjects; Placebo, 150 mg of the dopamine precursor L-dopa, 2 mg of the D2 receptor antagonist Haloperidol). We found little credible evidence for previously reported beneficial effects of L-dopa vs. Haloperidol on learning from gains and altered neural prediction error signals, which may be partly due to differences in experimental design and/or drug dosages. Reinforcement learning drift diffusion models account for learning-related changes in accuracy and response times, and reveal consistent decision threshold reductions under both drugs, in line with the idea that lower dosages of D2 receptor antagonists increase striatal DA release via an autoreceptor-mediated feedback mechanism. These results are in line with the idea that dopamine regulates decision thresholds during reinforcement learning, and may help to bridge action selection and response vigour accounts of dopamine.


Sequential decisions: a computational comparison of observational and reinforcement accounts.

  • Nazanin Mohammadi Sepahvand‎ et al.
  • PloS one‎
  • 2014‎

Right brain damaged patients show impairments in sequential decision making tasks that healthy people perform without difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.


A reinforcement learning diffusion decision model for value-based decisions.

  • Laura Fontanesi‎ et al.
  • Psychonomic bulletin & review‎
  • 2019‎

Psychological models of value-based decision-making describe how subjective values are formed and mapped to single choices. Recently, additional efforts have been made to describe the temporal dynamics of these processes by adopting sequential sampling models from the perceptual decision-making tradition, such as the diffusion decision model (DDM). These models, when applied to value-based decision-making, allow mapping of subjective values not only to choices but also to response times. However, very few attempts have been made to adapt these models to situations in which decisions are followed by rewards, thereby producing learning effects. In this study, we propose a new combined reinforcement learning diffusion decision model (RLDDM) and test it on a learning task in which pairs of options differ with respect to both value difference and overall value. We found that participants became more accurate and faster with learning, responded faster and more accurately when options had more dissimilar values, and decided faster when confronted with more attractive (i.e., overall more valuable) pairs of options. We demonstrate that the suggested RLDDM can accommodate these effects and does so better than previously proposed models. To gain a better understanding of the model dynamics, we also compare it to standard DDMs and reinforcement learning models. Our work is a step forward towards bridging the gap between two traditions of decision-making research.
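
A minimal sketch of the RLDDM's central coupling, in which the learned value difference sets the drift rate of the diffusion process; boundary, scaling, and reward parameters below are illustrative, not the paper's estimates:

```python
import numpy as np
rng = np.random.default_rng(0)

def rlddm_trial(Q, pair, p_reward, w=2.0, a=1.5, t0=0.3, alpha=0.1, dt=0.001):
    """One trial: diffusion-to-bound choice driven by the Q-value difference,
    followed by a delta-rule update. Returns (choice, response time)."""
    i, j = pair
    v = w * (Q[i] - Q[j])                   # drift rate from value difference
    x, t = 0.0, 0.0
    while abs(x) < a / 2:                   # accumulate noisy evidence to a bound
        x += v * dt + np.sqrt(dt) * rng.normal()
        t += dt
    choice = i if x > 0 else j
    r = float(rng.random() < p_reward[choice])  # p_reward: assumed task probabilities
    Q[choice] += alpha * (r - Q[choice])    # learning after feedback
    return choice, t0 + t
```

As the Q-values separate with learning, the drift rate grows, reproducing the faster and more accurate responding described above.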


Varenicline rescues nicotine-induced decrease in motivation for sucrose reinforcement.

  • Erin Hart‎ et al.
  • Behavioural brain research‎
  • 2021‎

Varenicline is one of the top medications used for smoking cessation and is often prescribed before termination of nicotine use. The effect of this combined nicotine and varenicline use on the reward system and motivation for primary reinforcement is underexplored. The goal of this study was to assess the effects of nicotine and varenicline on motivation for a food reinforcer. In Experiment 1, we first assessed the responding for sucrose after pretreatment with nicotine (0, 0.1, or 0.4 mg/kg) and varenicline (0.0, 0.1, 1.0 mg/kg) using a behavioral economics approach. The responding for sucrose was then assessed using a progressive ratio schedule of reinforcement after pretreatment with all possible combinations of nicotine and varenicline doses. In Experiment 2, rats were assessed for the consumption of sucrose in home cages after pretreatment with nicotine and varenicline. We found that (a) nicotine decreased economic demand for sucrose, (b) varenicline rescued nicotine-induced reduction in economic demand for sucrose, and (c) history of varenicline treatment predicted responding for sucrose on a progressive ratio schedule of reinforcement where rats with a history of varenicline treatment responded significantly lower for sucrose across nicotine doses than rats that had not been exposed to varenicline. The results of Experiment 2 largely confirmed that nicotine decreases motivation for sucrose using a passive consumption protocol and that varenicline rescues this effect. Overall, these findings suggest that varenicline interacts with the effects of nicotine by restoring nicotine-induced reduction in motivation for appetitive rewards.
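
The "behavioral economics approach" to demand is commonly formalized with the exponential demand equation (Hursh & Silberberg, 2008); whether this exact equation matches the study's analysis is an assumption. A minimal sketch:

```python
import numpy as np

def exponential_demand(C, Q0, alpha_d, k=2.0):
    """log10(Q) = log10(Q0) + k * (exp(-alpha_d * Q0 * C) - 1)
    C: price (responses per reinforcer); Q0: consumption at zero price;
    alpha_d: rate of decline in consumption (a nicotine-induced decrease in
    motivation would appear as a higher alpha_d)."""
    return 10 ** (np.log10(Q0) + k * (np.exp(-alpha_d * Q0 * np.asarray(C)) - 1))
```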


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to execute your search. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching:

    1. Use quotes around phrases you want to match exactly
    2. You can manually add AND or OR between terms to change how we combine words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    From here, you can save any searches you perform for quick access later.

  6. Query Expansion

    We recognize your search term and include synonyms and inferred terms alongside it to help retrieve the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.

Publications Per Year

[Chart: publication count by year for the current result set.]