Research shows cognitive and neurobiological overlap between sign-tracking [value-modulated attentional capture (VMAC) by response-irrelevant, discrete cues] and maladaptive behaviour (e.g. substance abuse). We investigated the neural correlates of sign-tracking in 20 adults using an additional singleton task (AST) and functional magnetic resonance imaging (fMRI). Participants responded to a target to win monetary reward, the amount of which was signalled by singleton type (reward cue: high value vs. low value). Singleton responses resulted in monetary deductions. Sign-tracking, i.e. greater distraction by high-value vs. low-value singletons (H > L), was observed, with high-value singletons producing slower responses to the target than low-value singletons. Controlling for age and sex, analyses revealed no differential brain activity across H > L singletons. Including sign-tracking as a regressor of interest revealed increased activity (H > L singletons) in cortico-subcortical loops, regions associated with Pavlovian conditioning, reward processing, attention shifts and relative value coding. Further analyses investigated responses to reward feedback (H > L). Controlling for age and sex, increased activity (H > L reward feedback) was found in regions associated with reward anticipation, attentional control, success monitoring and emotion regulation. Including sign-tracking as a regressor of interest revealed increased activity in the temporal pole, a region related to value discrimination. Results suggest sign-tracking is associated with activation of the 'attention and salience network' in response to reward cues but not reward feedback, suggesting parcellation between the two at the level of the brain. Results add to the literature showing considerable overlap in neural systems implicated in reward processing, learning, habit formation, emotion regulation and substance craving.
Neurons in the orbitofrontal cortex (OFC) fire in anticipation of and during rewards. Such firing has been suggested to encode reward predictions and to account in some way for the role of this area in adaptive behavior and learning. However, it has also been reported that neural activity in OFC reflects reward prediction errors, which might drive learning directly. Here we tested this question by analyzing the firing of OFC neurons recorded in an odor discrimination task in which rats were trained to sample odor cues and respond left or right on each trial for reward. Neurons were recorded across blocks of trials in which we switched either the number or the flavor of the reward delivered in each well. Previously we have described how neurons in this dataset fired to the predictive cues (Stalnaker et al., 2014); here we focused on the firing in anticipation of and just after delivery of each drop of reward, looking specifically for differences in firing based on whether the reward number or flavor was unexpected or expected. Unlike dopamine neurons recorded in this setting, which exhibited phasic error-like responses after surprising changes in either reward number or reward flavor (Takahashi et al., 2017), OFC neurons showed no such error correlates and instead fired in a way that reflected reward predictions.
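The contrast drawn above, between activity that merely tracks reward predictions and activity that signals prediction errors which drive learning directly, can be made concrete with a toy error-driven update. Below is a minimal sketch (illustrative only, not the study's analysis; the learning rate is an arbitrary assumed value):

```python
# Rescorla-Wagner-style update: a reward prediction error (actual minus
# expected reward) directly drives learning of a cue's value.
ALPHA = 0.1  # learning rate (hypothetical value, for illustration)

def rw_update(value, reward, alpha=ALPHA):
    """Return (new_value, prediction_error) after one trial."""
    delta = reward - value            # prediction error
    return value + alpha * delta, delta

# A cue repeatedly paired with reward 1.0: the error shrinks as the
# prediction approaches the reward, so error-driven updating fades out.
v = 0.0
errors = []
for _ in range(50):
    v, d = rw_update(v, 1.0)
    errors.append(d)
```

A neuron whose firing tracked `delta` would respond strongly only when reward was surprising, whereas one tracking `v` would fire in proportion to the learned prediction; the study reports that OFC neurons behaved like the latter.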
Impulsive behavior during adolescence may stem from developmental imbalances between motivational and cognitive-control systems, producing greater urges to pursue reward and weakened capacities to inhibit such actions. Here, we developed a Pavlovian-instrumental transfer (PIT) protocol to assay rats' ability to suppress cue-motivated reward seeking based on changes in reward expectancy. Traditionally, PIT studies focus on how reward-predictive cues motivate instrumental reward-seeking behavior (lever pressing). However, cues signaling imminent reward delivery also elicit countervailing focal-search responses (food-port entry). We first examined how reward expectancy (cue-reward probability) influences expression of these competing behaviors. Adult male rats increased rates of lever pressing when presented with cues signaling lower probabilities of reward but focused their activity at the food cup on trials with cues that signaled higher probabilities of reward. We then compared adolescent and adult male rats in their responsivity to cues signaling different reward probabilities. In contrast to adults, adolescent rats did not flexibly adjust patterns of responding based on the expected likelihood of reward delivery but increased their rate of lever pressing for both weak and strong cues. These findings indicate that control over cue-motivated behavior is fundamentally dysregulated during adolescence, providing a model for studying neurobiological mechanisms of adolescent impulsivity.
An important symptom of major depressive disorder (MDD) is the inability to experience pleasure, possibly due to a dysfunction of the reward system. Despite promising insights regarding impaired reward-related processing in MDD, circuit-level abnormalities remain largely unexplored. Furthermore, whereas studies contrasting experimental conditions from incentive tasks have revealed important information about reward processing, temporal difference modeling of reward-related prediction error (PE) signals might give a more accurate representation of the reward system. We used a monetary incentive delay task during functional MRI scanning to explore PE-related striatal and ventral tegmental area (VTA) activation in response to anticipation and delivery of monetary rewards in 24 individuals with MDD versus 24 healthy controls (HCs). Furthermore, we investigated group differences in temporal difference related connectivity with a generalized psychophysiological interaction (gPPI) analysis with the VTA, ventral striatum (VS) and dorsal striatum (DS) as seeds during reward versus neutral, both in anticipation and delivery. Relative to HCs, MDD patients displayed a trend-level (p = 0.052) decrease in temporal difference-related activation in the VS during reward anticipation and delivery combined. Moreover, gPPI analyses revealed that during reward anticipation, MDD patients exhibited decreased functional connectivity between the VS and anterior cingulate cortex / medial prefrontal cortex, anterior cingulate gyrus, angular/middle orbital gyrus, left insula, superior/middle frontal gyrus (SFG/MFG) and precuneus/superior occipital gyrus/cerebellum compared to HC. Moreover, MDD patients showed decreased functional connectivity between the VTA and left insula compared to HC during reward anticipation. 
Exploratory analysis separating medication-free patients from patients using antidepressants revealed that these decreased functional connectivity patterns were mainly apparent in the MDD subgroup using antidepressants. These results suggest that MDD is characterized by alterations in reward-circuit connectivity rather than isolated activation impairments. These findings are an important extension of the existing literature, since an improved understanding of the neural pathways underlying depression-related reward dysfunction may aid currently unmet diagnostic and therapeutic efforts.
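Temporal-difference modeling of the kind referenced above assigns a prediction error to both the anticipation (cue) and delivery (outcome) phases of a trial. A minimal sketch of how such PE signals arise over learning (hypothetical parameters and a two-state trial structure; not the paper's actual model):

```python
# Temporal-difference prediction error: delta = r + gamma*V(next) - V(current).
GAMMA = 1.0   # no discounting over the single cue->outcome step (assumption)
ALPHA = 0.2   # learning rate (assumed value)

def td_error(v_now, v_next, reward, gamma=GAMMA):
    """TD prediction error for one state transition."""
    return reward + gamma * v_next - v_now

# Two-state trial: 'cue' -> 'outcome'. Reward arrives at the outcome state.
V = {"cue": 0.0, "outcome": 0.0}
cue_errors = []
for _ in range(100):
    d_cue = td_error(V["cue"], V["outcome"], 0.0)   # PE at cue (anticipation)
    V["cue"] += ALPHA * d_cue
    d_out = td_error(V["outcome"], 0.0, 1.0)        # PE at reward delivery
    V["outcome"] += ALPHA * d_out
    cue_errors.append(d_cue)
```

With training, the delivery-phase error shrinks while the cue comes to predict the reward; model-based PE regressors of this kind are what the gPPI and activation analyses above are built on.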
Prediction about outcomes constitutes a basic mechanism underlying informed economic decision making. A stimulus constitutes a reward predictor when it provides more information about the reward than the environmental background. Reward prediction can be manipulated in two ways, by varying the reward paired with the stimulus, as done traditionally in neurophysiological studies, and by varying the background reward while holding stimulus-reward pairing constant. Neuronal mechanisms involved in reward prediction should also be sensitive to changes in background reward independently of stimulus-reward pairing. We tested this assumption on a major brain structure involved in reward processing, the central and basolateral amygdala. In a 2 x 2 design, we examined the influence of rewarded and unrewarded backgrounds on neuronal responses to rewarded and unrewarded visual stimuli. Indeed, responses to the unchanged rewarded stimulus depended crucially on background reward in a population of amygdala neurons. Elevating background reward to the level of the rewarded stimulus extinguished these responses, and lowering background reward again reinstated the responses without changes in stimulus-reward pairing. None of these neurons responded specifically to an inhibitory stimulus predicting less reward compared with background (negative contingency). A smaller group of amygdala neurons maintained stimulus responses irrespective of background reward, possibly reflecting stimulus-reward pairing or visual sensory processes without reward prediction. Thus in being sensitive to background reward, the responses of a population of amygdala neurons to phasic stimuli appeared to follow the full criteria for excitatory reward prediction (positive contingency) rather than reflecting simple stimulus-reward pairing (contiguity).
The neural circuitry underlying behavior in reward loss situations is poorly understood. We considered two such situations: reward devaluation (from large to small rewards) and reward omission (from large rewards to no rewards). There is evidence that the central nucleus of the amygdala (CeA) plays a role in the negative emotion accompanying reward loss. However, little is known about the function of the basolateral nucleus (BLA) in reward loss. Two hypotheses of BLA function in reward loss, negative emotion and reward comparisons, were tested in an experiment involving pretraining excitotoxic BLA lesions followed by training in four tasks: consummatory successive negative contrast (cSNC), autoshaping (AS) acquisition and extinction, anticipatory negative contrast (ANC), and open field testing (OF). Cell counts in the BLA (but not in the CeA) were significantly lower in animals with lesions vs. shams. BLA lesions eliminated cSNC and ANC, and accelerated extinction of lever pressing in AS. BLA lesions had no effect on OF testing: higher activity in the periphery than in the central area. This pattern of results provides support for the hypothesis that BLA neurons are important for reward comparison. The three affected tasks (cSNC, ANC, and AS extinction) involve reward comparisons. However, ANC does not seem to involve negative emotions and it was affected, whereas OF activity is known to involve negative emotion, but it was not affected. It is hypothesized that a circuit involving the thalamus, insular cortex, and BLA is critically involved in the mechanism comparing current and expected rewards.
Consumption of calorie-containing sugars elicits appetitive behavioral responses and dopamine release in the ventral striatum, even in the absence of sweet-taste transduction machinery. However, it is unclear if such reward-related postingestive effects reflect preabsorptive or postabsorptive events. In support of the importance of postabsorptive glucose detection, we found that, in rat behavioral tests, high-concentration glucose solutions administered in the jugular vein were sufficient to condition a side-bias. Additionally, a lower-concentration glucose solution conditioned robust behavioral responses when administered in the hepatic-portal, but not the jugular, vein. Furthermore, enteric administration of glucose at a concentration sufficient to elicit behavioral conditioning resulted in a glycemic profile similar to that observed after administration of the low-concentration glucose solution in the hepatic-portal, but not the jugular, vein. Finally, using fast-scan cyclic voltammetry, we found that, in accordance with the behavioral findings, a low-concentration glucose solution caused an increase in spontaneous dopamine release events in the nucleus accumbens shell when administered in the hepatic-portal, but not the jugular, vein. These findings demonstrate that the postabsorptive effects of glucose are sufficient for the postingestive behavioral and dopaminergic reward-related responses that result from sugar consumption. Furthermore, glycemia levels in the hepatic-portal venous system contribute more to this effect than systemic glycemia, arguing for the participation of an intra-abdominal visceral glucose sensor.
Slot-machine gambling incorporates numerous audiovisual cues prior to and during reward delivery (e.g. spinning wheels, flashing lights, celebratory sounds). Over time, these cues may motivate playing and even elicit cravings and relapse in those affected by gambling disorder. Animal studies suggest a heightened attraction to these cues despite diminished predictive ability under reward uncertainty, as evidenced by sign-tracking behavior in rats. Repeated amphetamine administration may also enhance the incentive value attributed to cues. Here, we explored the impact of reward uncertainty and prior amphetamine sensitization on the relative attractiveness and conditioned reinforcing properties of serial Pavlovian cues with different degrees of predictive and incentive value in rats. Animals were sensitized through repeated injections of amphetamine (1-4 mg/kg) or saline and then trained in a Pavlovian autoshaping task involving two sequential lever-auditory cue combinations (CS1, CS2) under Certain (100%-1) or Uncertain (50%-1-2-3) reward conditions. Subsequently, we evaluated the impact of acute amphetamine exposure on cue attraction. Our results suggest that Uncertainty alone enhanced attraction towards the reward-proximal CS2. However, combined sensitization and Uncertainty reversed cue preference relative to Uncertainty alone, enhancing attraction towards the more predictive reward-distal CS1. Both cues acquired conditioned reinforcing properties, despite the CS2 being otherwise ignored in all groups besides Uncertainty. However, combined sensitization and Uncertainty increased the reinforcing value of both cues and doubled the amount of interaction with the CS1 lever per presentation. Our results imply competitive mechanisms for attributing incentive value to gambling-related cues between reward uncertainty, prior amphetamine sensitization, and acute amphetamine administration.
Altered reward processing has been proposed to contribute to the symptoms of attention deficit hyperactivity disorder (ADHD). The neurobiological mechanism underlying this alteration remains unclear. We hypothesize that the transfer of dopamine release from reward to reward-predicting cues, as normally observed in animal studies, may be deficient in ADHD. Functional magnetic resonance imaging (fMRI) was used to investigate striatal responses to reward-predicting cues and reward delivery in a classical conditioning paradigm. Data from 14 high-functioning and stimulant-naïve young adults with elevated lifetime symptoms of ADHD (8 males, 6 females) and 15 well-matched controls (8 males, 7 females) were included in the analyses. During reward anticipation, increased blood-oxygen-level-dependent (BOLD) responses in the right ventral and left dorsal striatum were observed in controls, but not in the ADHD group. The opposite pattern was observed in response to reward delivery; the ADHD group demonstrated significantly greater BOLD responses in the ventral striatum bilaterally and the left dorsal striatum relative to controls. In the ADHD group, the number of current hyperactivity/impulsivity symptoms was inversely related to ventral striatal responses during reward anticipation and positively associated with responses to reward. The BOLD response patterns observed in the striatum are consistent with impaired predictive dopamine signaling in ADHD, which may explain altered reward-contingent behaviors and symptoms of ADHD.
Puberty is a critical period for the initiation of drug use and abuse. Because early onset of drug use often predicts a more severe progression of addiction, it is important to understand the underlying mechanisms and the neurodevelopmental changes during puberty that contribute to enhanced reward processing in teenagers. The present study investigated the progression of reward sensitivity toward a natural food reward over the whole course of adolescence in male rats (postnatal days 30-90) by monitoring consummatory behavior, motivational behavior and neurobiological correlates of reward. Using a limited-free-intake paradigm, consumption of sweetened condensed milk (SCM) was measured repeatedly in adolescent and adult rats. Additionally, early- and mid-pubertal animals were tested in progressive-ratio responding for SCM, and c-fos protein expression in reward-associated brain structures was examined after odor conditioning for SCM. We found a transient increase in SCM consumption and motivational incentive for SCM during puberty. This increased reward sensitivity was most pronounced around mid-puberty. The behavioral findings are paralleled by enhanced c-fos staining in reward-related structures, revealing an intensified neuronal response after reward-cue presentation that is distinctive for pubertal animals. Taken together, these data indicate an increase in reward sensitivity during adolescence accompanied by enhanced responsiveness of reward-associated brain structures to incentive stimuli, both of which are most pronounced around mid-puberty. Therefore, higher reward sensitivity during pubertal maturation might contribute to the enhanced vulnerability of teenagers to the initiation of experimental drug use.
Impulsivity and reward expectancy are commonly interrelated. Waiting impulsivity, measured using the rodent 5-Choice Serial Reaction Time task, predicts compulsive cocaine seeking and sign (or cue) tracking. Here, we assess human waiting impulsivity using a novel translational task, the 4-Choice Serial Reaction Time task, and the relationship with reward cues.
Both monetary and notional rewards are important to motivate individuals to prioritize specific items in visual working memory (VWM). However, whether the reward method and task difficulty are the key factors that modulate the reward boosts in VWM is unclear. In this study, we designed two experiments to explore this question. Experiment 1 examined whether the reward method modulates reward boosts in VWM by manipulating the item type (high reward, low reward, equal reward) and reward method (monetary and notional). Experiment 2 examined whether task difficulty modulates reward boosts in VWM by manipulating the number of high-reward items (1, 2, 3), reward method, and item type. The results indicated reward boosts for high-reward items compared to low- and equal-reward items. Moreover, the VWM performance was higher in the monetary reward condition than in the notional reward condition; however, there was no interaction between the reward method and item type. Additionally, a significant interaction was found between the reward number and item type: Reward boosts on VWM performance occurred only when one or two higher reward items were present. In conclusion, reward boosts in VWM tasks are modulated by task difficulty but not the reward method.
Efficient foraging requires an ability to coordinate discrete reward-seeking and reward-retrieval behaviors. We used pathway-specific chemogenetic inhibition to investigate how rats' mesolimbic and mesocortical dopamine circuits contribute to the expression and modulation of reward seeking and retrieval. Inhibiting ventral tegmental area dopamine neurons disrupted the tendency for reward-paired cues to motivate reward seeking, but spared their ability to increase attempts to retrieve reward. Similar effects were produced by inhibiting dopamine inputs to nucleus accumbens, but not medial prefrontal cortex. Inhibiting dopamine neurons spared the suppressive effect of reward devaluation on reward seeking, an assay of goal-directed behavior. Attempts to retrieve reward persisted after devaluation, indicating they were habitually performed as part of a fixed action sequence. Our findings show that complete bouts of reward seeking and retrieval are behaviorally and neurally dissociable from bouts of reward seeking without retrieval. This dichotomy may prove useful for uncovering mechanisms of maladaptive behavior.
Theoretical models of dopamine function stemming from reinforcement learning theory have emphasized the importance of prediction errors, which signal changes in the expectation of impending rewards. Much less is known about the effects of mean reward rates, which may be of motivational significance due to their role in computing the optimal effort to put into exploiting reward opportunities. Here, we used a reinforcement learning model to design three functional neuroimaging studies and disentangle the effects of changes in reward expectations from those of mean reward rates, showing recruitment of specific regions in the brainstem regardless of prediction errors. While changes in reward expectations activated ventral striatal areas as in previous studies, mean reward rates preferentially modulated the substantia nigra/ventral tegmental area, deep layers of the superior colliculi, and a posterior pontomesencephalic region. These brainstem structures may work together to set motivation and attentional effort levels according to perceived reward opportunities.
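One common way to separate phasic prediction errors from a tonic mean reward rate is the average-reward formulation, in which the rate is tracked as a running average and errors are computed relative to it. A minimal sketch with assumed parameters (illustrative only, not the authors' model):

```python
# Average-reward bookkeeping: the mean reward rate (rho) is a separate,
# slowly updated quantity from the phasic prediction error (delta).
ETA = 0.05   # learning rate for the mean reward rate (assumed value)

rho = 0.0    # running estimate of the mean reward rate
deltas = []
for _ in range(200):
    reward = 1.0                 # a steady, fully predictable reward stream
    delta = reward - rho         # phasic error relative to the mean rate
    rho += ETA * delta           # tonic mean-reward-rate estimate
    deltas.append(delta)
```

After learning, `rho` is high while `delta` is near zero: a rich environment can drive rate-sensitive signals even when prediction errors vanish, which is the kind of dissociation the imaging contrast above exploits.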
Trait extraversion has been theorized to emerge from functioning of the dopaminergic reward system. Recent evidence for this view shows that extraversion modulates the scalp-recorded Reward Positivity, a putative marker of dopaminergic signaling of reward prediction error. We attempt to replicate this association amid several improvements on previous studies in this area, including an adequately powered sample (N = 100) and thorough examination of convergent-divergent validity. Participants completed a passive associative learning task presenting rewards and non-rewards that were either predictable or unexpected. Frequentist and Bayesian analyses confirmed that the scalp-recorded Reward Positivity (i.e., the Feedback-Related Negativity contrasting unpredicted rewards and unpredicted non-rewards) was significantly associated with three measures of extraversion and unrelated to other basic traits from the Big Five personality model. Narrower sub-traits of extraversion showed similar, though weaker, associations with the Reward Positivity. These findings consolidate previous evidence linking extraversion with a putative marker of dopaminergic reward processing.
Studies suggest an involvement of the ventromedial prefrontal cortex (vmPFC) in reward prediction and processing, with reward-based learning relying on neural activity in response to unpredicted rewards or non-rewards (reward prediction error, RPE). Here, we investigated the causal role of the vmPFC in reward prediction, processing, and RPE signaling by transiently modulating vmPFC excitability using transcranial Direct Current Stimulation (tDCS).
Meta-control is necessary to regulate the balance between cognitive stability and flexibility. Evidence from (voluntary) task switching studies suggests performance-contingent reward as one modulating factor. Depending on the immediate reward history, reward prospect seems to promote either cognitive stability or flexibility: increasing reward prospect reduced switch costs and increased the voluntary switch rate, suggesting increased cognitive flexibility, whereas remaining high reward prospect increased switch costs and reduced the voluntary switch rate, suggesting increased cognitive stability. Recently, we suggested that increasing reward prospect serves as a meta-control signal toward cognitive flexibility by lowering the updating threshold in working memory. However, in task switching paradigms with only two tasks, this could alternatively be explained by facilitated switching to the other of the two tasks. To address this issue, a series of task switching experiments with uncued switching between three univalent tasks was conducted. When reward prospect increased, reaction time (RT) switch costs were reduced to a nonsignificant difference and the voluntary switch rate was high, whereas when reward prospect remained high, repetition RTs were faster, switch RTs slower, and the voluntary switch rate was reduced. That is, increasing reward prospect put participants in a state of equal readiness to respond to any target stimulus, be it a task repetition or a switch to one of the other two tasks. The study thus provides further evidence for the assumption that increasing reward prospect serves as a meta-control signal to increase cognitive flexibility, presumably by lowering the updating threshold in working memory.
Animal models of decision-making rely on an animal's motivation to decide and its ability to detect differences among various alternatives. Food reinforcement, although commonly used, is associated with problematic confounds, especially satiety. Here, we examined the use of brain stimulation reward (BSR) as an alternative reinforcer in rodent models of decision-making and compared it with the effectiveness of sugar pellets. The discriminability of various BSR frequencies was compared to differing numbers of sugar pellets in separate free-choice tasks. We found that BSR was more discriminable and motivated greater task engagement and more consistent preference for the larger reward. We then investigated whether rats prefer BSR of varying frequencies over sugar pellets. We found that animals showed either a clear preference for sugar reward or no preference between reward modalities, depending on the frequency of the BSR alternative and the size of the sugar reward. Overall, these results suggest that BSR is an effective reinforcer in rodent decision-making tasks, removing food-related confounds and resulting in more accurate, consistent, and reliable metrics of choice.
A food's reward value is dependent on its caloric content. Furthermore, a food's acute reward value also depends on hunger state. The drive to obtain rewards (reward sensitivity), however, differs between individuals. Here, we assessed the association between brain responses to calories in the mouth and trait reward sensitivity in different hunger states. Firstly, we assessed this in data from a functional neuroimaging study (van Rijn et al., 2015), in which participants (n = 30) tasted simple solutions of a non-caloric sweetener with or without a non-sweet carbohydrate (maltodextrin) during hunger and satiety. Secondly, we expanded these analyses to regular drinks by assessing the same relationship in data from a study in which soft drinks sweetened with either sucrose or a non-caloric sweetener were administered during hunger (n = 18) (Griffioen-Roose et al., 2013). First, taste activation by the non-caloric solution/soft drink was subtracted from that by the caloric solution/soft drink to eliminate sweetness effects and retain activation induced by calories. Subsequently, this difference in taste activation was correlated with reward sensitivity as measured with the BAS drive subscale of the Behavioral Activation System (BAS) questionnaire. When participants were hungry and tasted calories from the simple solution, brain activation in the right ventral striatum (caudate), right amygdala and anterior cingulate cortex (bilaterally) correlated negatively with BAS drive scores. In contrast, when participants were satiated, taste responses correlated positively with BAS drive scores in the left caudate. These results were not replicated for soft drinks. Thus, neural responses to oral calories from maltodextrin were modulated by reward sensitivity in reward-related brain areas. This was not the case for sucrose. This may be due to the direct detection of maltodextrin, but not sucrose in the oral cavity. 
Also, in a familiar beverage, detection of calories per se may be overruled by a conditioned response to its flavor. In conclusion, the brain's reward response to calories from a long-chain starch sugar (maltodextrin) varies with trait reward sensitivity. The absence of this effect in a familiar beverage warrants further research into its relevance for real-life ingestive behavior.