Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


On page 1, showing papers 1–20 of 2,284.

Cross-Validation Approaches for Replicability in Psychology.

  • Atesh Koul et al.
  • Frontiers in psychology
  • 2018

No abstract available


Breast Mass Detection in Digital Mammogram Based on Gestalt Psychology.

  • Hongyu Wang et al.
  • Journal of healthcare engineering
  • 2018

Inspired by gestalt psychology, we combine human cognitive characteristics with radiologists' knowledge in medical image analysis. In this paper, a novel framework is proposed to detect breast masses in digitized mammograms. It can be divided into three modules: sensation integration, semantic integration, and verification. After analyzing the progress of radiologists' mammography screening, a series of visual rules based on the morphological characteristics of breast masses are presented and quantified by mathematical methods. The framework can be seen as an effective trade-off between bottom-up sensation and top-down recognition methods. This is a new exploratory method for the automatic detection of lesions. The experiments are performed on the Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM) data sets. The sensitivity reached 92% at 1.94 false positives per image (FPI) on MIAS and 93.84% at 2.21 FPI on DDSM. Our framework achieved better performance compared with other algorithms.


The Use of Deep Learning and VR Technology in Film and Television Production From the Perspective of Audience Psychology.

  • Yangfan Tong et al.
  • Frontiers in psychology
  • 2021

With the development of artificial intelligence (AI), deep-learning (DL)-based Virtual Reality (VR) technology and DL technology are applied in human-computer interaction (HCI), and their impacts on modern film and TV production and audience psychology are analyzed. In film and TV production, audiences have a higher demand for the verisimilitude and immersion of the works, especially in film production. Based on this, a 2D image recognition system for human body motions and a 3D recognition system for human body motions, both based on the convolutional neural network (CNN) algorithm of DL, are proposed, and an analysis framework is established. The proposed systems are simulated on practical and professional datasets, respectively. The results show that the algorithm's computing performance in 2D image recognition is 7-9 times higher than that of the Open Pose method. It runs at 44.3 ms in 3D motion recognition, significantly lower than the Open Pose method's 794.5 and 138.7 ms. Although detection accuracy drops by 2.4%, the system is more efficient and convenient, without scenario limitations in practical applications. AI-based VR and DL enrich and expand the role and application of computer graphics in film and TV production using HCI technology, both theoretically and practically.


Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition.

  • Edwin J Burns et al.
  • Frontiers in human neuroscience
  • 2014

Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct "remember" responses and more false alarms than controls. EEG results showed that posterior "remember" old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior "know" old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal "know" old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects.


Landscape Aesthetic Value of Waterfront Green Space Based on Space-Psychology-Behavior Dimension: A Case Study along Qiantang River (Hangzhou Section).

  • Xiaojia Liu et al.
  • International journal of environmental research and public health
  • 2023

As an important part of urban green infrastructure, the landscape effect of the urban waterfront green space varies, and sometimes, the green space with an excellent landscape aesthetic value fails to serve the needs of most citizens. This seriously affects the construction of a green ecological civilization and the implementation of the concept of "common prosperity" in China. Based on multi-source data, this study took the Qiantang River Basin as an example, selected 12 representative waterfront green spaces along the river as the research objects, and used qualitative and quantitative analysis methods to determine the landscape aesthetic value of the research area from the different dimensions of space, psychology, and physiology. We examined the relationship between each dimension so as to objectively and comprehensively reflect the landscape value characteristics of the waterfront green space in the study area and provide a reasonable theoretical framework and practical development path for future urban waterfront green space landscape design. We obtained the following results: (1) The results of the spatial dimension research indicated that the spatial value index of the waterfront green space in the study area was three-dimensional space > vertical space > horizontal space, and the overall spatial value was low; Qianjiang Ecological Park obtained the highest value (0.5473), and Urban Balcony Park obtained the lowest value (0.4619). (2) The results of the psychological dimension indicated that people's perceptions of the waterfront green space in the study area were relatively weak, mainly focusing on visual perception, but the waterfront green space with a relative emotional value greater than one accounted for 75%, and the overall recognition of the landscape was high. (3) The results of the behavioral dimension showed that the overall heat of the waterfront green space in the study area was insufficient (1.3719-7.1583), which was mainly concentrated in low-heat levels, and the population density was unevenly distributed (0.0014-0.0663), which was mainly concentrated in the medium-density level. The main purpose of users was to visit, and they stayed an average of 1.5 h. (4) The results of the coupling coordination analysis of the spatial-psychological-behavioral dimensions showed that the landscape value of the waterfront green space in the study area presented a form of 'high coupling degree and low coordination degree'.


Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition.

  • Jens Kreitewolf et al.
  • NeuroImage
  • 2014

Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing.


Object recognition memory in zebrafish.

  • Zacnicte May et al.
  • Behavioural brain research
  • 2016

The novel object recognition, or novel-object preference (NOP) test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects, then after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown if this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. Contrary to rodents, zebrafish preferentially explored familiar over novel objects. Familiarity preference disappeared after delays of 5 mins. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1 min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50mg/L) prior to stimuli presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, exploration biases, from which memory is inferred, depend on object size. These findings suggest zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour.


Scale ambiguities in material recognition.

  • Jacob R Cheeseman et al.
  • iScience
  • 2022

Many natural materials have complex, multi-scale structures. Consequently, the inferred identity of a surface can vary with the assumed spatial scale of the scene: a plowed field seen from afar can resemble corduroy seen up close. We investigated this 'material-scale ambiguity' using 87 photographs of diverse materials (e.g., water, sand, stone, metal, and wood). Across two experiments, separate groups of participants (N = 72 adults) provided judgements of the material category depicted in each image, either with or without manipulations of apparent distance (by verbal instructions, or adding objects of familiar size). Our results demonstrate that these manipulations can cause identical images to be assigned to completely different material categories, depending on the assumed scale. Under challenging conditions, therefore, the categorization of materials is susceptible to simple manipulations of apparent distance, revealing a striking example of top-down effects in the interpretation of image features.


First Impression Misleads Emotion Recognition.

  • Valentina Colonnello et al.
  • Frontiers in psychology
  • 2019

Recognition of others' emotions is a key life ability that guides one's own choices and behavior, and it hinges on the recognition of others' facial cues. Independent studies indicate that facial appearance-based evaluations affect social behavior, but little is known about how facial appearance-based trustworthiness evaluations influence the recognition of specific emotions. We tested the hypothesis that first impressions based on facial appearance affect the recognition of basic emotions. A total of 150 participants completed a dynamic emotion recognition task. In a within-subjects design, the participants viewed videos of individuals with trustworthy-looking, neutral, or untrustworthy-looking faces gradually and continuously displaying basic emotions (happiness, anger, fear, and sadness). The participants' accuracy and speed in recognizing the emotions were measured. Untrustworthy-looking faces decreased participants' emotion recognition accuracy and speed, across emotion types. In addition, faces that elicited a positive inference of trustworthiness enhanced emotion recognition speed of fear and sadness, emotional expressions that signal another's distress and modulate prosocial behavior. These findings suggest that facial appearance-based inferences may interfere with the ability to accurately and rapidly recognize others' basic emotions.


The Discrimination Ratio derived from Novel Object Recognition tasks as a Measure of Recognition Memory Sensitivity, not Bias.

  • Magali H Sivakumaran et al.
  • Scientific reports
  • 2018

Translational recognition memory research makes frequent use of the Novel Object Recognition (NOR) paradigm in which animals are simultaneously presented with one new and one old object. The preferential exploration of the new as compared to the old object produces a metric, the Discrimination Ratio (DR), assumed to represent recognition memory sensitivity. Human recognition memory studies typically assess performance using signal detection theory derived measures; sensitivity (d') and bias (c). How DR relates to d' and c and whether they measure the same underlying cognitive mechanism is, however, unknown. We investigated the correspondence between DR (eye-tracking-determined), d' and c in a sample of 37 humans. We used dwell times during a visual paired comparison task (analogous to the NOR) to determine DR, and a separate single item recognition task to derive estimates of response sensitivity and bias. DR was found to be significantly positively correlated to sensitivity but not bias. Our findings confirm that DR corresponds to d', the primary measure of recognition memory sensitivity in humans, and appears not to reflect bias. These findings are the first of their kind to suggest that animal researchers should be confident in interpreting the DR as an analogue of recognition memory sensitivity.
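
The abstract above relates the discrimination ratio (DR) from novelty-preference tasks to the signal detection measures d' (sensitivity) and c (bias). As a hedged illustration only (this is not the authors' code, and the numbers are invented), the sketch below shows the textbook formulas these terms usually refer to, with DR computed from dwell times as in a visual paired comparison task.

    # Minimal sketch; assumptions noted in comments. Requires Python 3.8+.
    from statistics import NormalDist

    def discrimination_ratio(t_novel, t_familiar):
        # One common convention: dwell time on the novel item over total
        # exploration time. Some studies instead use
        # (t_novel - t_familiar) / (t_novel + t_familiar).
        return t_novel / (t_novel + t_familiar)

    def dprime_and_c(hit_rate, fa_rate):
        # Standard signal detection theory estimates from a single-item
        # recognition test.
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)               # sensitivity (d')
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))    # response bias (c)
        return d_prime, criterion

    print(discrimination_ratio(t_novel=6.2, t_familiar=3.8))   # 0.62
    print(dprime_and_c(hit_rate=0.80, fa_rate=0.25))           # ~(1.52, -0.08)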


Do human screams permit individual recognition?

  • Jonathan W M Engelberg et al.
  • PeerJ
  • 2019

The recognition of individuals through vocalizations is a highly adaptive ability in the social behavior of many species, including humans. However, the extent to which nonlinguistic vocalizations such as screams permit individual recognition in humans remains unclear. Using a same-different vocalizer discrimination task, we investigated participants' ability to correctly identify whether pairs of screams were produced by the same person or two different people, a critical prerequisite to individual recognition. Despite prior theory-based contentions that screams are not acoustically well-suited to conveying identity cues, listeners discriminated individuals at above-chance levels by their screams, including both acoustically modified and unmodified exemplars. We found that vocalizer gender explained some variation in participants' discrimination abilities and response times, but participant attributes (gender, experience, empathy) did not. Our findings are consistent with abundant evidence from nonhuman primates, suggesting that both human and nonhuman screams convey cues to caller identity, thus supporting the thesis of evolutionary continuity in at least some aspects of scream function across primate species.


Facial emotion recognition in adopted children.

  • Amy L Paine et al.
  • European child & adolescent psychiatry
  • 2023

Children adopted from public care are more likely to experience emotional and behavioural problems. We investigated two aspects of emotion recognition that may be associated with these outcomes, including discrimination accuracy of emotions and response bias, in a mixed-method, multi-informant study of 4-to-8-year old children adopted from local authority care in the UK (N = 42). We compared adopted children's emotion recognition performance to that of a comparison group of children living with their birth families, who were matched by age, sex, and teacher-rated total difficulties on the Strengths and Difficulties Questionnaire (SDQ, N = 42). We also examined relationships between adopted children's emotion recognition skills and their pre-adoptive histories of early adversity (indexed by cumulative adverse childhood experiences), their parent- and teacher-rated emotional and behavioural problems, and their parents' coded warmth during a Five Minute Speech Sample. Adopted children showed significantly worse facial emotion discrimination accuracy of sad and angry faces than non-adopted children. Adopted children's discrimination accuracy of scared and neutral faces was negatively associated with parent-reported behavioural problems, and discrimination accuracy of angry and scared faces was associated with parent- and teacher-reported emotional problems. Contrary to expectations, children who experienced more recorded pre-adoptive early adversity were more accurate in identifying negative emotions. Warm adoptive parenting was associated with fewer behavioural problems, and a lower tendency for children to incorrectly identify faces as angry. Study limitations and implications for intervention strategies to support adopted children's emotion recognition and psychological adjustment are discussed.


Recognition memory for human motor learning.

  • Neeraj Kumar et al.
  • Current biology : CB
  • 2021

Motor skill retention is typically measured by asking participants to reproduce previously learned movements from memory. The analog of this retention test (recall memory) in human verbal memory is known to underestimate how much learning is actually retained. Here we asked whether information about previously learned movements, which can no longer be reproduced, is also retained. Following visuomotor adaptation, we used tests of recall that involved reproduction of previously learned movements and tests of recognition in which participants were asked whether a candidate limb displacement, produced by a robot arm held by the subject, corresponded to a movement direction that was experienced during active training. The main finding was that 24 h after training, estimates of recognition memory were about twice as accurate as those of recall memory. Thus, there is information about previously learned movements that is not retrieved using recall testing but can be accessed in tests of recognition. We conducted additional tests to assess whether, 24 h after learning, recall for previously learned movements could be improved by presenting passive movements as retrieval cues. These tests were conducted immediately prior to recall testing and involved the passive playback of a small number of movements, which were spread across the workspace and included both adapted and baseline movements, without being marked as such. This technique restored recall memory for movements to levels close to those of recognition memory performance. Thus, somatic information may enable retrieval of otherwise inaccessible motor memories.


Novel object recognition in Octopus maya.

  • Fabian Vergara-Ovalle et al.
  • Animal cognition
  • 2023

The Novel Object Recognition task (NOR) is widely used to study vertebrates' memory. It has been proposed as an adequate model for studying memory in different taxonomic groups, allowing similar and comparable results. Although in cephalopods, several research reports could indicate that they recognize objects in their environment, it has not been tested as an experimental paradigm that allows studying different memory phases. This study shows that two-month-old and older Octopus maya subjects can differentiate between a new object and a known one, but one-month-old subjects cannot. Furthermore, we observed that octopuses use vision and tactile exploration of new objects to achieve object recognition, while familiar objects only need to be explored visually. To our knowledge, this is the first time showing an invertebrate performing the NOR task similarly to how it is performed in vertebrates. These results establish a guide to studying object recognition memory in octopuses and the ontological development of that memory.


Attribute pair-based visual recognition and memory.

  • Masahiko Morita et al.
  • PloS one
  • 2010

In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises a fundamental question of how are these attributes integrated to produce a unified perception and a specific response. This "binding problem" is computationally difficult because all attributes are assumed to be bound together to form a single object representation. However, there is no firm evidence to confirm that such representations exist for general objects.


Relationships between priming and subsequent recognition memory.

  • Kiyofumi Miyoshi et al.
  • SpringerPlus
  • 2014

A discrepancy exists among previous studies regarding whether priming and subsequent recognition memory are positively or negatively correlated. We consider that the difference in recognition memory measures used in these studies accounts for the discrepancy. To examine this, we introduced three different recognition measures and reexamined the relationship between priming and subsequent recognition. Participants learned stimulus words in the first encoding block while performing an abstract/concrete decision task. In the second encoding block, a priming test was conducted, followed by a surprise recognition memory test. Results showed that the hit rate, and the difference between the hit rate (pHit) and the false-alarm rate (pFA), correlated positively with priming. However, the difference between hit rates for the twice- and once-encoded stimuli, which can reflect the representations acquired at the second exposure in particular, did not significantly correlate with priming. These results suggest that priming and subsequent recognition relate positively because of the common representations acquired at the initial encoding. Furthermore, the present results are consistent with a previous study that failed to reproduce the negative correlation between priming and subsequent recognition.


Food-Induced Emotional Resonance Improves Emotion Recognition.

  • Elisa Pandolfi et al.
  • PloS one
  • 2016

The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.


Review on Emotion Recognition Based on Electroencephalography.

  • Haoran Liu et al.
  • Frontiers in computational neuroscience
  • 2021

Emotions are closely related to human behavior, family, and society. Changes in emotions can cause differences in electroencephalography (EEG) signals, which show different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, military, and other fields. In this paper, we describe the common steps of an emotion recognition algorithm based on EEG from data acquisition, preprocessing, feature extraction, feature selection to classifier. Then, we review the existing EEG-based emotional recognition methods, as well as assess their classification effect. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG. Moreover, emotion is an important representation of safety psychology.


Emotions affect the recognition of hand gestures.

  • Carmelo M Vicario et al.
  • Frontiers in human neuroscience
  • 2013

The body is closely tied to the processing of social and emotional information. The purpose of this study was to determine whether a relationship between emotions and social attitudes conveyed through gestures exists. Thus, we tested the effect of pro-social (i.e., happy face) and anti-social (i.e., angry face) emotional primes on the ability to detect socially relevant hand postures (i.e., pictures depicting an open/closed hand). In particular, participants were required to establish, as quickly as possible, if the test stimulus (i.e., a hand posture) was the same or different, compared to the reference stimulus (i.e., a hand posture) previously displayed in the computer screen. Results show that facial primes, displayed between the reference and the test stimuli, influence the recognition of hand postures, according to the social attitude implicitly related to the stimulus. We found that perception of pro-social (i.e., happy face) primes resulted in slower RTs in detecting the open hand posture as compared to the closed hand posture. Vice-versa, perception of the anti-social (i.e., angry face) prime resulted in slower RTs in detecting the closed hand posture compared to the open hand posture. These results suggest that the social attitude implicitly conveyed by the displayed stimuli might represent the conceptual link between emotions and gestures.


Oxytocin improves emotion recognition for older males.

  • Anna Campbell et al.
  • Neurobiology of aging
  • 2014

Older adults (≥60 years) perform worse than young adults (18-30 years) when recognizing facial expressions of emotion. The hypothesized cause of these changes might be declines in neurotransmitters that could affect information processing within the brain. In the present study, we examined the neuropeptide oxytocin that functions to increase neurotransmission. Research suggests that oxytocin benefits the emotion recognition of less socially able individuals. Men tend to have lower levels of oxytocin and older men tend to have worse emotion recognition than older women; therefore, there is reason to think that older men will be particularly likely to benefit from oxytocin. We examined this idea using a double-blind design, testing 68 older and 68 young adults randomly allocated to receive oxytocin nasal spray (20 international units) or placebo. Forty-five minutes afterward they completed an emotion recognition task assessing labeling accuracy for angry, disgusted, fearful, happy, neutral, and sad faces. Older males receiving oxytocin showed improved emotion recognition relative to those taking placebo. No differences were found for older females or young adults. We hypothesize that oxytocin facilitates emotion recognition by improving neurotransmission in the group with the worst emotion recognition.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and managing Resources.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (a small worked example appears after this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually combine terms with AND and OR to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform for quick access to later from here.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions please check out our FAQs Page to ask questions and see our tutorials. Click this button to view this tutorial again.
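
As referenced in the Searching tips above (item 4), here is a small illustrative sketch of how the listed operators (quoted phrases, +required terms, -excluded terms) might be combined into a query string. It is a hedged example only: the helper function and URL handling below are hypothetical conveniences, not part of a documented FDI Lab - SciCrunch.org API.

    # Illustrative only: builds a query string using the operators described
    # in the tips above. build_query() is a hypothetical helper, and SITE is
    # just the public site address, not a verified search endpoint.
    from urllib.parse import urlencode

    SITE = "https://scicrunch.org/"

    def build_query(phrase, required=(), excluded=()):
        parts = [f'"{phrase}"']                       # exact-phrase match
        parts += [f"+{term}" for term in required]    # term must appear in the data
        parts += [f"-{term}" for term in excluded]    # exclude results with this term
        return " ".join(parts)

    q = build_query("emotion recognition", required=["EEG"], excluded=["fMRI"])
    print(q)                                  # "emotion recognition" +EEG -fMRI
    print(SITE + "?" + urlencode({"q": q}))   # URL-encoded form of the query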

Publications Per Year (chart: Year vs. Count)