
This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1-20 of 1,678.

Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

  • Romina Palermo‎ et al.
  • Neuropsychologia‎
  • 2011‎

We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.


Facial expression modifies female body perception.

  • Farahnaz Noori‎ et al.
  • Perception‎
  • 2023‎

The judgment of female body appearance has been reported to be affected by a range of internal (e.g., viewers' sexual cognition) and external factors (e.g., viewed clothing type and colour). This eye-tracking study aimed to complement previous research by examining the effect of facial expression on female body perception and associated body-viewing gaze behaviour. We presented female body images of Caucasian avatars in a continuum of common dress sizes posing seven basic facial expressions (neutral, happiness, sadness, anger, fear, surprise, and disgust), and asked both male and female participants to rate the perceived body attractiveness and body size. The analysis revealed an evident modulatory role of avatar facial expressions on body attractiveness and body size ratings, but not on the amount of viewing time directed at individual body features. Specifically, happy and angry avatars attracted the highest and lowest body attractiveness ratings, respectively, and fearful and surprised avatars tended to be rated slimmer. Interestingly, the impact of facial expression on female body assessment was not further influenced by viewers' gender, suggesting a 'universal' role of common facial expressions in modifying the perception of female body appearance.


Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases.

  • Shushi Namba‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of systems that now have access to the dynamic facial database remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect each facial movement corresponding to an action unit (AU) derived from the Facial Action Coding System. All machines could detect the presence of AUs from the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values compared to FaceReader. In addition, several confusion biases of facial components (e.g., AU12 and AU14) were observed to be related to each automated AU detection system and the static mode was superior to dynamic mode for analyzing the posed facial database. These findings demonstrate the features of prediction patterns for each system and provide guidance for research on facial expressions.
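
The comparison above hinges on per-AU ROC analysis. As a minimal illustration (not the authors' code), the sketch below shows how per-AU detector scores could be compared via area under the ROC curve; all frame-level scores, ground-truth labels, and detector names are fabricated placeholders.

```python
# Hypothetical sketch: compare two AU detectors by per-AU ROC AUC.
# Frame-level scores and ground-truth labels below are fabricated for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_frames = 500
aus = ["AU06", "AU12", "AU14"]

# Ground-truth AU presence per frame (e.g., from FACS coding of the database).
truth = {au: rng.integers(0, 2, n_frames) for au in aus}

# Continuous detector outputs (e.g., AU intensities); here just signal plus noise.
detector_a = {au: truth[au] * 0.6 + rng.normal(0, 0.4, n_frames) for au in aus}
detector_b = {au: truth[au] * 0.3 + rng.normal(0, 0.4, n_frames) for au in aus}

for au in aus:
    auc_a = roc_auc_score(truth[au], detector_a[au])
    auc_b = roc_auc_score(truth[au], detector_b[au])
    print(f"{au}: detector A AUC = {auc_a:.2f}, detector B AUC = {auc_b:.2f}")
```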


The aftereffect of the ensemble average of facial expressions on subsequent facial expression recognition.

  • Kazusa Minemoto‎ et al.
  • Attention, perception & psychophysics‎
  • 2022‎

An ensemble or statistical summary can be extracted from facial expressions presented in different spatial locations simultaneously. However, how such complicated objects are represented in the mind is not clear. It is known that the aftereffect of facial expressions, in which prolonged viewing of facial expressions biases the perception of subsequent facial expressions of the same category, occurs only when a visual representation is formed. Using this methodology, we examined whether an ensemble can be represented with visualized information. Experiment 1 revealed that the presentation of multiple facial expressions biased the perception of subsequent facial expressions to less happy as much as the presentation of a single face did. Experiment 2 compared the presentation of faces comprising strong and weak intensities of emotional expressions with an individual face as the adaptation stimulus. The results indicated that the perceptual biases were found after the presentation of four faces and a strong single face, but not after the weak single face presentation. Experiment 3 employed angry expressions, a distinct category from the test expression used as an adaptation stimulus; no aftereffect was observed. Finally, Experiment 4 clearly demonstrated the perceptual bias with a higher number of faces. Altogether, these results indicate that an ensemble average extracted from multiple faces leads to the perceptual bias, and this effect is similar in terms of its properties to that of a single face. This supports the idea that an ensemble of faces is represented with visualized information as a single face.


Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

  • Katie Fisher‎ et al.
  • Neuropsychologia‎
  • 2016‎

It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.


Catching a Liar Through Facial Expression of Fear.

  • Xunbing Shen‎ et al.
  • Frontiers in psychology‎
  • 2021‎

High stakes can be stressful whether one is telling the truth or lying. However, liars can feel extra fear, relative to truth-tellers, from worrying about being discovered, and according to the "leakage theory," this fear is almost impossible to repress. We therefore assumed that analyzing facial expressions of fear could reveal deceit. Detecting and analyzing subtle leaked fear expressions is a challenging task for laypeople, but a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from the game show "The moment of truth" using OpenFace (to output the fear-related Action Units (AUs) and face landmarks) and WEKA (to classify the clips in which the players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of >80% using only the fear AUs. In addition, the total duration of the fear-related AU20 was shorter under the lying condition than under the truth-telling condition; further analysis showed this was because the time window from peak to offset of AU20 was shorter when lying than when telling the truth. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. Altogether, the results suggest that facial cues can be used to detect deception, and that fear could be a cue for distinguishing liars from truth-tellers.
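
As a rough illustration of the pipeline described above, the sketch below classifies clips as truth or lie from a few fear-related AU intensity features. It substitutes scikit-learn for WEKA, the feature values and labels are fabricated, and only the column naming follows OpenFace's CSV convention.

```python
# Hypothetical sketch: classify truth vs. lie clips from fear-related AU features.
# Data are fabricated; the original study used WEKA rather than scikit-learn.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_clips = 60
features = pd.DataFrame({
    "AU01_r": rng.random(n_clips),   # inner brow raiser (fear-related)
    "AU04_r": rng.random(n_clips),   # brow lowerer
    "AU20_r": rng.random(n_clips),   # lip stretcher (fear-related)
})
labels = rng.integers(0, 2, n_clips)  # 0 = truth, 1 = lie

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```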


Gene expression profile data for mouse facial development.

  • Sonia M Leach‎ et al.
  • Data in brief‎
  • 2017‎

This article contains data related to the research articles "Spatial and Temporal Analysis of Gene Expression during Growth and Fusion of the Mouse Facial Prominences" (Feng et al., 2009) [1] and "Systems Biology of facial development: contributions of ectoderm and mesenchyme" (Hooper et al., 2017 In press) [2]. Embryonic mammalian craniofacial development is a complex process involving the growth, morphogenesis, and fusion of distinct facial prominences into a functional whole. Aberrant gene regulation during this process can lead to severe craniofacial birth defects, including orofacial clefting. As a means to understand the genes involved in facial development, we had previously dissected the embryonic mouse face into distinct prominences: the mandibular, maxillary, or nasal prominences between E10.5 and E12.5. The prominences were then processed intact, or separated into ectoderm and mesenchyme layers, prior to analysis of RNA expression using microarrays (Feng et al., 2009, Hooper et al., 2017 in press) [1], [2]. Here, individual gene expression profiles have been built from these datasets that illustrate the timing of gene expression in whole prominences or in the separated tissue layers. The data profiles are presented as an indexed and clickable list of genes, each linked to a graphical image of that gene's expression profile in the ectoderm, mesenchyme, or intact prominence. These data files will enable investigators to obtain a rapid assessment of the relative expression level of any gene on the array with respect to time, tissue, prominence, and expression trajectory.


Asymmetry in facial expression of emotions by chimpanzees.

  • Samuel Fernández-Carriba‎ et al.
  • Neuropsychologia‎
  • 2002‎

Asymmetries in human facial expressions have long been documented and traditionally interpreted as evidence of brain laterality in emotional behavior. Recent findings in nonhuman primates suggest that this hemispheric specialization for emotional behavior may have precursors in primate evolution. In this study, we present the first data collected on our closest living relative, the chimpanzee. Objective measures (hemimouth length and area) and subjective measures (human judgements of chimeric stimuli) indicate that chimpanzees' facial expressions are asymmetric, with a greater involvement of the left side of the face in the production of emotional responses. No effect of expression type (positive versus negative) on facial asymmetry was found. Thus, chimpanzees, like humans and some other nonhuman primates, show a right hemisphere specialization for facial expression of emotions.


Interference between conscious and unconscious facial expression information.

  • Xing Ye‎ et al.
  • PloS one‎
  • 2014‎

There is ample evidence that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, masked subliminal facial expressions have been shown to induce priming and adaptation effects. However, stimuli made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location can enhance or interfere with consciously processed facial expression information.


Concurrent development of facial identity and expression discrimination.

  • Kirsten A Dalrymple‎ et al.
  • PloS one‎
  • 2017‎

Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children 5-12-years-old were asked to pick the choice face that is most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression that is most similar to the target expression. There were significant effects of age, with performance improving (becoming more accurate and faster) on both tasks with increasing age. Accuracy and reaction times were not significantly different across tasks and there was no significant Age x Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.
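
For readers unfamiliar with morph continua, the sketch below shows a naive pixel-space blend between two aligned images as a stand-in for the dedicated morphing software such studies typically use; the "faces" here are fabricated arrays.

```python
# Hypothetical sketch: a simple pixel-space blend between two aligned face images,
# as a stand-in for the morphing used to build target continua.
import numpy as np

def blend(face_a: np.ndarray, face_b: np.ndarray, weight_b: float) -> np.ndarray:
    """Return a linear blend: 0.0 -> pure A, 1.0 -> pure B."""
    return (1.0 - weight_b) * face_a + weight_b * face_b

# Fabricated 64x64 grayscale "faces" for illustration only.
rng = np.random.default_rng(2)
face_a = rng.random((64, 64))
face_b = rng.random((64, 64))

# A 7-step continuum from Identity A to Identity B.
continuum = [blend(face_a, face_b, w) for w in np.linspace(0.0, 1.0, 7)]
print(len(continuum), continuum[0].shape)
```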


Facial Expression Recognition with LBP and ORB Features.

  • Ben Niu‎ et al.
  • Computational intelligence and neuroscience‎
  • 2021‎

Emotion plays an important role in communication. For human-computer interaction, facial expression recognition has become an indispensable part. Recently, deep neural networks (DNNs) are widely used in this field and they overcome the limitations of conventional approaches. However, application of DNNs is very limited due to excessive hardware specifications requirement. Considering low hardware specifications used in real-life conditions, to gain better results without DNNs, in this paper, we propose an algorithm with the combination of the oriented FAST and rotated BRIEF (ORB) features and Local Binary Patterns (LBP) features extracted from facial expression. First of all, every image is passed through face detection algorithm to extract more effective features. Second, in order to increase computational speed, the ORB and LBP features are extracted from the face region; specifically, region division is innovatively employed in the traditional ORB to avoid the concentration of the features. The features are invariant to scale and grayscale as well as rotation changes. Finally, the combined features are classified by Support Vector Machine (SVM). The proposed method is evaluated on several challenging databases such as Cohn-Kanade database (CK+), Japanese Female Facial Expressions database (JAFFE), and MMI database; experimental results of seven emotion state (neutral, joy, sadness, surprise, anger, fear, and disgust) show that the proposed framework is effective and accurate.
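
A minimal sketch of the LBP + ORB + SVM pipeline is given below, assuming OpenCV, scikit-image, and scikit-learn are available. It omits the paper's region-division step and average-pools the ORB descriptors into a fixed-length vector; the training images and labels are fabricated placeholders.

```python
# Hypothetical sketch: LBP + ORB features from a face crop, classified with an SVM.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def extract_features(gray_face: np.ndarray) -> np.ndarray:
    # LBP histogram over the whole face region (uniform patterns, P=8, R=1).
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # ORB keypoint descriptors, pooled into a fixed-length vector by averaging.
    orb = cv2.ORB_create(nfeatures=100)
    _, descriptors = orb.detectAndCompute(gray_face, None)
    orb_vec = np.zeros(32) if descriptors is None else descriptors.mean(axis=0)

    return np.concatenate([lbp_hist, orb_vec])

# Fabricated training set: random 96x96 grayscale "faces" and emotion labels.
rng = np.random.default_rng(3)
faces = [rng.integers(0, 256, (96, 96), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 7, 20)  # seven emotion classes

X = np.array([extract_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```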


Facial Pain Expression Recognition in Real-Time Videos.

  • Pranti Dutta‎ et al.
  • Journal of healthcare engineering‎
  • 2018‎

Recognition of pain in patients who are incapable of expressing themselves opens several possibilities for improved diagnosis and treatment. Despite the advancements already made in this field, research is still lacking with respect to the detection of pain in live videos, especially under unfavourable conditions. To address this gap, the current study proposed a hybrid model for efficient pain recognition. The hybrid, which combined the Constrained Local Model (CLM), Active Appearance Model (AAM), and Patch-Based Model, was applied in conjunction with image algebra. The resulting system successfully detected pain from a live stream, even with poor lighting and a low-resolution recording device, while reducing memory storage requirements by up to 40%-55% and improving processing time by 20%-25%. The experimental system detected pain in the 22 analysed videos with an accuracy of 55.75%-100.00%. To increase the fidelity of the proposed technique, the hybrid model was also tested on the UNBC-McMaster Shoulder Pain Database.


Relationship between toll-like receptor expression in the distal facial nerve and facial nerve recovery after injury.

  • Hye Kyu Min‎ et al.
  • International journal of immunopathology and pharmacology‎
  • 2022‎

This study aimed to determine whether toll-like receptor expression patterns differ in the distal facial nerve during recovery after crushing and cutting injuries.


Computerised analysis of facial emotion expression in eating disorders.

  • Jenni Leppanen‎ et al.
  • PloS one‎
  • 2017‎

Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants.


Psychophysical measures of sensitivity to facial expression of emotion.

  • Michelle Marneweck‎ et al.
  • Frontiers in psychology‎
  • 2013‎

We report the development of two simple, objective, psychophysical measures of the ability to discriminate facial expressions of emotion that vary in intensity from a neutral facial expression and to discriminate between varying intensities of emotional facial expression. The stimuli were created by morphing photographs of models expressing four basic emotions, anger, disgust, happiness, and sadness with neutral expressions. Psychometric functions were obtained for 15 healthy young adults using the Method of Constant Stimuli with a two-interval forced-choice procedure. Individual data points were fitted by Quick functions for each task and each emotion, allowing estimates of absolute thresholds and slopes. The tasks give objective and sensitive measures of the basic perceptual abilities required for perceiving and interpreting emotional facial expressions.
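
As a worked illustration of this approach, the sketch below fits a Quick function of the form P(x) = 1 - 0.5 * 2^(-(x/α)^β) to hypothetical two-interval forced-choice data, where α is the 75%-correct threshold and β the slope; the intensity levels and proportions correct are fabricated, not the authors' data.

```python
# Hypothetical sketch: fit a Quick (Weibull-family) psychometric function to
# two-interval forced-choice data from the Method of Constant Stimuli.
import numpy as np
from scipy.optimize import curve_fit

def quick_2ifc(intensity, alpha, beta):
    """P(correct) = 1 - 0.5 * 2**(-(x/alpha)**beta); alpha is the 75%-correct threshold."""
    return 1.0 - 0.5 * 2.0 ** (-(intensity / alpha) ** beta)

# Expression-intensity levels (e.g., % morph from neutral) and proportion correct.
levels = np.array([5, 10, 20, 40, 60, 80], dtype=float)
p_correct = np.array([0.52, 0.60, 0.71, 0.88, 0.96, 0.99])

(alpha_hat, beta_hat), _ = curve_fit(quick_2ifc, levels, p_correct, p0=[20.0, 2.0])
print(f"Threshold (alpha) ~ {alpha_hat:.1f}% morph, slope (beta) ~ {beta_hat:.2f}")
```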


A Comparison of the Affectiva iMotions Facial Expression Analysis Software With EMG for Identifying Facial Expressions of Emotion.

  • Louisa Kulke‎ et al.
  • Frontiers in psychology‎
  • 2020‎

Human faces express emotions, informing others about their affective states. To measure expressions of emotion, facial electromyography (EMG) has been widely used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures are unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry, and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected above chance both by the software and by EMG, while neutral expressions were more often falsely identified as negative by EMG than by the software. Overall, EMG and software values correlated highly. In conclusion, the Affectiva Affdex software can identify facial expressions, and its results are comparable to EMG findings.
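
A minimal sketch of this kind of software-EMG comparison is given below; the per-trial happiness scores and zygomaticus amplitudes are fabricated, and the analysis is reduced to a single Pearson correlation for illustration.

```python
# Hypothetical sketch: correlate per-trial software emotion scores with EMG amplitudes.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_trials = 40

# Fabricated per-trial "happiness" score from the software and zygomaticus EMG amplitude.
software_happy = rng.random(n_trials)
emg_zygomaticus = 0.8 * software_happy + rng.normal(0, 0.1, n_trials)

r, p = pearsonr(software_happy, emg_zygomaticus)
print(f"r = {r:.2f}, p = {p:.3f}")
```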


Hierarchical recognition scheme for human facial expression recognition systems.

  • Muhammad Hameed Siddiqi‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2013‎

Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.
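
The subject-based n-fold cross-validation setting can be sketched with scikit-learn as below, using GroupKFold so that no subject contributes to both training and test folds; the features, labels, and subject IDs are fabricated, and plain LDA stands in for the paper's full hierarchical system.

```python
# Hypothetical sketch: subject-wise cross-validation of an LDA expression classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(5)
n_samples, n_features = 120, 30
X = rng.normal(size=(n_samples, n_features))   # extracted facial features (fabricated)
y = rng.integers(0, 6, n_samples)              # six expression classes
subjects = np.repeat(np.arange(12), 10)        # 12 subjects, 10 samples each

cv = GroupKFold(n_splits=6)  # folds never mix samples from the same subject
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv, groups=subjects)
print(f"Mean subject-wise CV accuracy: {scores.mean():.2f}")
```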


Correlations Between Psychological Status and Perception of Facial Expression.

  • Sujin Bae‎ et al.
  • Psychiatry investigation‎
  • 2022‎

Facial affect recognition is associated with neuropsychological status and psychiatric diseases. We hypothesized that facial affect recognition is associated with psychological status and perception of other affects.


Gamma-band activity reflects attentional guidance by facial expression.

  • Kathrin Müsch‎ et al.
  • NeuroImage‎
  • 2017‎

Facial expressions attract attention due to their motivational significance. Previous work focused on attentional biases towards threat-related, fearful faces, although healthy participants tend to avoid mild threat. Growing evidence suggests that neuronal gamma (>30Hz) and alpha-band activity (8-12Hz) play an important role in attentional selection, but it is unknown if such oscillatory activity is involved in the guidance of attention through facial expressions. Thus, in this magnetoencephalography (MEG) study we investigated whether attention is shifted towards or away from fearful faces and characterized the underlying neuronal activity in these frequency ranges in forty-four healthy volunteers. We employed a covert spatial attention task using neutral and fearful faces as task-irrelevant distractors and emotionally neutral Gabor patches as targets. Participants had to indicate the tilt direction of the target. Analysis of the neuronal data was restricted to the responses to target Gabor patches. We performed statistical analysis at the sensor level and used subsequent source reconstruction to localize the observed effects. Spatially selective attention effects in the alpha and gamma band were revealed in parieto-occipital regions. We observed an attentional cost of processing the face distractors, as reflected in lower task performance on targets with short stimulus onset asynchrony (SOA <150ms) between faces and targets. On the neuronal level, attentional orienting to face distractors led to enhanced gamma band activity in bilateral occipital and parietal regions, when fearful faces were presented in the same hemifield as targets, but only in short SOA trials. Our findings provide evidence that both top-down and bottom-up attentional biases are reflected in parieto-occipital gamma-band activity.


Neural Correlates of Facial Expression Recognition in Earthquake Witnesses.

  • Francesca Pistoia‎ et al.
  • Frontiers in neuroscience‎
  • 2019‎

Major adverse events, like an earthquake, trigger different kinds of emotional dysfunctions or psychiatric disorders in the exposed subjects. Recent literature has also shown that exposure to natural disasters can increase threat detection. In particular, we previously found a selective enhancement in the ability to read emotional facial expressions in L'Aquila earthquake witnesses, suggesting hypervigilance to stimuli signaling a threat. In light of previous neuroimaging data showing that trauma exposure is related to derangement of resting-state brain activity, in the present study we investigated the neurofunctional changes related to the recognition of emotional faces in L'Aquila earthquake witnesses. Specifically, we tested the relationships between accuracy in recognizing facial expressions and activity of the visual network (VN) and of the default-mode network (DMN). Resting-state functional connectivity (FC) with the main hub of the VN (primary, ventral, right-dorsal, and left-dorsal visual cortices) and DMN (posterior cingulate/precuneus, medial prefrontal, and right and left inferior parietal cortices) was investigated through a seed-based functional magnetic resonance imaging (fMRI) analysis in both earthquake-exposed subjects and non-exposed persons who did not live in an earthquake-affected area. The results showed that, in earthquake-exposed subjects, there is a significant reduction in the correlation between accuracy in recognizing facial expressions and the FC of the dorsal seed of the VN with the right inferior occipito-temporal cortex and the left lateral temporal cortex, and of two parietal seeds of DMN, i.e., lower parietal and medial prefrontal cortex, with the precuneus bilaterally. These findings suggest that a functional modification of brain systems involved in detecting and interpreting emotional faces may represent the neurophysiological basis of the specific "emotional expertise" observed in the earthquake witnesses.
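
As a schematic of seed-based FC and its relation to behaviour, the sketch below correlates a seed time series with a target region per subject and then correlates those FC values with recognition accuracy across subjects; all time series and accuracy values are fabricated, and the full fMRI preprocessing is omitted.

```python
# Hypothetical sketch: seed-based functional connectivity (Pearson correlation of a
# seed time series with a target ROI), related to behavioural accuracy across subjects.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_subjects, n_timepoints = 25, 200

fc_values, accuracies = [], []
for _ in range(n_subjects):
    seed_ts = rng.normal(size=n_timepoints)                    # e.g., dorsal VN seed
    target_ts = 0.4 * seed_ts + rng.normal(size=n_timepoints)  # e.g., occipito-temporal ROI
    fc, _ = pearsonr(seed_ts, target_ts)
    fc_values.append(fc)
    accuracies.append(rng.uniform(0.6, 1.0))                   # recognition accuracy

r, p = pearsonr(fc_values, accuracies)
print(f"FC-behaviour correlation across subjects: r = {r:.2f}, p = {p:.3f}")
```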



[Publications Per Year chart: Year vs. Count]