Factors influencing students' learning satisfaction may differ between face-to-face and non-face-to-face flipped learning. For non-face-to-face flipped learning, which was widely employed during the COVID-19 pandemic, it is necessary to examine impacts on learning satisfaction, which may depend on professor-student interaction rather than on individual competencies such as self-directed learning (SDL) readiness. This descriptive study, conducted from 2 March 2019 to 24 June 2020, included 89 second-year nursing students taught with flipped learning (28 face-to-face, 61 non-face-to-face). Students completed questionnaires about learning satisfaction, SDL readiness, and professor-student interaction. The data, collected using e-surveys, were analyzed using descriptive statistics, t-tests, ANOVA, Pearson's correlation, and multiple stepwise regression with IBM SPSS Statistics 25.0. The total average score of learning satisfaction (38.19 ± 6.04) was positively correlated with SDL readiness (r = 0.56, p < 0.001) and professor-student interaction (r = 0.36, p = 0.001), and total learning satisfaction differed significantly between the face-to-face and non-face-to-face groups (t = 5.28, p = 0.024). SDL readiness and professor-student interaction were also significant influencing factors, along with face-to-face flipped learning, for total learning satisfaction (F = 18.00, p < 0.001, explanatory power = 36.7%), suggesting that flipped learners in non-face-to-face contexts need to increase engagement beyond professor-student interaction.
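For readers who want to see the kind of analysis this abstract describes, the sketch below illustrates a forward stepwise multiple regression in Python (statsmodels) rather than SPSS; the variable names and simulated data are hypothetical, not the study's.

```python
# A minimal sketch (not the authors' code) of a forward stepwise multiple
# regression of learning satisfaction on hypothetical predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 89  # sample size reported in the abstract
df = pd.DataFrame({
    "sdl_readiness": rng.normal(3.5, 0.5, n),
    "interaction": rng.normal(3.0, 0.6, n),
    "face_to_face": rng.integers(0, 2, n),   # 1 = face-to-face group
})
df["satisfaction"] = (2.0 * df["sdl_readiness"] + 1.0 * df["interaction"]
                      + 1.5 * df["face_to_face"] + rng.normal(0, 1, n))

def forward_stepwise(data, outcome, candidates, alpha=0.05):
    """Add predictors one at a time, keeping the one with the smallest
    p-value below alpha at each step."""
    selected = []
    while True:
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            X = sm.add_constant(data[selected + [c]])
            pvals[c] = sm.OLS(data[outcome], X).fit().pvalues[c]
        if not pvals:
            break
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
    return sm.OLS(data[outcome], sm.add_constant(data[selected])).fit()

model = forward_stepwise(df, "satisfaction",
                         ["sdl_readiness", "interaction", "face_to_face"])
print(model.summary())   # coefficients, p-values and R-squared of the retained predictors
```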
The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organisation and their status as cortical areas with well-defined boundaries remain unclear. Here we test these regions for "faciotopy", a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organisation. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distances between the features in a face. Faciotopy would be, to our knowledge, the first example of a cortical map reflecting the topology not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance.
Gaze and language are major pillars in multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is twofold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in face-to-face interaction; and (ii) to propose a computational model for multimodal communication that predicts gaze direction using high-level speech features. Twenty-eight pairs of participants took part in data collection. The experimental setting was a mock job interview. Eye movements were recorded for both participants. The speech data were annotated according to the ISO 24617-2 Standard for Dialogue Act Annotation, as well as with manual tags based on previous social gaze studies. A comparative analysis was conducted with Convolutional Neural Network (CNN) models based on two architectures, VGGNet and ResNet. The results showed that the frequency and duration of gaze differ significantly depending on the participant's role. Moreover, the ResNet models achieved over 70% accuracy in predicting gaze direction.
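The sketch below illustrates, in PyTorch, what a small ResNet-style classifier predicting gaze direction (face gaze vs. aversion) from windows of speech features might look like; the feature dimensionality, window length, and block design are illustrative assumptions, not the authors' exact VGGNet/ResNet models.

```python
# A minimal ResNet-style sketch for gaze-direction prediction from speech features.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # skip connection, the defining ResNet idea

class GazeNet(nn.Module):
    def __init__(self, n_features=32, n_classes=2):
        super().__init__()
        self.stem = nn.Conv1d(n_features, 64, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):               # x: (batch, n_features, time)
        h = self.blocks(self.stem(x))
        return self.fc(self.pool(h).squeeze(-1))

model = GazeNet()
dummy = torch.randn(8, 32, 50)          # 8 windows, 32 features, 50 time steps
print(model(dummy).shape)                # -> torch.Size([8, 2]): face gaze vs. aversion logits
```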
Face cells are neurons that respond more to faces than to non-face objects. They are found in clusters in the inferotemporal cortex, are thought to process faces specifically, and hence have been studied almost exclusively with faces. Analyzing neural responses in and around macaque face patches to hundreds of objects, we found graded response profiles for non-face objects that predicted the degree of face selectivity and provided information on face-cell tuning beyond that obtained from actual faces. This relationship between non-face and face responses was not predicted by color or simple shape properties, but by information encoded in deep neural networks trained on general object classification rather than on face classification. These findings contradict the long-standing assumption that face versus non-face selectivity emerges from face-specific features, and they challenge the practice of focusing on only the most effective stimulus. They provide evidence instead that category-selective neurons are best understood by their tuning directions in a domain-general object space.
This study aimed to elucidate whether distinct early processes underlie the perception of our own face, or whether self-face perception instead relies on the same processes that support the perception of highly familiar faces. To this end, we recorded EEG activity while participants performed a facial recognition task in which they had to discriminate between their own face, a friend's face, and an unknown face. We analyzed the event-related potentials (ERPs) to characterize the time course of neural processes involved in different stages of self-face recognition. Our results show that the N170 component was not sensitive to the self-face. In contrast, the subsequent P200 component distinguished between the self-face and the other faces. Finally, N250 amplitude increased as a function of face familiarity. Overall, our data suggest that self-face recognition emerges neither at the first stage of encoding facial information nor at a later stage when familiarity is processed. Rather, the distinctive processing of the self-face arises at an intermediate stage (~200 ms), as indicated by a lower P200 amplitude. This could be taken as an indication that self-face recognition is facilitated by a reduced need for attentional resources. In sum, our results suggest that the self-face is more than a highly familiar face.
We present the first analysis of face-to-face contact network data from Niakhar, Senegal. Participants in a cluster-randomized influenza vaccine trial were interviewed about their contact patterns when they reported symptoms during their weekly household surveillance visit. We employ a negative binomial model to estimate effects of covariates on contact degree. We estimate the mean contact degree for asymptomatic Niakhar residents to be 16.5 (95% CI 14.3, 18.7) in the morning and 14.8 in the afternoon (95% CI 12.7, 16.9). We estimate that symptomatic people make 10% fewer contacts than asymptomatic people (95% CI 5%, 16%; p = 0.006), and those aged 0-5 make 33% fewer contacts than adults (95% CI 29%, 37%; p < 0.001). By explicitly modelling the partial rounding pattern observed in our data, we make inference for both the underlying (true) distribution of contacts and the reported distribution. We created an estimator for homophily by compound (household) membership and estimate that 48% of contacts by symptomatic people are made to their own compound members in the morning (95% CI 45%, 52%) and 60% in the afternoon/evening (95% CI 56%, 64%). We did not find a significant effect of symptom status on compound homophily. We compare our findings to those from other countries and make design recommendations for future surveys.
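A minimal sketch of the kind of negative binomial regression described above, fitted with statsmodels on simulated data; the covariate coding (symptom status, age 0-5) and the simulated counts are hypothetical, not the study's data.

```python
# Negative binomial regression of contact degree on hypothetical covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "symptomatic": rng.integers(0, 2, n),
    "age_0_5": rng.integers(0, 2, n),
})
# Simulate counts whose log-mean drops for symptomatic people and young children
mu = np.exp(np.log(16.5) - 0.10 * df["symptomatic"] - 0.40 * df["age_0_5"])
df["contacts"] = rng.negative_binomial(n=5, p=5 / (5 + mu))

X = sm.add_constant(df[["symptomatic", "age_0_5"]])
model = sm.GLM(df["contacts"], X, family=sm.families.NegativeBinomial()).fit()
# exp(coefficient) - 1 approximates the proportional change in expected contact degree
print(np.exp(model.params) - 1)
```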
Face-to-face interactions are important for a variety of individual behaviors and outcomes. In recent years, a number of human sensor technologies have been proposed to incorporate direct observations into behavioral studies of face-to-face interactions. One of the most promising emerging technologies is the application of active Radio Frequency Identification (RFID) badges. They are increasingly applied in behavioral studies because of their low cost, straightforward applicability, and moderate ethical concerns. However, despite the attention that RFID badges have recently received, there is a lack of systematic tests of how validly RFID badges measure face-to-face interactions. With two studies, we aim to fill this gap. Study 1 (N = 11) examines how well data assessed with RFID badges correspond with video data of the same interactions (construct validity) and how this fit can be improved using straightforward data processing strategies. The analyses show that the RFID badges have a sensitivity of 50%, which can be raised to 65% when flickering signals with gaps of less than 75 s are interpolated. The specificity is relatively less affected by this interpolation (97% before interpolation, 94.7% after), resulting in an improved overall accuracy of the measurement. In Study 2 (N = 73), we show that self-report data on social interactions correspond closely with data gathered with the RFID badges (criterion validity).
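The gap-interpolation step can be illustrated with a short sketch: contact detections separated by gaps shorter than 75 s are bridged before sensitivity and specificity are computed against a video-coded ground truth. The one-sample-per-second signal and the synthetic data below are assumptions for illustration only.

```python
# Bridging short gaps in a flickering badge signal, then scoring against ground truth.
import numpy as np

def interpolate_gaps(detected, max_gap=75):
    """Fill runs of 0s shorter than max_gap samples that lie between detections."""
    filled = detected.copy()
    on = np.flatnonzero(detected)
    for a, b in zip(on[:-1], on[1:]):
        if 1 < b - a <= max_gap:
            filled[a:b] = 1
    return filled

def sensitivity_specificity(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
truth = np.zeros(600, dtype=int); truth[100:400] = 1           # one long true contact
detected = truth * (rng.random(600) > 0.5).astype(int)         # flickering badge signal

print(sensitivity_specificity(detected, truth))                     # before interpolation
print(sensitivity_specificity(interpolate_gaps(detected), truth))   # after interpolation
```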
The precise role of the fusiform face area (FFA) in face processing remains controversial. In this study, we investigated to what degree FFA activation reflects additional functions beyond face perception. Seven volunteers underwent rapid event-related functional magnetic resonance imaging while they performed a face-encoding and a face-recognition task. During face encoding, activity in the FFA for individual faces predicted whether each face was subsequently remembered or forgotten. During face recognition, however, FFA activity did not differ between consciously remembered and forgotten faces, but it did differentiate whether or not a face had been seen previously. This demonstrates a dissociation between overt recognition and unconscious discrimination of stimuli, suggesting that physiological processes of face recognition can take place even if not all of their operations are made available to consciousness.
Faces transmit a wealth of social information. How this information is exchanged between face-processing centers and brain areas supporting social cognition remains largely unclear. Here we identify these routes using resting state functional magnetic resonance imaging in macaque monkeys. We find that face areas functionally connect to specific regions within frontal, temporal, and parietal cortices, as well as subcortical structures supporting emotive, mnemonic, and cognitive functions. This establishes the existence of an extended face-recognition system in the macaque. Furthermore, the face patch resting state networks and the default mode network in monkeys show a pattern of overlap akin to that between the social brain and the default mode network in humans: this overlap specifically includes the posterior superior temporal sulcus, medial parietal, and dorsomedial prefrontal cortex, areas supporting high-level social cognition in humans. Together, these results reveal the embedding of face areas into larger brain networks and suggest that the resting state networks of the face patch system offer a new, easily accessible venue into the functional organization of the social brain and into the evolution of possibly uniquely human social skills.
The primate brain contains distinct areas densely populated by face-selective neurons. One of these, face-patch ML, contains neurons selective for contrast relationships between face parts. Such contrast-relationships can serve as powerful heuristics for face detection. However, it is unknown whether neurons with such selectivity actually support face-detection behavior. Here, we devised a naturalistic face-detection task and combined it with fMRI-guided pharmacological inactivation of ML to test whether ML is of critical importance for real-world face detection. We found that inactivation of ML impairs face detection. The effect was anatomically specific, as inactivation of areas outside ML did not affect face detection, and it was categorically specific, as inactivation of ML impaired face detection while sparing body and object detection. These results establish that ML function is crucial for detection of faces in natural scenes, performing a critical first step on which other face processing operations can build.
In a conversation, recognising the speaker's social action (e.g., a request) early may help potential next speakers understand the intended message quickly and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals, and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expression of two fundamental social actions in conversation: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social-action-specific visual information.
The Montgomery-Åsberg Depression Rating Scale (MADRS) is a frequently used observer-rated depression scale. In the present study, a telephone rating was compared with a face-to-face rating in 66 primary care patients with minor or mild major depression. The aim of the present study was to assess the validity of administration by telephone. An additional objective was to study the validity of the first item, 'apparent sadness', the only item based purely on observation.
Parkinson's disease is associated with impaired ability to recognize emotional facial expressions. In addition to a visual processing disorder, a visual recognition disorder may be involved in these patients. Pareidolia is a type of complex visual illusion that permits the interpretation of a vague stimulus as something known to the observer. Parkinson's patients experience pareidolic illusions. N170 and N250 waveforms are two event-related potentials (ERPs) involved in emotional facial expression recognition.
Studies examining the neural correlates of face perception and recognition in humans have revealed multiple brain regions that appear to play a specialized role in face processing. These include an anterior portion of perirhinal cortex (PrC) that appears to be homologous to the face-selective 'anterior face patch' recently reported in non-human primates. Electrical stimulation studies in the macaque indicate that the anterior face patch is strongly connected with other face-selective patches of cortex, even in the absence of face stimuli. The intrinsic functional connectivity of face-selective PrC and other regions of the face-processing network in humans is currently less well understood. Here, we examined resting-state fMRI connectivity across five face-selective regions in the right hemisphere that were identified with separate functional localizer scans: the PrC, amygdala (Amg), superior temporal sulcus, fusiform face area (FFA), and occipital face area. A partial correlation technique, controlling for fluctuations in occipitotemporal cortex that were not face-specific, revealed connectivity between the PrC and the FFA, as well as between the PrC and the Amg. When examining the 'unique' connectivity of PrC within this face-processing network, we found that the connectivity between the PrC and the FFA, as well as that between the PrC and the Amg, persisted even after controlling for potential mediating effects of other face-selective regions. Lastly, we examined the behavioral relevance of PrC connectivity by relating inter-individual differences in resting-state fluctuations to differences in behavioral performance on a forced-choice recognition memory task that involved judgments on upright and inverted faces. This analysis revealed a significant correlation between the accuracy advantage for upright faces (i.e., the face inversion effect) and the strength of connectivity between the PrC and the FFA. Together, these findings point to a high degree of functional integration of face-selective aspects of PrC in the face-processing network, with notable behavioral relevance.
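The partial-correlation logic can be sketched as follows: regress a shared, non-face-specific occipitotemporal signal out of both regions' time series and correlate the residuals. The synthetic time series and variable names below are illustrative, not the study's pipeline.

```python
# Partial correlation between two regions' signals, controlling for a shared confound.
import numpy as np

def partial_corr(x, y, confound):
    """Pearson correlation of x and y after removing the confound from both."""
    C = np.column_stack([np.ones_like(confound), confound])
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
t = 300                                  # number of fMRI volumes
global_ot = rng.normal(size=t)           # shared, non-face-specific fluctuation
prc = 0.6 * global_ot + rng.normal(size=t)
ffa = 0.6 * global_ot + 0.4 * prc + rng.normal(size=t)

print(np.corrcoef(prc, ffa)[0, 1])            # raw correlation
print(partial_corr(prc, ffa, global_ot))      # connectivity after controlling for the confound
```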
In response to the COVID-19 pandemic, many governments around the world now recommend, or require, that their citizens cover the lower half of their face in public. Consequently, many people now wear surgical face masks in public. We investigated whether surgical face masks affected the performance of human observers, and a state-of-the-art face recognition system, on tasks of perceptual face matching. Participants judged whether two simultaneously presented face photographs showed the same person or two different people. We superimposed images of surgical masks over the faces, creating three different mask conditions: control (no masks), mixed (one face wearing a mask), and masked (both faces wearing masks). We found that surgical face masks have a large detrimental effect on human face matching performance, and that the degree of impairment is the same regardless of whether one or both faces in each pair are masked. Surprisingly, this impairment is similar in size for both familiar and unfamiliar faces. When matching masked faces, human observers are biased to reject unfamiliar faces as "mismatches" and to accept familiar faces as "matches". Finally, the face recognition system showed very high classification accuracy for control and masked stimuli, even though it had not been trained to recognise masked faces. However, accuracy fell markedly when one face was masked and the other was not. Our findings demonstrate that surgical face masks impair the ability of humans, and naïve face recognition systems, to perform perceptual face matching tasks. Identification decisions for masked faces should be treated with caution.
Texting has become one of the most prevalent ways to interact socially, particularly among youth; however, the effects of text messaging on social brain functioning are unknown. Guided by the biobehavioral synchrony frame, this pre-registered study utilized hyperscanning EEG to evaluate interbrain synchrony during face-to-face versus texting interactions. Participants included 65 mother-adolescent dyads observed during face-to-face conversation compared to texting from different rooms. Results indicate that both face-to-face and texting communication elicit significant neural synchrony compared to surrogate data, demonstrating for the first time brain-to-brain synchrony during texting. Direct comparison between the two interactions highlighted 8 fronto-temporal interbrain links that were significantly stronger in the face-to-face interaction compared to texting. Our findings suggest that partners co-create a fronto-temporal network of inter-brain connections during live social exchanges. The degree of improvement in the partners' right-frontal-right-frontal connectivity from texting to the live social interaction correlated with greater behavioral synchrony, suggesting that this well-researched neural connection may be specific to face-to-face communication. Our findings suggest that while technology-based communication allows humans to synchronize from afar, face-to-face interactions remain the superior mode of communication for interpersonal connection. We conclude by discussing the potential benefits and drawbacks of the pervasive use of texting, particularly among youth.
In a highly dynamic visual environment the human brain needs to rapidly differentiate complex visual patterns, such as faces. Here, we defined the temporal frequency tuning of cortical face-sensitive areas for face discrimination. Six observers were tested with functional magnetic resonance imaging (fMRI) when the same or different faces were presented in blocks at 11 frequency rates (ranging from 1 to 12 Hz). We observed a larger fMRI response for different than same faces - the repetition suppression/adaptation effect - across all stimulation frequency rates. Most importantly, the magnitude of the repetition suppression effect showed a typical Gaussian-shaped tuning function, peaking on average at 6 Hz for all face-sensitive areas of the ventral occipito-temporal cortex, including the fusiform and occipital "face areas" (FFA and OFA), as well as the superior temporal sulcus. This effect was due both to a maximal response to different faces in a range of 3 to 6 Hz and to a sharp drop of the blood oxygen level dependent (BOLD) signal from 6 Hz onward when the same face was repeated during a block. These observations complement recent scalp EEG observations (Alonso-Prieto et al., 2013), indicating that the cortical face network can discriminate each individual face when these successive faces are presented every 160-170 ms. They also suggest that a relatively fast 6 Hz rate may be needed to isolate the contribution of high-level face perception processes during behavioral discrimination tasks. Finally, these findings carry important practical implications, allowing investigators to optimize the stimulation frequency rates for observing the largest repetition suppression effects to faces and other visual forms in the occipito-temporal cortex.
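As an illustration of the tuning-function analysis, the sketch below fits a Gaussian to repetition-suppression magnitude as a function of stimulation frequency; the data points are made up to peak near 6 Hz and are not the study's values.

```python
# Fitting a Gaussian-shaped tuning function to repetition-suppression magnitudes.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, mu, sigma, baseline):
    return baseline + amp * np.exp(-(f - mu) ** 2 / (2 * sigma ** 2))

freqs = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12])          # 11 stimulation rates (Hz)
suppression = np.array([0.1, 0.2, 0.35, 0.5, 0.6, 0.7,
                        0.6, 0.45, 0.35, 0.25, 0.15])           # illustrative values only

params, _ = curve_fit(gaussian, freqs, suppression, p0=[0.6, 6, 2, 0.1])
print(f"peak frequency ~ {params[1]:.1f} Hz")
```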
In functional magnetic resonance imaging studies, object selectivity is defined as a higher neural response to an object category than to other object categories. Importantly, object selectivity is widely considered a neural signature of an area's functional specialization for processing its preferred object category in the human brain. However, the behavioral significance of object selectivity remains unclear. In the present study, we used an individual-differences approach to correlate participants' face selectivity in face-selective regions with their behavioral performance in face recognition measured outside the scanner, in a large sample of healthy adults. Face selectivity was defined as the z score of activation for the contrast of faces versus non-face objects, and face recognition ability was indexed as the normalized residual of the accuracy in recognizing previously learned faces after regressing out that for non-face objects in an old/new memory task. We found that participants with higher face selectivity in the fusiform face area (FFA) and the occipital face area (OFA), but not in the posterior part of the superior temporal sulcus (pSTS), showed higher face recognition ability. Importantly, the association between face selectivity in the FFA and face recognition ability could not be accounted for by FFA responses to objects or by behavioral performance in object recognition, suggesting that the association is domain-specific. Finally, the association is reliable, as confirmed by replication in another independent participant group. In sum, our findings provide empirical evidence for the validity of using object selectivity as a neural signature for defining object-selective regions in the human brain.
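The individual-differences analysis can be sketched as follows: residualize face-recognition accuracy on object-recognition accuracy and correlate the residual with each participant's FFA face selectivity. The simulated scores and variable names below are hypothetical.

```python
# Residualized face-recognition ability correlated with face-selectivity z scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 200                                          # participants
object_acc = rng.normal(0.80, 0.05, n)           # object-recognition accuracy
ffa_selectivity = rng.normal(2.0, 0.5, n)        # z score, faces > objects contrast
face_acc = 0.5 * object_acc + 0.03 * ffa_selectivity + rng.normal(0, 0.03, n)

# Regress face accuracy on object accuracy and keep the residual as "face ability"
slope, intercept, *_ = stats.linregress(object_acc, face_acc)
face_ability = face_acc - (intercept + slope * object_acc)

r, p = stats.pearsonr(ffa_selectivity, face_ability)
print(f"r = {r:.2f}, p = {p:.3g}")
```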
It has been shown that human faces are processed holistically (i.e. as indecomposable wholes, rather than by their component parts) and this holistic face processing is linked to brain activity in face-responsive brain regions. Although several brain regions outside of the face-responsive network are also sensitive to relational processing and perceptual grouping, whether these non-face-responsive regions contribute to holistic processing remains unclear. Here, we investigated holistic face processing in the composite face paradigm both within and outside of face-responsive brain regions. We recorded participants' brain activity using fMRI while they performed a composite face task. Behavioural results indicate that participants tend to judge the same top face halves as different when they are aligned with different bottom face halves but not when they are misaligned, demonstrating a composite face effect. Neuroimaging results revealed significant differences in responses to aligned and misaligned faces in the lateral occipital complex (LOC), and trends in the anterior part of the fusiform face area (FFA2) and transverse occipital sulcus (TOS), suggesting that these regions are sensitive to holistic versus part-based face processing. Furthermore, the retrosplenial cortex (RSC) and the parahippocampal place area (PPA) showed a pattern of neural activity consistent with a holistic representation of face identity, which also correlated with the strength of the behavioural composite face effect. These results suggest that neural activity in brain regions both within and outside of the face-responsive network contributes to the composite-face effect.
The spatial coordinate system in which a stimulus representation is embedded is known as its reference frame. Every visual representation has a reference frame [1], and the visual system uses a variety of reference frames to efficiently code visual information [e.g., 1-5]. The representation of faces in early stages of visual processing depends on retino-centered reference frames, but little is known about the reference frames that code the high-level representations used to make judgements about faces. Here, we focus on a rare and striking disorder of face perception, hemi-prosopometamorphopsia (hemi-PMO), to investigate these reference frames. After a left splenium lesion, Patient A.D. perceives features on the right side of faces as if they had melted. The same features were distorted when faces were presented in either visual field, at different in-depth rotations, and at different picture-plane orientations including upside-down. A.D.'s results indicate that faces are aligned to a view- and orientation-independent face template encoded in a face-centered reference frame, that these face-centered representations are present in both the left and right hemispheres, and that the representations of the left and right halves of a face are dissociable.