Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1-20 of 347.

Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures.

  • Mélaine Cherdieu‎ et al.
  • Frontiers in psychology‎
  • 2017‎

Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to the learner's body. Two groups of participants were asked to watch a video lecture on forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect, however, was significant only in the long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.


Emotions affect the recognition of hand gestures.

  • Carmelo M Vicario‎ et al.
  • Frontiers in human neuroscience‎
  • 2013‎

The body is closely tied to the processing of social and emotional information. The purpose of this study was to determine whether a relationship between emotions and social attitudes conveyed through gestures exists. Thus, we tested the effect of pro-social (i.e., happy face) and anti-social (i.e., angry face) emotional primes on the ability to detect socially relevant hand postures (i.e., pictures depicting an open/closed hand). In particular, participants were required to establish, as quickly as possible, whether the test stimulus (i.e., a hand posture) was the same as or different from the reference stimulus (i.e., a hand posture) previously displayed on the computer screen. Results show that facial primes, displayed between the reference and the test stimuli, influence the recognition of hand postures, according to the social attitude implicitly related to the stimulus. We found that perception of pro-social (i.e., happy face) primes resulted in slower RTs in detecting the open hand posture as compared to the closed hand posture. Vice versa, perception of the anti-social (i.e., angry face) prime resulted in slower RTs in detecting the closed hand posture compared to the open hand posture. These results suggest that the social attitude implicitly conveyed by the displayed stimuli might represent the conceptual link between emotions and gestures.


Gestures orchestrate brain networks for language understanding.

  • Jeremy I Skipper‎ et al.
  • Current biology : CB‎
  • 2009‎

Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension [1]. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech [2-3]. In contrast, cospeech gestures display semantic information relevant to the intended message [4-6]. We show that when language comprehension is accompanied by observable face movements, there is strong functional connectivity between areas of cortex involved in motor planning and production and posterior areas thought to mediate phonological aspects of speech perception. In contrast, language comprehension accompanied by cospeech gestures is associated with tuning of and strong functional connectivity between motor planning and production areas and anterior areas thought to mediate semantic aspects of language comprehension. These areas are not tuned to hand and arm movements that are not meaningful. Results suggest that when gestures accompany speech, the motor system works with language comprehension areas to determine the meaning of those gestures. Results also suggest that the cortical networks underlying language comprehension, rather than being fixed, are dynamically organized by the type of contextual information available to listeners during face-to-face communication.


Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

  • Paul Bremner‎ et al.
  • Frontiers in psychology‎
  • 2016‎

Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of multi-modal communication equally well for human and robot performances.


Bonobo and chimpanzee gestures overlap extensively in meaning.

  • Kirsty E Graham‎ et al.
  • PLoS biology‎
  • 2018‎

Cross-species comparison of great ape gesturing has so far been limited to the physical form of gestures in the repertoire, without questioning whether gestures share the same meanings. Researchers have recently catalogued the meanings of chimpanzee gestures, but little is known about the gesture meanings of our other closest living relative, the bonobo. The bonobo gestural repertoire overlaps by approximately 90% with that of the chimpanzee, but such overlap might not extend to meanings. Here, we first determine the meanings of bonobo gestures by analysing the outcomes of gesturing that apparently satisfy the signaller. Around half of bonobo gestures have a single meaning, while half are more ambiguous. Moreover, all but 1 gesture type have distinct meanings, achieving a different distribution of intended meanings to the average distribution for all gesture types. We then employ a randomisation procedure in a novel way to test the likelihood that the observed between-species overlap in the assignment of meanings to gestures would arise by chance under a set of different constraints. We compare a matrix of the meanings of bonobo gestures with a matrix for those of chimpanzees against 10,000 randomised iterations of matrices constrained to the original data at 4 different levels. We find that the similarity between the 2 species is much greater than would be expected by chance. Bonobos and chimpanzees share not only the physical form of the gestures but also many gesture meanings.
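The randomisation procedure described in this abstract can be illustrated with a minimal Python sketch. The matrices, the overlap measure, and the shuffling constraint below are invented placeholders for illustration only; they are not the authors' data, code, or exact statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gesture-by-meaning matrices (rows: shared gesture types,
# columns: meaning categories); entries count how often each gesture
# achieved each meaning. Real values would come from field observations.
bonobo = rng.integers(0, 10, size=(12, 6))
chimp = rng.integers(0, 10, size=(12, 6))

def overlap(a, b):
    """Toy similarity measure: correlation of the flattened matrices."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

observed = overlap(bonobo, chimp)

# Null distribution: shuffle which meaning profile belongs to which bonobo
# gesture (one possible constraint), then recompute the overlap 10,000 times.
null = np.array([
    overlap(bonobo[rng.permutation(len(bonobo))], chimp)
    for _ in range(10_000)
])

p_value = np.mean(null >= observed)
print(f"observed overlap = {observed:.3f}, p = {p_value:.4f}")
```

An observed overlap lying far in the right tail of the null distribution (small p) would indicate, as the authors report, more between-species similarity than expected by chance.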


Examining the value of body gestures in social reward contexts.

  • Elin H Williams‎ et al.
  • NeuroImage‎
  • 2020‎

Brain regions associated with the processing of tangible rewards (such as money, food, or sex) are also involved in anticipating social rewards and avoiding social punishment. To date, studies investigating the neural underpinnings of social reward have presented feedback via static or dynamic displays of faces to participants. However, research demonstrates that participants find another type of social stimulus, namely, biological motion, rewarding as well, and exert effort to engage with this type of stimulus. Here we examine whether feedback presented via body gestures in the absence of facial cues also acts as a rewarding stimulus and recruits reward-related brain regions. To achieve this, we investigated the neural underpinnings of anticipating social reward and avoiding social disapproval presented via gestures alone, using a social incentive delay task. As predicted, the anticipation of social reward and avoidance of social disapproval engaged reward-related brain regions, including the nucleus accumbens, in a manner similar to previous studies' reports of feedback presented via faces and money. This study provides the first evidence that human body motion alone engages brain regions associated with reward processing in a similar manner to other social (i.e. faces) and non-social (i.e. money) rewards. The findings advance our understanding of social motivation in human perception and behavior.


Neural systems of visual attention responding to emotional gestures.

  • Tobias Flaisch‎ et al.
  • NeuroImage‎
  • 2009‎

Humans are the only species known to use symbolic gestures for communication. This affords a unique medium for nonverbal emotional communication with a distinct theoretical status compared to facial expressions and other biologically evolved nonverbal emotion signals. While a frown is a frown all around the world, the relation of emotional gestures to their referents is arbitrary and varies from culture to culture. The present studies examined whether such culturally based emotion displays guide visual attention processes. In two experiments, participants passively viewed symbolic hand gestures with positive, negative and neutral emotional meaning. In Experiment 1, functional magnetic resonance imaging (fMRI) measurements showed that gestures of insult and approval enhance activity in selected bilateral visual-associative brain regions devoted to object perception. In Experiment 2, dense sensor event-related brain potential recordings (ERP) revealed that emotional hand gestures are differentially processed as early as 150 ms post-stimulus. Thus, the present studies provide converging neuroscientific evidence that emotional gestures provoke the cardinal signatures of selective visual attention regarding brain structures and temporal dynamics previously shown for emotional face and body expressions. It is concluded that emotionally charged gestures are efficient in shaping selective attention processes even at the level of stimulus perception.


Dynamic emotional expressions do not modulate responses to gestures.

  • Harry Farmer‎ et al.
  • Acta psychologica‎
  • 2021‎

The tendency to imitate the actions of others appears to be a fundamental aspect of human social interaction. Emotional expressions are a particularly salient form of social stimuli (Vuilleumier & Schwartz, 2001) but their relationship to imitative behaviour is currently unclear. In this paper we report the results of five studies which investigated the effect of a target's dynamic emotional stimuli on participants' tendency to respond compatibly to the target's actions. Experiment one examined the effect of dynamic emotional expressions on the automatic imitation of opening and closing hand movements. Experiment two used the same basic paradigm but added gaze direction as an additional factor. Experiment three investigated the effect of dynamic emotional expressions on compatibility responses to handshakes. Experiment four investigated whether dynamic emotional expressions modulated responses to valenced social gestures. Finally, experiment five compared the effects of dynamic and static emotional expressions on participants' automatic imitation of finger lifting. Across all five studies we reliably elicited a compatibility effect; however, none of the studies found a significant modulating effect of emotional expression. This null effect was also supported by a random-effects meta-analysis and a series of Bayesian t-tests. Nevertheless, these results must be caveated by the fact that our studies had limited power to detect effect sizes below d = 0.4. We conclude by situating our findings within the literature, suggesting that the effect of emotional expressions on automatic imitation is, at best, minimal.


Simultaneous sEMG Classification of Hand/Wrist Gestures and Forces.

  • Francesca Leone‎ et al.
  • Frontiers in neurorobotics‎
  • 2019‎

Surface electromyography (sEMG) signals represent a promising approach for decoding the motor intention of amputees to control a multifunctional prosthetic hand in a non-invasive way. Several approaches based on proportional amplitude methods or simple thresholds on sEMG signals have been proposed to control a single degree of freedom at a time, without the possibility of increasing the number of controllable DoFs in a natural manner. Myoelectric control based on pattern recognition (PR) techniques has been introduced to add multiple DoFs while keeping the number of electrodes low and allowing the discrimination of different muscular patterns for each class of motion. However, the use of PR algorithms to simultaneously decode both gestures and forces has never been studied in depth. This paper introduces a hierarchical classification approach with the aim of assessing the desired hand/wrist gestures, as well as the desired force levels to exert during grasping tasks. A Finite State Machine was introduced to manage and coordinate three classifiers based on the Non-Linear Logistic Regression (NLR) algorithm. The classification architecture was evaluated across 31 healthy subjects. The "hand/wrist gestures classifier," introduced for the discrimination of seven hand/wrist gestures, presented a mean classification accuracy of 98.78%, while the "Spherical and Tip force classifier," created for the identification of three force levels, reached average accuracies of 98.80% and 96.09%, respectively. These results were confirmed by Linear Discriminant Analysis (LDA) with time-domain feature extraction, considered as ground truth for the final validation of the performed analysis. A Wilcoxon Signed-Rank test was carried out for the statistical comparison between NLR and LDA, with statistical significance set at p < 0.05. The comparative analysis reports no statistically significant differences in F1-score performance between NLR and LDA. This study thus shows that a non-linear classification algorithm such as NLR is as suitable as the benchmark LDA classifier for implementing an EMG pattern recognition system able both to decode hand/wrist gestures and to associate different force levels with grasping actions.
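As a rough illustration of the hierarchical scheme described here, the sketch below coordinates a gesture classifier with per-grasp force classifiers. It uses scikit-learn's LogisticRegression as a stand-in for the paper's Non-Linear Logistic Regression, synthetic arrays instead of sEMG features, and an invented gesture-to-force-classifier mapping; none of this is the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in for windowed sEMG features: 600 windows x 18 features.
X = rng.normal(size=(600, 18))
gesture_y = rng.integers(0, 7, size=600)   # 7 hand/wrist gesture classes
force_y = rng.integers(0, 3, size=600)     # 3 force levels

# Stage 1: gesture classifier. Stage 2: force classifiers consulted only for
# grasp-like gestures, loosely mimicking the finite-state-machine coordination.
gesture_clf = LogisticRegression(max_iter=1000).fit(X, gesture_y)
spherical_clf = LogisticRegression(max_iter=1000).fit(X, force_y)
tip_clf = LogisticRegression(max_iter=1000).fit(X, force_y)

GRASP_FORCE_CLF = {0: spherical_clf, 1: tip_clf}  # hypothetical gesture ids

def decode(window):
    """Return (gesture, force) for one feature window; force is None for
    non-grasp gestures."""
    gesture = int(gesture_clf.predict(window[None, :])[0])
    if gesture in GRASP_FORCE_CLF:
        return gesture, int(GRASP_FORCE_CLF[gesture].predict(window[None, :])[0])
    return gesture, None

print(decode(X[0]))
```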


Speech-associated gestures, Broca's area, and the human mirror system.

  • Jeremy I Skipper‎ et al.
  • Brain and language‎
  • 2007‎

Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or "observation-execution matching" system). We asked whether the role that Broca's area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca's area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca's area and other cortical areas because speech-associated gestures are goal-directed actions that are "mirrored"). We compared the functional connectivity of Broca's area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca's area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements.


Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.

  • Laura Bishop‎ et al.
  • Psychology of music‎
  • 2018‎

Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.


Electromyogram-Based Classification of Hand and Finger Gestures Using Artificial Neural Networks.

  • Kyung Hyun Lee‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Electromyogram (EMG) signals have been increasingly used for hand and finger gesture recognition. However, most studies have focused on the wrist and whole-hand gestures and not on individual finger (IF) gestures, which are considered more challenging. In this study, we develop EMG-based hand/finger gesture classifiers based on fixed electrode placement using machine learning methods. Ten healthy subjects performed ten hand/finger gestures, including seven IF gestures. EMG signals were measured from three channels, and six time-domain (TD) features were extracted from each channel. A total of 18 features was used to build personalized classifiers for ten gestures with an artificial neural network (ANN), a support vector machine (SVM), a random forest (RF), and a logistic regression (LR). The ANN, SVM, RF, and LR achieved mean accuracies of 0.940, 0.876, 0.831, and 0.539, respectively. One-way analyses of variance and F-tests showed that the ANN achieved the highest mean accuracy and the lowest inter-subject variance in the accuracy, respectively, suggesting that it was the least affected by individual variability in EMG signals. Using only TD features, we achieved a higher ratio of gestures to channels than other similar studies, suggesting that the proposed method can improve the system usability and reduce the computational burden.
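The abstract mentions six time-domain features per channel (18 in total across three channels) feeding an ANN. The sketch below computes six commonly used time-domain EMG features; which six the authors actually used, and the threshold values involved, are assumptions here, and the signal is synthetic.

```python
import numpy as np

def td_features(window, eps=1e-6):
    """Six common time-domain EMG features for one channel window."""
    diff = np.diff(window)
    return np.array([
        np.mean(np.abs(window)),                 # mean absolute value (MAV)
        np.sqrt(np.mean(window ** 2)),           # root mean square (RMS)
        np.var(window),                          # variance
        np.sum(np.abs(diff)),                    # waveform length
        np.sum(window[:-1] * window[1:] < -eps), # zero crossings
        np.sum(diff[:-1] * diff[1:] < -eps),     # slope sign changes
    ])

rng = np.random.default_rng(2)
window = rng.normal(size=(3, 200))               # one 3-channel signal window
features = np.concatenate([td_features(ch) for ch in window])
print(features.shape)                            # (18,) feature vector, which
# could then be fed to a classifier such as sklearn's MLPClassifier (ANN).
```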


Comprehensibility and neural substrate of communicative gestures in severe aphasia.

  • Katharina Hogrefe‎ et al.
  • Brain and language‎
  • 2017‎

Communicative gestures can compensate for the incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also have an impact on the production of gestures. We compared the comprehensibility of the gestural communication of persons with severe aphasia and non-aphasic persons and used voxel-based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. At the group level, persons with aphasia conveyed more information via gestures than controls, indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are the selection of and flexible changes between communication channels, as well as between different types of gestures and between features of actions and objects that are expressed by gestures.


Selection of suitable hand gestures for reliable myoelectric human computer interface.

  • Maria Claudia F Castro‎ et al.
  • Biomedical engineering online‎
  • 2015‎

A myoelectrically controlled prosthetic hand requires machine-based identification of hand gestures using surface electromyogram (sEMG) signals recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and it reports a method to select these gestures so as to maximize sensitivity and specificity.


The First Random Observational Survey of Barrier Gestures against COVID-19.

  • Véronique Renault‎ et al.
  • International journal of environmental research and public health‎
  • 2021‎

In the context of COVID-19 in Belgium, face-to-face teaching activities were allowed in Belgian universities at the beginning of the 2020-2021 academic year. Nevertheless, several control measures were established to control COVID-19 transmission on the campuses. To ensure compliance with these measures, a random observational survey, based on five barrier gestures, was implemented at the University of Liege (greetings without contact, hand sanitisation, following a one-way traffic flow, wearing a mask and physical distancing). Each barrier gesture was weighted, based on experts' elicitation, and a scoring system was developed. The results were presented as a diagram (to identify the margin of improvement for each barrier gesture) and a risk management barometer. In total, 526 h of observations were performed. The study revealed that some possible improvements could be made in the management of facilities, in terms of room allocation, the functionality of hydro-alcoholic gel dispensers, floor markings and one-way traffic flow. Compliance with the barrier gestures reached an overall weighted score of 68.2 (between 0 and 100). Three barrier gestures presented a lower implementation rate and should be addressed: the use of hydro-alcoholic gel (particularly when exiting buildings), compliance with the traffic flow and the maintenance of a 1.5 m physical distance outside of the auditoriums. The methodology and tool developed in the present study can easily be applied to other settings. They were proven to be useful in managing COVID-19, as the barometer that was developed and the outcomes of this survey enabled an improved risk assessment on campuses, and identified the critical points to be addressed in any further public health communication or education messages.
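The scoring idea sketched in this abstract (weighting each barrier gesture and combining compliance rates into a 0-100 score) can be illustrated with a few lines of arithmetic. The weights and compliance rates below are invented for illustration and do not come from the survey.

```python
# Hypothetical weights (summing to 1) and observed compliance rates for the
# five barrier gestures; neither value set comes from the actual study.
weights = {"greeting without contact": 0.10, "hand sanitisation": 0.25,
           "one-way traffic flow": 0.15, "mask wearing": 0.30,
           "physical distancing": 0.20}
compliance = {"greeting without contact": 0.90, "hand sanitisation": 0.55,
              "one-way traffic flow": 0.60, "mask wearing": 0.85,
              "physical distancing": 0.50}

score = 100 * sum(weights[g] * compliance[g] for g in weights)
print(f"overall weighted compliance score: {score:.1f} / 100")
```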


Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.

  • Noëmi Eggenberger‎ et al.
  • PloS one‎
  • 2016‎

Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task.


Depth of Encoding Through Observed Gestures in Foreign Language Word Learning.

  • Manuela Macedonia‎ et al.
  • Frontiers in psychology‎
  • 2019‎

Word learning is basic to foreign language acquisition; however, it is time consuming and not always successful. Empirical studies have shown that traditional (visual) word learning can be enhanced by gestures. The gesture benefit has been attributed to depth of encoding. Gestures can lead to depth of encoding because they trigger semantic processing and sensorimotor enrichment of the novel word. However, the neural underpinning of depth of encoding is still unclear. Here, we combined an fMRI and a behavioral study to investigate word encoding online. In the scanner, participants encoded 30 novel words of an artificial language created for experimental purposes, along with their translations into the subjects' native language. Participants encoded the words three times: visually, audiovisually, and by additionally observing semantically related gestures performed by an actress. Hemodynamic activity during word encoding revealed the recruitment of cortical areas involved in stimulus processing. In this study, depth of encoding can be spelt out in terms of sensorimotor brain networks that grow larger the more sensory modalities are linked to the novel word. Word retention outside the scanner documented a positive effect of gestures in a free recall test in the short term.


Gesture's body orientation modulates the N400 for visual sentences primed by gestures.

  • Yifei He‎ et al.
  • Human brain mapping‎
  • 2020‎

The body orientation of a gesture conveys social-communicative intention and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication. To date, despite the emergence of neuroscientific literature on the role of body orientation in hand action perception, few studies have directly investigated the role of body orientation in the interaction between gesture and language. To address this research question, we carried out an electroencephalography (EEG) experiment presenting participants (n = 21) with 5-s videos of frontal and lateral communicative hand gestures (e.g., raising a hand), followed by visually presented sentences that were either congruent or incongruent with the gesture (e.g., "the mountain is high/low…"). Participants underwent a semantic probe task, judging whether a target word was related or unrelated to the gesture-sentence event. EEG results suggest that, during the perception phase of hand gestures, while both frontal and lateral gestures elicited a power decrease in both the alpha (8-12 Hz) and beta (16-24 Hz) bands, lateral versus frontal gestures elicited a reduced power decrease in the beta band, source-located to the medial prefrontal cortex. For sentence comprehension, at the critical word whose meaning was congruent/incongruent with the gesture prime, frontal gestures elicited an N400 effect for gesture-sentence incongruency. More importantly, this incongruency effect was significantly reduced for lateral gestures. These findings suggest that body orientation plays an important role in gesture perception and that its inferred social-communicative intention may influence gesture-language interaction at the semantic level.


The Gestures in 2-4-Year-Old Children With Autism Spectrum Disorder.

  • QianYing Ye‎ et al.
  • Frontiers in psychology‎
  • 2021‎

Deficits in gestures act as early signs of impairment in social interaction (SI) and communication in children with autism spectrum disorder (ASD). However, the literature on atypical gesture patterns in ASD children is contradictory. This investigation aimed to explore the atypical gesture pattern of ASD children along the dimensions of quantity, communicative function, and integration ability, and its relationship with social ability and adaptive behavior. We used a semi-structured interactive play session to evaluate the gestures of 33 ASD children (24-48 months old) and 24 typically developing (TD) children (12-36 months old). We also evaluated the social ability, adaptive behavior, and productive language of the ASD and TD children using the Adaptive Behavior Assessment System version II (ABAS-II) and the Chinese Communication Development Inventory (CCDI). Whether or not the total CCDI score was corrected, the relative frequencies of total gestures, behavior regulation (BR) gestures, SI gestures, and joint attention (JA) gestures in ASD children were lower than in TD children, as was the proportion of JA gestures. However, there was no significant group difference in the proportions of BR and SI gestures. Before adjusting for the total CCDI score, the relative frequencies of gestures without vocalization/verbalization integration and of vocalization/verbalization-integrated gestures in ASD children were lower than in TD children; after matching on the total CCDI score, only the relative frequency of gestures without vocalization/verbalization integration remained lower. Regardless of whether the total CCDI score was corrected, the relative frequency and proportion of eye-gaze-integrated gestures in ASD children were lower than in TD children, and the proportion of gestures without eye-gaze integration was higher. For ASD children, the social skills score of the ABAS-II was positively correlated with the relative frequencies of SI gestures and eye-gaze-integrated gestures; the total ABAS-II score was positively correlated with the relative frequencies of total gestures and eye-gaze-integrated gestures. In conclusion, ASD children produce fewer gestures and have deficits in JA gestures. The deficiency in integrating eye gaze and gesture is the core deficit of ASD children's gesture communication, although ASD children might be capable of integrating vocalization/verbalization into gestures. SI gestures and the ability to integrate gesture and eye gaze are related to social ability, while the quantity of gestures and the ability to integrate gesture with eye gaze are related to adaptive behavior. Clinical Trial Registration: www.ClinicalTrials.gov, identifier ChiCTR1800019679.


Towards a great ape dictionary: Inexperienced humans understand common nonhuman ape gestures.

  • Kirsty E Graham‎ et al.
  • PLoS biology‎
  • 2023‎

In the comparative study of human and nonhuman communication, ape gesturing provided the first demonstrations of flexible, intentional communication outside human language. Rich repertoires of these gestures have been described in all ape species, bar one: us. Given that the majority of great ape gestural signals are shared, and their form appears biologically inherited, this creates a conundrum: Where did the ape gestures go in human communication? Here, we test human recognition and understanding of 10 of the most frequently used ape gestures. We crowdsourced data from 5,656 participants through an online game, which required them to select the meaning of chimpanzee and bonobo gestures in 20 videos. We show that humans may retain an understanding of ape gestural communication (either directly inherited or part of more general cognition), across gesture types and gesture meanings, with information on communicative context providing only a marginal improvement in success. By assessing comprehension, rather than production, we accessed part of the great ape gestural repertoire for the first time in adult humans. Cognitive access to an ancestral system of gesture appears to have been retained after our divergence from other apes, drawing deep evolutionary continuity between their communication and our own.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here or switch to a different tab to run your search against. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org then you can log in from here to get additional features in FDI Lab - SciCrunch.org such as Collections, Saved Searches, and managing Resources.

  4. Searching

    Here is the search term that is being executed; you can type in anything you want to search for. Some tips to help with searching (a sketch of example query strings follows this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform here for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside your term to help you find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQs page to ask questions and see our tutorials.
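As a minimal illustration of the search tips listed in item 4 above, the following query strings combine the operators described there; the search terms themselves are arbitrary examples, not recommended searches.

```python
# Example query strings using the operators from the search tips.
queries = [
    '"co-speech gestures"',   # quotes match the exact phrase
    'gesture AND aphasia',    # explicit AND between terms
    'gesture OR sign',        # explicit OR between terms
    'Cerebellum -CA1',        # "-" excludes results containing a term
    '+RRID gesture',          # "+" requires the term to be present
]
for q in queries:
    print(q)
```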
