Searching across hundreds of databases

This service searches only literature that cites research resources. Please be aware that the set of searchable documents is limited to papers containing RRIDs and does not include all open-access literature.

Page 3: showing papers 41–60 of 126.

Remote Actuation Systems for Fully Wearable Assistive Devices: Requirements, Selection, and Optimization for Out-of-the-Lab Application of a Hand Exoskeleton.

  • Jan Dittli et al.
  • Frontiers in Robotics and AI
  • 2020

Wearable robots assist individuals with sensorimotor impairment in daily life, or support industrial workers in physically demanding tasks. In such scenarios, low mass and compact design are crucial factors for device acceptance. Remote actuation systems (RAS) have emerged as a popular approach in wearable robots to reduce perceived weight and increase usability. Different RAS have been presented in the literature to accommodate a wide range of applications and related design requirements. The push toward use of wearable robotics in out-of-the-lab applications in clinics, home environments, or industry has created a shift in requirements for RAS. In this context, high durability, ergonomics, and simple maintenance gain importance. However, these are only rarely considered and evaluated in research publications, despite being drivers of device abandonment by end-users. In this paper, we summarize existing approaches to RAS for wearable assistive technology in a literature review and compare their advantages and disadvantages, focusing on specific evaluation criteria for out-of-the-lab applications, to provide guidelines for the selection of RAS. Based on these insights, we present the development, optimization, and evaluation of a cable-based RAS for out-of-the-lab applications in a wearable assistive soft hand exoskeleton. The presented RAS features full wearability, high durability, high efficiency, and appealing design while fulfilling ergonomic criteria such as low mass and high wearing comfort. This work aims to support the transfer of RAS for wearable robotics from controlled lab environments to out-of-the-lab applications.


Soft Capsule Magnetic Millirobots for Region-Specific Drug Delivery in the Central Nervous System.

  • Lamar O Mair et al.
  • Frontiers in Robotics and AI
  • 2021

Small soft robotic systems are being explored for myriad applications in medicine. Specifically, magnetically actuated microrobots capable of remote manipulation hold significant potential for the targeted delivery of therapeutics and biologicals. Much previous effort in microrobotics has been dedicated to locomotion in aqueous environments and on hard surfaces. However, our human bodies are made of dense biological tissues, requiring researchers to develop new microrobots that can locomote atop tissue surfaces. Tumbling microrobots are a sub-category of these devices, capable of walking on surfaces guided by rotating magnetic fields. Using microrobots to deliver payloads to specific regions of sensitive tissues is a primary goal of medical microrobots. Central nervous system (CNS) tissues are a prime candidate given their delicate structure and highly region-specific function. Here we demonstrate surface walking of soft alginate capsules on top of a rat cortex and mouse spinal cord ex vivo, demonstrating small-molecule delivery to up to six different locations on each type of tissue with high spatial specificity. The softness of the alginate gel prevents injuries that may arise from friction with CNS tissues during millirobot locomotion. Development of this technology may be useful in clinical and preclinical applications such as drug delivery, neural stimulation, and diagnostic imaging.


Reactive optimal motion planning for a class of holonomic planar agents using reinforcement learning with provable guarantees.

  • Panagiotis Rousseas et al.
  • Frontiers in Robotics and AI
  • 2023

In control theory, reactive methods have been widely celebrated owing to their success in providing robust, provably convergent solutions to control problems. Even though such methods have long been formulated for motion planning, optimality has largely been left untreated through reactive means, with the community focusing on discrete/graph-based solutions. Although the latter exhibit certain advantages (completeness, handling of complicated state-spaces), the recent rise of Reinforcement Learning (RL) provides novel ways to address the limitations of reactive methods. The goal of this paper is to treat the reactive optimal motion planning problem through an RL framework. A policy iteration RL scheme is formulated in a manner consistent with the control-theoretic results, thus utilizing the advantages of each approach in a complementary way; RL is employed to construct the optimal input without necessitating the solution of a hard, non-linear partial differential equation. Conversely, safety, convergence, and policy improvement are guaranteed through control-theoretic arguments. The proposed method is validated in simulated synthetic workspaces, and compared against reactive methods as well as a PRM and an RRT⋆ approach. The proposed method outperforms or closely matches the latter methods, indicating its near-global optimality, while providing a solution for planning from anywhere within the workspace to the goal position.
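
To make the policy-iteration loop at the heart of this approach concrete, here is a minimal tabular sketch on a toy discrete MDP. This is an illustrative assumption only: the paper works in continuous state spaces and derives safety and convergence from control-theoretic arguments rather than exact tabular evaluation.

```python
# Minimal tabular policy iteration on a randomly generated toy MDP
# (illustrative sketch; not the paper's continuous-space formulation).
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state distribution
R = rng.standard_normal((n_states, n_actions))                    # immediate rewards

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = P[np.arange(n_states), policy]
    R_pi = R[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily with respect to the evaluated values.
    Q = R + gamma * P @ V
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break          # no change: the policy is optimal for this MDP
    policy = new_policy

print("policy:", policy, "values:", np.round(V, 3))
```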


Event-triggered robot self-assessment to aid in autonomy adjustment.

  • Nicholas Conlon et al.
  • Frontiers in Robotics and AI
  • 2023

Introduction: Human-robot teams are being called upon to accomplish increasingly complex tasks. During execution, the robot may operate at different levels of autonomy (LOAs), ranging from full robotic autonomy to full human control. For any number of reasons, such as changes in the robot's surroundings due to the complexities of operating in dynamic and uncertain environments, degradation and damage to the robot platform, or changes in tasking, adjusting the LOA during operations may be necessary to achieve desired mission outcomes. Thus, a critical challenge is understanding when and how the autonomy should be adjusted. Methods: We frame this problem with respect to the robot's capabilities and limitations, known as robot competency. With this framing, a robot can be granted a level of autonomy in line with its ability to operate with a high degree of competence. First, we propose a Model Quality Assessment metric, which indicates how (un)expected an autonomous robot's observations are compared to its model predictions. Next, we present an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm that uses changes in the Model Quality Assessment above a threshold to selectively execute and report a high-level assessment of the robot's competency. We validated the Model Quality Assessment metric and the ET-GOA algorithm in both simulated and live robot navigation scenarios. Results: Our experiments found that the Model Quality Assessment was able to respond to unexpected observations. Additionally, our validation of the full ET-GOA algorithm explored how the computational cost and accuracy of the algorithm were impacted across several Model Quality triggering thresholds and with differing amounts of state perturbation. Discussion: Our experimental results, combined with a human-in-the-loop demonstration, show that the ET-GOA algorithm can facilitate informed autonomy-adjustment decisions based on a robot's task competency.
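
The event-triggering logic described here, a cheap model-quality signal gating an expensive competency assessment, can be sketched as below. The surprise metric and all function names are assumptions for illustration; the paper's actual Model Quality Assessment and outcome-assessment computations are more involved.

```python
# Hedged sketch of event-triggered competency assessment: only run the
# costly assessment when the model-quality signal changes by more than a
# threshold. The Gaussian-kernel similarity here is an assumed stand-in.
import numpy as np

def model_quality(observation, prediction, scale=1.0):
    # Near 1 when observations match predictions, near 0 when they diverge.
    return float(np.exp(-np.linalg.norm(observation - prediction) ** 2 / scale))

def et_goa_step(obs, pred, prev_quality, threshold, assess_competency):
    quality = model_quality(obs, pred)
    triggered = abs(quality - prev_quality) > threshold
    report = assess_competency() if triggered else None  # expensive call, gated
    return quality, triggered, report

quality, fired, report = et_goa_step(
    obs=np.array([1.0, 2.0]), pred=np.array([1.1, 1.9]),
    prev_quality=0.9, threshold=0.2,
    assess_competency=lambda: "competency: high")
```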


Survey of Image Processing Techniques for Brain Pathology Diagnosis: Challenges and Opportunities.

  • Martin Cenek et al.
  • Frontiers in Robotics and AI
  • 2018

In recent years, a number of new products introduced to the global market combine intelligent robotics, artificial intelligence, and smart interfaces to provide powerful tools to support professional decision making. However, while brain disease diagnosis from brain scan images is supported by imaging robotics, the data analysis needed to form a medical diagnosis is performed solely by highly trained medical professionals. Recent advances in medical imaging techniques, artificial intelligence, machine learning, and computer vision present new opportunities to build intelligent decision support tools that aid the diagnostic process, increase disease detection accuracy, reduce error, automate the monitoring of patients' recovery, and discover new knowledge about the causes of disease and its treatment. This article introduces the topic of medical diagnosis of brain diseases from MRI-based images. We describe existing multi-modal imaging techniques for the brain's soft tissue and describe in detail how the resulting images are analyzed by a radiologist to form a diagnosis. Several comparisons between the best results of classifying natural scenes and medical image analysis illustrate the challenges of applying existing image processing techniques to the medical image analysis domain. The survey of medical image processing methods also identifies several knowledge gaps: the need to automate image-processing analysis and to identify the brain structures in medical images that differentiate healthy tissue from pathology. This survey is grounded in the cases of brain tumor analysis and traumatic brain injury diagnosis, as these two case studies illustrate the vastly different approaches needed to define, extract, and synthesize meaningful information from multiple MRI image sets for a diagnosis. Finally, the article summarizes artificial intelligence frameworks built as multi-stage, hybrid, hierarchical information processing workflows and the benefits of applying these models to medical diagnosis to build intelligent physician's aids with knowledge transparency, expert knowledge embedding, and increased analytical quality.


Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning.

  • Georgia Chalvatzaki et al.
  • Frontiers in Robotics and AI
  • 2023

Long-horizon task planning is essential for the development of intelligent assistive and service robots. In this work, we investigate the applicability of a smaller class of large language models (LLMs), specifically GPT-2, in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially. Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans, thereby learning to reason over long-horizon tasks, as encountered in the ALFRED benchmark. We compare our approach with classical planning and baseline methods to examine the applicability and generalizability of LLM-based planners. Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
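
A sketch of the finetuning recipe the abstract describes, serializing a scene graph and a request into text and training GPT-2 to continue it with a subgoal plan, is shown below using the Hugging Face transformers API. The serialization markers and the toy example are assumptions, not the paper's exact encoding.

```python
# Hedged sketch: grounded task planning cast as causal language modeling.
import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical serialization of (scene graph, request, plan) as one string.
example = ("<scene> (apple, on, table) (sink, in, kitchen) "
           "<request> rinse the apple "
           "<plan> goto(table); pick(apple); goto(sink); rinse(apple)")
batch = tokenizer(example, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(**batch, labels=batch["input_ids"]).loss  # LM loss over the string
loss.backward()
optimizer.step()
```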


Exploring Teens as Robot Operators, Users and Witnesses in the Wild.

  • Elin A Björling et al.
  • Frontiers in Robotics and AI
  • 2020

As social robots continue to show promise as assistive technologies, the exploration of appropriate and impactful robot behaviors is key to their eventual success. Teens are a unique population given their vulnerability to stress leading to both mental and physical illness. Much of teen stress stems from school, making the school environment an ideal location for a stress reducing technology. The goal of this mixed-methods study was to understand teens' operation of, and responsiveness to, a robot only capable of movement compared to a robot only capable of speech. Stemming from a human-centered approach, we introduce a Participatory Wizard of Oz (PWoz) interaction method that engaged teens as operators, users, and witnesses in a uniquely transparent interaction. In this paper, we illustrate the use of the PWoz interaction method as well as how it helps identify engaging robot interactions. Using this technique, we present results from a study with 62 teens that includes details of the complexity of teen stress and a significant reduction in negative attitudes toward robots after interactions. We analyzed the teens' interactions with both the verbal and non-verbal robots and identified strong themes of (1) authenticity, (2) empathy, (3) emotional engagement, and (4) imperfection creates connection. Finally, we reflect on the benefits and limitations of the PWoz method and our study to identify next steps toward the design and development of our social robot.


Optical-Tactile Sensor for Lump Detection Using Pneumatic Control.

  • Jonathan Bewley et al.
  • Frontiers in Robotics and AI
  • 2021

Soft tactile sensors are an attractive solution when robotic systems must interact with delicate objects in unstructured and obscured environments, such as most medical robotics applications. The soft nature of such a system increases both comfort and safety, while the addition of simultaneous soft active actuation provides additional features and can also improve the sensing range. This paper presents the development of a compact soft tactile sensor which is able to measure the profile of objects and, through an integrated pneumatic system, actuate and change the effective stiffness of its tactile contact surface. We report experimental results which demonstrate the sensor's ability to detect lumps on the surface of objects or embedded within a silicone matrix. These results show the potential of this approach as a versatile method of tactile sensing with potential application in medical diagnosis.


Comparison of Human Social Brain Activity During Eye-Contact With Another Human and a Humanoid Robot.

  • Megan S Kelley et al.
  • Frontiers in Robotics and AI
  • 2020

Robot design to simulate interpersonal social interaction is an active area of research with applications in therapy and companionship. Neural responses to eye-to-eye contact in humans have recently been employed to determine the neural systems that are active during social interactions. Whether eye-contact with a social robot engages the same neural system remains to be seen. Here, we employ a similar approach to compare human-human and human-robot social interactions. We assume that if human-human and human-robot eye-contact elicit similar neural activity in the human, then the perceptual and cognitive processing is also the same for human and robot; that is, the robot is processed similarly to the human. However, if neural effects are different, then perceptual and cognitive processing is assumed to be different. In this study, neural activity was compared for human-to-human and human-to-robot conditions using near-infrared spectroscopy for neural imaging, and a robot (Maki) with eyes that blink and move right and left. Eye-contact was confirmed by eye-tracking for both conditions. Increased neural activity was observed in human social systems, including the right temporal parietal junction and the dorsolateral prefrontal cortex, during human-human eye contact but not human-robot eye-contact. This suggests that the type of human-robot eye-contact used here is not sufficient to engage the right temporoparietal junction in the human. This study establishes a foundation for future research into human-robot eye-contact to determine how elements of robot design and behavior impact human social processing within this type of interaction, and may offer a method for capturing difficult-to-quantify components of human-robot interaction, such as social engagement.


Effectively Quantifying the Performance of Lower-Limb Exoskeletons Over a Range of Walking Conditions.

  • Daniel F N Gordon et al.
  • Frontiers in Robotics and AI
  • 2018

Exoskeletons and other wearable robotic devices have a wide range of potential applications, including assisting patients with walking pathologies, acting as tools for rehabilitation, and enhancing the capabilities of healthy humans. However, applying these devices effectively in a real-world setting can be challenging, as the optimal design features and control commands for an exoskeleton are highly dependent on the current user, task, and environment. Consequently, robust metrics and methods for quantifying exoskeleton performance are required. This work presents an analysis of walking data collected from healthy subjects walking with an active pelvis exoskeleton over three assistance scenarios and five walking contexts. Spatiotemporal, kinematic, kinetic, and other novel dynamic gait metrics were compared to identify which metrics exhibit desirable invariance properties, and so are good candidates for use as a stability metric over varying walking conditions. Additionally, using a model-based approach, the average metabolic power consumption was calculated for a subset of muscles crossing the hip, knee and ankle joints, and used to analyse how the energy-reducing properties of an exoskeleton are affected by changes in walking context. The results demonstrated that medio-lateral centre of pressure displacement and medio-lateral margin of stability exhibit strong invariance to changes in walking conditions. This suggests that these dynamic gait metrics are optimised in human gait and are potentially suitable metrics to optimise within an exoskeleton control paradigm. The effectiveness of the exoskeleton at reducing human energy expenditure was observed to increase when walking on an incline, where muscles aiding in hip flexion were assisted, but to decrease when walking at a slow speed. These results underline the need for adaptive control algorithms for exoskeletons if they are to be used in varied environments.
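
The abstract does not spell out its stability formulas; the standard Hof extrapolated-centre-of-mass definition of the margin of stability is sketched below under the assumption that a similar quantity is meant.

```python
# Medio-lateral margin of stability via the extrapolated centre of mass
# (standard inverted-pendulum definition; assumed, not taken from the paper).
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """com_pos/com_vel: medio-lateral CoM position (m) and velocity (m/s);
    bos_edge: lateral boundary of the base of support (m)."""
    omega0 = math.sqrt(g / leg_length)   # inverted-pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0    # extrapolated centre of mass
    return bos_edge - xcom               # positive values indicate stability

print(margin_of_stability(com_pos=0.02, com_vel=0.15, bos_edge=0.12, leg_length=0.9))
```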


Opinion Mining From Social Media Short Texts: Does Collective Intelligence Beat Deep Learning?

  • Nicolas Tsapatsoulis et al.
  • Frontiers in Robotics and AI
  • 2018

The era of big data has, among others, three defining characteristics: the huge amounts of data created every day and in every form by everyday people, artificial intelligence tools to mine information from those data, and effective algorithms that allow this mining in real or near-real time. Opinion mining in social media, meanwhile, is nowadays an important component of social media marketing. Digital media giants such as Google and Facebook have developed and employed their own tools for that purpose. These tools are based on publicly available software libraries and tools such as Word2Vec (or Doc2Vec) and fasttext, which emphasize topic modeling and extract low-level features using deep learning approaches. So far, researchers have focused their efforts on opinion mining and especially on sentiment analysis of tweets. This trend reflects the availability of the Twitter API, which simplifies automatic data (tweet) collection and the testing of proposed algorithms in real situations. However, if we are really interested in realistic opinion mining, we should consider mining opinions from social media platforms such as Facebook and Instagram, which are far more popular among everyday people. The basic purpose of this paper is to compare various kinds of low-level features, including those extracted through deep learning, as in fasttext and Doc2Vec, with keywords suggested by the crowd, called the crowd lexicon herein, gathered through a crowdsourcing platform. The application target is sentiment analysis of tweets and Facebook comments on commercial products. We also compare several machine learning methods for the creation of sentiment analysis models and conclude that, even in the era of big data, allowing people to annotate (a small portion of) data enables effective artificial intelligence tools to be developed using the learning-by-example paradigm.
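
The core comparison, the same classifier trained on learned low-level features versus crowd-suggested keywords, can be sketched as follows. The toy texts and the stand-in lexicon are hypothetical; the paper uses fasttext/Doc2Vec features and a crowdsourced lexicon.

```python
# Hedged sketch: one classifier, two feature sets (word n-grams vs. a toy
# "crowd lexicon" vocabulary). Data and lexicon here are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["love this phone", "terrible battery", "great value", "awful screen"]
labels = [1, 0, 1, 0]
crowd_lexicon = ["love", "great", "terrible", "awful"]  # hypothetical keywords

ngram_clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
lexicon_clf = make_pipeline(CountVectorizer(vocabulary=crowd_lexicon), LogisticRegression())

for name, clf in [("n-grams", ngram_clf), ("crowd lexicon", lexicon_clf)]:
    clf.fit(texts, labels)
    print(name, clf.score(texts, labels))  # in-sample toy score; use CV in practice
```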


Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.

  • Kazuhiro Hishinuma et al.
  • Frontiers in Robotics and AI
  • 2019

Existing machine learning algorithms for minimizing a convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating the line search method, which automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each part of the objective function; the other is based on the parallel subgradient algorithm, which uses the parts independently in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems appearing in tasks of learning support vector machines without adjusting the learning rates precisely. The proposed line search method can determine learning rates that satisfy weaker conditions than the ones used in existing machine learning algorithms, which implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions. The main contribution of this paper is the provision of three kinds of experiments showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem. Second, we compare the proposed algorithms with a different algorithm, Pegasos, which is designed to learn with a support vector machine efficiently, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.
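
The flavor of an incremental projected-subgradient sweep with an automatic step size can be conveyed with the toy sketch below. The naive backtracking rule and diminishing step cap are assumptions for illustration; the paper's line search satisfies weaker, provably convergent conditions.

```python
# Hedged sketch: cyclic incremental subgradient steps over components f_i,
# projected onto C = [-1, 1], with a crude backtracking step-size search.
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)            # projection onto the closed convex set

def incremental_subgradient(fs, gs, x0, epochs=200, beta=0.5):
    x = np.asarray(x0, dtype=float)
    for k in range(1, epochs + 1):
        alpha_max = 1.0 / k              # diminishing cap tames cyclic oscillation
        for f, g in zip(fs, gs):         # sequential, cyclic use of components
            d = g(x)
            alpha = alpha_max
            # Backtrack until the component value decreases (heuristic).
            while alpha > 1e-9 and f(project(x - alpha * d)) >= f(x):
                alpha *= beta
            x = project(x - alpha * d)
    return x

# Toy objective 0.5*(x-2)^2 + |x| over [-1, 1]; the minimizer is x = 1.
fs = [lambda x: 0.5 * float((x - 2) ** 2), lambda x: float(abs(x))]
gs = [lambda x: x - 2, lambda x: np.sign(x)]
print(incremental_subgradient(fs, gs, x0=0.0))   # approaches 1.0
```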


An Integrated Kinematic Modeling and Experimental Approach for an Active Endoscope.

  • Andrew Isbister et al.
  • Frontiers in Robotics and AI
  • 2021

Continuum robots are robotic devices characterized by their flexibility and dexterity, making them ideal for an active endoscope. Instead of articulated joints, they have flexible backbones that can be manipulated remotely, usually through tendons secured onto structures attached to the backbone. This structure makes them lightweight and easy to miniaturize for endoscopic applications. However, their flexibility poses technical challenges in the modeling and control of these devices, especially when closed-loop control is needed, as is the case in medical applications. There are two main approaches to the modeling of continuum robots: the first is to theoretically model the behavior of the backbone and its interaction with the tendons, while the second is to collect experimental observations and retrospectively fit a model that approximates the apparent behavior. Both approaches are affected by the complexity of continuum robots, through either model accuracy and computational time (theoretical method) or missed complex system interactions and lack of expandability (experimental method). In this work, theoretical and experimental descriptions of an endoscopic continuum robot are merged. A simplified yet representative mathematical model of a continuum robot is developed, in which the backbone model is based on Cosserat rod theory and is coupled to the tendon tensions. A robust numerical technique is formulated that has low computational costs. A bespoke experimental facility with automated motion of the backbone via precise control of tendon tension leads to a robust and detailed description of the system behavior, provided through a contactless sensor. The resulting approach achieves a real-world mean positioning error of 3.95% of the backbone length for the examined range of tendon tensions, which compares favourably with existing approaches. Moreover, it incorporates hysteresis behavior that could not be predicted by the theoretical modeling alone, reinforcing the benefits of the hybrid approach. The proposed workflow is theoretically grounded and experimentally validated, allowing precise prediction of the continuum robot's behavior that adheres to realistic observations. This accurate estimation, together with the fact that the model is geometrically agnostic, enables it to be scaled to various robotic endoscopes.
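
For readers unfamiliar with the backbone model, the static Cosserat rod equations typically take the following form (a standard presentation, assumed here; the paper's tendon-coupled variant adds tension-dependent load terms and boundary conditions):

```latex
% Standard static Cosserat rod ODEs along arc length s (assumed form).
\begin{aligned}
\mathbf{p}'(s) &= \mathbf{R}(s)\,\mathbf{v}(s), &
\mathbf{R}'(s) &= \mathbf{R}(s)\,\widehat{\mathbf{u}}(s),\\
\mathbf{n}'(s) &= -\mathbf{f}(s), &
\mathbf{m}'(s) &= -\mathbf{p}'(s)\times\mathbf{n}(s) - \boldsymbol{\ell}(s),
\end{aligned}
```

where p is the backbone centreline, R the material frame, v and u the linear and angular strains, n and m the internal force and moment, f and ℓ the distributed external loads, and the hat denotes the skew-symmetric matrix of a vector; tendon tensions enter through f, ℓ, and the boundary conditions.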


Robot Art, in the Eye of the Beholder?: Personalized Metaphors Facilitate Communication of Emotions and Creativity.

  • Martin Cooney
  • Frontiers in Robotics and AI
  • 2021

Socially assistive robots are being designed to support people's well-being in contexts such as art therapy, where human therapists are scarce, by making art together with people in an appropriate way. A challenge is that various complex and idiosyncratic concepts relating to art, like emotions and creativity, are not yet well understood. Guided by the principles of speculative design, the current article describes the use of a collaborative prototyping approach involving artists and engineers to explore this design space, especially in regard to general and personalized art-making strategies. This led to identifying a goal: to generate representational or abstract art that connects emotionally with people's art and shows creativity. For this, an approach involving personalized "visual metaphors" was proposed, which balances the degree to which a robot's art is influenced by interacting persons. The results of a small user study via a survey provided further insight into people's perceptions: the general design was perceived as intended and appealed to participants; personalization via representational symbols also appeared to lead to easier and clearer communication of emotions than abstract symbols. In closing, the article describes a simplified demo and discusses future challenges. The contribution of the current work thus lies in suggesting how a robot can seek to interact with people in an emotional and creative way through personalized art; the aim is thereby to stimulate ideation in this promising area and facilitate acceptance of such robots in everyday human environments.


Towards a Machine Vision-Based Yield Monitor for the Counting and Quality Mapping of Shallots.

  • Amanda A Boatswain Jacques et al.
  • Frontiers in Robotics and AI
  • 2021

In comparison to field crops such as cereals, cotton, hay, and grain, specialty crops often require more resources, are usually more sensitive to sudden changes in growth conditions, and are known to produce higher-value products. Providing quality and quantity assessment of specialty crops during harvesting is crucial for securing higher returns and improving management practices. Technical advancements in computer and machine vision have improved the detection, quality assessment, and yield estimation processes for various fruit crops, but similar methods capable of exporting a detailed yield map for vegetable crops have yet to be fully developed. A machine vision-based yield monitor was designed to perform size categorization and continuous counting of shallots in situ during the harvesting process. Coupled with software developed in Python, the system is composed of a video logger and a global navigation satellite system. Computer vision analysis is performed within the tractor while an RGB camera collects real-time video data of the crops under natural sunlight conditions. Vegetables are first segmented using watershed segmentation, detected on the conveyor, and then classified by size. The system detected shallots in a subsample of the dataset with a precision of 76%. The software was also evaluated on its ability to classify the shallots into three size categories; the best performance was achieved in the large class (73%), followed by the small class (59%) and the medium class (44%). Based on these results, the occasional occlusion of vegetables and inconsistent lighting conditions were the main factors that hindered performance. Although further enhancements are envisioned for the prototype system, its modular and novel design permits the mapping of a selection of other horticultural crops. Moreover, it has the potential to benefit many producers of small vegetable crops by providing them with useful harvest information in real time.
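
The segmentation-and-sizing step can be sketched with OpenCV's standard watershed pipeline, shown below. The pixel-area cut-offs and the overall simplification are assumptions; the paper's system additionally handles conveyor detection, counting across frames, and GNSS tagging.

```python
# Hedged sketch: watershed segmentation of touching bulbs plus a toy
# three-way size classification. Area thresholds are hypothetical.
import cv2
import numpy as np

def count_and_size(frame_bgr, small_max=800, medium_max=2000):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    n_labels, markers = cv2.connectedComponents(sure_fg.astype(np.uint8))
    markers = cv2.watershed(frame_bgr, markers + 1)   # split touching objects
    sizes = {"small": 0, "medium": 0, "large": 0}
    for label in range(2, n_labels + 1):              # skip background and borders
        area = int(np.count_nonzero(markers == label))
        if area == 0:
            continue
        key = "small" if area <= small_max else "medium" if area <= medium_max else "large"
        sizes[key] += 1
    return sizes
```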


Smiles as a Signal of Prosocial Behaviors Toward the Robot in the Therapeutic Setting for Children With Autism Spectrum Disorder.

  • SunKyoung Kim et al.
  • Frontiers in Robotics and AI
  • 2021

We explored how robot-assisted therapy based on smile analysis may facilitate the prosocial behaviors of children with autism spectrum disorder. Prosocial behaviors, which are actions for the benefit of others, are required to belong to society and increase quality of life. As smiling is a candidate for predicting prosocial behaviors in robot-assisted therapy, we measured smiles by annotating behaviors recorded with video cameras and by classifying facial muscle activities recorded with a wearable device. While interacting with a robot, the participants experienced two situations in which prosocial behaviors were expected: supporting the robot as it walked and keeping it from falling. We first explored overall smiles at specific timings and prosocial behaviors, and then explored the smiles triggered by the robot and behavior changes before engaging in prosocial behaviors. The results show that smiles at specific timings and prosocial behaviors increased in the second session for children with autism spectrum disorder. Additionally, a smile was followed by a series of behaviors before prosocial behavior. With the proposed Bayesian model, smiling or heading predicted prosocial behaviors with higher accuracy than other variables; in particular, voluntary prosocial behaviors were observed after smiling. The findings of this exploratory study imply that smiles might be a signal of prosocial behaviors. We also suggest a probabilistic model for predicting prosocial behaviors based on smile analysis, which could be applied to personalized robot-assisted therapy by controlling a robot's movements to evoke smiles and increase the probability that a child with autism spectrum disorder will engage in prosocial behaviors.
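
As a worked illustration of treating a smile as a probabilistic signal of an upcoming prosocial behavior, Bayes' rule with toy probabilities is shown below; the numbers and the single-cue simplification are assumptions, since the paper's model also incorporates heading and behavior sequences.

```python
# Hedged toy example: P(prosocial | smile) by Bayes' rule with made-up rates.
def p_prosocial_given_smile(p_smile_given_prosocial=0.8,
                            p_smile_given_not=0.3,
                            p_prosocial=0.4):
    p_smile = (p_smile_given_prosocial * p_prosocial
               + p_smile_given_not * (1 - p_prosocial))
    return p_smile_given_prosocial * p_prosocial / p_smile

print(round(p_prosocial_given_smile(), 3))  # 0.64 with these toy numbers
```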


Turning Neural Prosthetics Into Viable Products.

  • Gerald E Loeb et al.
  • Frontiers in Robotics and AI
  • 2021

Academic researchers concentrate on the scientific and technological feasibility of novel treatments. Investors and commercial partners, however, understand that success depends even more on strategies for regulatory approval, reimbursement, marketing, intellectual property protection and risk management. These considerations are critical for technologically complex and highly invasive treatments that entail substantial costs and risks in small and heterogeneous patient populations. Most implanted neural prosthetic devices for novel applications will be in FDA Device Class III, for which guidance documents have been issued recently. Less invasive devices may be eligible for the recently simplified "de novo" submission routes. We discuss typical timelines and strategies for integrating the regulatory path with approval for reimbursement, securing intellectual property and funding the enterprise, particularly as they might apply to implantable brain-computer interfaces for sensorimotor disabilities that do not yet have a track record of approved products.


Dynamically Tunable Friction via Subsurface Stiffness Modulation.

  • Siavash Sharifi et al.
  • Frontiers in Robotics and AI
  • 2021

Currently soft robots primarily rely on pneumatics and geometrical asymmetry to achieve locomotion, which limits their working range, versatility, and other untethered functionalities. In this paper, we introduce a novel approach to achieve locomotion for soft robots through dynamically tunable friction to address these challenges, which is achieved by subsurface stiffness modulation (SSM) of a stimuli-responsive component within composite structures. To demonstrate this, we design and fabricate an elastomeric pad made of polydimethylsiloxane (PDMS), which is embedded with a spiral channel filled with a low melting point alloy (LMPA). Once the LMPA strip is melted upon Joule heating, the compliance of the composite structure increases and the friction between the composite surface and the opposing surface increases. A series of experiments and finite element analysis (FEA) have been performed to characterize the frictional behavior of these composite pads and elucidate the underlying physics dominating the tunable friction. We also demonstrate that when these composite structures are properly integrated into soft crawling robots inspired by inchworms and earthworms, the differences in friction of the two ends of these robots through SSM can potentially be used to generate translational locomotion for untethered crawling robots.


Active learning strategies for robotic tactile texture recognition tasks.

  • Shemonto Das et al.
  • Frontiers in Robotics and AI
  • 2024

Accurate texture classification empowers robots to improve their perception and comprehension of the environment, enabling informed decision-making and appropriate responses to diverse materials and surfaces. Still, texture classification faces challenges given the vast amount of time-series data generated by robots' sensors. For instance, robots are anticipated to leverage human feedback during interactions with the environment, particularly in cases of misclassification or uncertainty. Given the diversity of objects and textures in daily activities, Active Learning (AL) can be employed to minimize the number of samples the robot needs to request from humans, streamlining the learning process. In the present work, we use AL to select the most informative samples for annotation, thus reducing the human labeling effort required to achieve high texture-classification performance. We also use a sliding-window strategy for extracting features from the sensor time series used in our experiments. Our multi-class dataset (12 textures) challenges traditional AL strategies, since standard techniques cannot control the number of instances per class selected to be labeled. We therefore propose a novel class-balancing instance selection algorithm that we integrate with standard AL strategies. Moreover, we evaluate the effect of sliding windows of two time intervals (3 and 6 s) on our AL strategies. Finally, we analyze the performance of AL strategies with and without the balancing algorithm in terms of F1-score, and positive effects on performance are observed when using our proposed data pipeline. Our results show that the training data can be reduced to 70% using an AL strategy regardless of the machine learning model, while reaching, and in many cases surpassing, baseline performance. Exploring the textures with a 6-s window achieves the best performance, and using Extra Trees produces an average F1-score of 90.21% on the texture classification dataset.
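
One way to picture the class-balancing instance selection is the sketch below: least-confidence sampling with an equal per-class quota over predicted labels. The quota rule is an assumption for illustration; the paper's algorithm is more elaborate.

```python
# Hedged sketch of class-balanced uncertainty sampling for active learning.
import numpy as np

def balanced_query(probs, budget, n_classes):
    """probs: (n_unlabeled, n_classes) predicted class probabilities."""
    uncertainty = 1.0 - probs.max(axis=1)      # least-confidence score
    predicted = probs.argmax(axis=1)
    per_class = budget // n_classes            # equal quota per predicted class
    chosen = []
    for c in range(n_classes):
        idx = np.where(predicted == c)[0]
        ranked = idx[np.argsort(-uncertainty[idx])]
        chosen.extend(ranked[:per_class].tolist())
    return chosen                              # indices to hand to the annotator

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=20)
print(balanced_query(probs, budget=6, n_classes=3))
```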


Robotic Telemedicine for Mental Health: A Multimodal Approach to Improve Human-Robot Engagement.

  • Maria R Lima et al.
  • Frontiers in Robotics and AI
  • 2021

COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are especially acute in low- and middle-income countries (LMICs), which confront pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural 'human-like' conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed at improving engagement in human-robot interaction and ultimately facilitating mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot's facial representation of emotions, such that the robot adapts its emotional response to users' speech in real time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised, and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates an overall enthusiastic and engaged reception of multimodal human-robot interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to support older individuals and PwD at the Schizophrenia Research Foundation (SCARF) in Chennai, India. A procedure for deployment addressing challenges in cultural acceptance, end-user acclimatization, and resource allocation is further introduced. Results indicate strong promise for stimulating human-robot psychosocial interaction through the hybrid-face robotic system. Future work targets deployment for telemedicine to mitigate the mental health impact of COVID-19 on older adults and PwD in both LMICs and higher-income regions.


[Publications Per Year chart: result counts by publication year (axes: Year, Count)]