
This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 2: showing papers 21–40 of 126.

Coffee With a Hint of Data: Towards Using Data-Driven Approaches in Personalised Long-Term Interactions.

  • Bahar Irfan et al.
  • Frontiers in Robotics and AI
  • 2021

While earlier research in human-robot interaction predominantly uses rule-based architectures for natural language interaction, these approaches are not flexible enough for long-term interactions in the real world due to the large variation in user utterances. In contrast, data-driven approaches map the user input to the agent output directly and hence provide more flexibility with these variations without requiring any set of rules. However, data-driven approaches are generally applied to single dialogue exchanges with a user and do not build up a memory over long-term conversation with different users, whereas long-term interactions require remembering users and their preferences incrementally and continuously and recalling previous interactions with users to adapt and personalise the interactions, known as the lifelong learning problem. In addition, it is desirable to learn user preferences from a few samples of interactions (i.e., few-shot learning). These are known to be challenging problems in machine learning, while they are trivial for rule-based approaches, creating a trade-off between flexibility and robustness. Correspondingly, in this work, we present the text-based Barista Datasets generated to evaluate the potential of data-driven approaches in generic and personalised long-term human-robot interactions with simulated real-world problems, such as recognition errors, incorrect recalls, and changes to the user preferences. Based on these datasets, we explore the performance and the underlying inaccuracies of the state-of-the-art data-driven dialogue models that are strong baselines in other domains of personalisation in single interactions, namely Supervised Embeddings, Sequence-to-Sequence, End-to-End Memory Network, Key-Value Memory Network, and Generative Profile Memory Network. The experiments show that while data-driven approaches are suitable for generic task-oriented dialogue and real-time interactions, no model performs sufficiently well to be deployed in personalised long-term interactions in the real world, because of their inability to learn and use new identities and their poor performance in recalling user-related data.


A Supernumerary Soft Robotic Limb for Reducing Hand-Arm Vibration Syndromes Risks.

  • Andrea S Ciullo et al.
  • Frontiers in Robotics and AI
  • 2021

The most common causes of the risk of work-related musculoskeletal disorders (WMSD) have been identified as joint overloading, bad postures, and vibrations. In the last two decades, various solutions ranging from human-robot collaborative systems to robotic exoskeletons have been proposed to mitigate them. More recently, a new approach with high potential in this direction has been proposed: supernumerary robotic limbs (SRLs) are additional robotic body parts (e.g., fingers, legs, and arms) that can be worn by workers, augmenting their natural ability and reducing the risks of injuries. These systems are generally proposed in the literature for their potential to augment the user's ability, but here we would like to explore this kind of technology as a new generation of (personal) protective equipment. A supernumerary robotic upper limb, for example, allows for indirectly interacting with hazardous objects like chemical products or vibrating tools. In particular, in this work, we present a supernumerary robotic limb system to reduce the vibration transmitted along the arms and minimize the load on the upper limb joints. For this purpose, an off-the-shelf wearable gravity compensation system is integrated with a soft robotic hand and a custom damping wrist, designed starting from theoretical considerations on a mass-spring-damper model. The real efficacy of the system was experimentally tested within a simulated industrial work environment, where seven subjects performed a drilling task on two different materials. Experimental analysis was conducted according to ISO-5349. Results showed a 40-60% reduction in vibration transmission with the presented SRL system compared to traditional hand drilling, without compromising time performance.
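
The custom damping wrist above is designed from theoretical considerations on a mass-spring-damper model. For reference only (the paper's actual parameters and derivation are not given here), the standard single-degree-of-freedom form of that model and its vibration transmissibility are:

```latex
% Standard 1-DoF mass-spring-damper model and transmissibility (illustrative only)
\begin{align}
  m\,\ddot{x}(t) + c\,\dot{x}(t) + k\,x(t) &= F(t),\\
  \omega_n &= \sqrt{k/m}, \qquad \zeta = \frac{c}{2\sqrt{km}},\\
  T(r) &= \sqrt{\frac{1 + (2\zeta r)^2}{(1 - r^2)^2 + (2\zeta r)^2}}, \qquad r = \frac{\omega}{\omega_n}.
\end{align}
```

Attenuation of the transmitted vibration (T < 1) requires operating above the resonance, at r greater than \sqrt{2}, which is the usual design target for a passive damping element of this kind.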


A Comparison of Social Robot to Tablet and Teacher in a New Script Learning Context.

  • Zhanel Zhexenova et al.
  • Frontiers in Robotics and AI
  • 2020

This research occurred in a special context where Kazakhstan's recent decision to switch from Cyrillic to the Latin-based alphabet has resulted in challenges connected to teaching literacy, addressing a rare combination of research hypotheses and technical objectives about language learning. Teachers are not necessarily trained to teach the new alphabet, and this could result in a challenge for children with learning difficulties. Prior research studies in Human-Robot Interaction (HRI) have proposed the use of a robot to teach handwriting to children (Hood et al., 2015; Lemaignan et al., 2016). Drawing on the Kazakhstani case, our study takes an interdisciplinary approach by bringing together smart solutions from robotics, computer vision, and educational frameworks, language, and cognitive studies that will benefit diverse groups of stakeholders. In this study, a human-robot interaction application is designed to help primary school children learn both a newly adopted script and its handwriting system. The setup involved an experiment with 62 children between 7 and 9 years old, across three conditions: a robot and a tablet, a tablet only, and a teacher. Based on the learning-by-teaching paradigm, the study showed that children improved their knowledge of the Latin script by interacting with a robot. Findings reported that children gained similar knowledge of the new script in all three conditions, without a gender effect. In addition, children's likeability ratings and positive mood change scores demonstrate significant benefits favoring the robot over the traditional teacher and tablet-only approaches.


A Mirror-Based Active Vision System for Underwater Robots: From the Design to Active Object Tracking Application.

  • Noel Cortés-Pérez et al.
  • Frontiers in Robotics and AI
  • 2021

A mirror-based active system capable of changing the viewing direction of a pre-existing fixed camera is presented. The aim of this research work is to extend the perceptual tracking capabilities of an underwater robot without altering its structure. The ability to control the viewing direction allows the robot to explore its entire surroundings without any actual displacement, which can be useful for more effective motion planning and for different navigation strategies, such as object tracking and/or obstacle evasion, which are of great importance for nature preservation in environments as complex and fragile as coral reefs. Active vision systems based on mirrors have been used mainly on terrestrial platforms to capture the motion of fast projectiles using high-speed cameras of considerable size and weight, but they have not been used on underwater platforms. In this sense, our approach incorporates a lightweight design adapted to an underwater robot using affordable and easy-access technology (i.e., 3D printing). Our active system consists of two arranged mirrors, one of which remains static in front of the robot's camera, while the orientation of the second mirror is controlled by two servomotors. Object tracking is performed by using only the pixels contained in the homography of a defined area in the active mirror. HSV color space is used to reduce lighting change effects. Since the color and geometry of the tracked object are previously known, a window filter is applied over the H-channel for color blob detection; then noise is filtered and the object's centroid is estimated. If the object is lost, a Kalman filter is applied to predict its position. Finally, with this information, an image PD controller computes the servomotor articular values. We have carried out experiments in real environments, testing our active vision system in an object-tracking application where an artificial object is manually displaced on the periphery of the robot and the mirror system is automatically reconfigured to keep such an object focused by the camera, with satisfactory results in real time for detecting objects of low complexity and in poor lighting conditions.
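
The tracking loop described above (HSV windowing on the H channel, centroid estimation, Kalman prediction when the object is lost, and an image-space PD law for the two servomotors) can be sketched compactly. The snippet below is an illustrative sketch only, not the authors' code: the HSV window, gains, and function names are assumptions.

```python
# Illustrative sketch (not the authors' code): HSV color-blob tracking with a
# Kalman fallback and a simple PD law for two mirror servos.
import cv2
import numpy as np

LOWER_HSV = np.array([20, 100, 100], dtype=np.uint8)   # assumed color window
UPPER_HSV = np.array([35, 255, 255], dtype=np.uint8)

# Constant-velocity Kalman filter over the blob centroid (x, y)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

KP, KD = 0.004, 0.001      # assumed PD gains (pixel error -> servo increments)
prev_err = np.zeros(2)

def centroid_from_mask(mask):
    """Return the blob centroid in pixels, or None if the object is lost."""
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def track_step(frame_bgr, target_px):
    """One control step: detect (or predict) the centroid and output servo deltas."""
    global prev_err
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.medianBlur(mask, 5)                       # filter speckle noise
    meas = centroid_from_mask(mask)
    pred = kf.predict()
    if meas is not None:
        kf.correct(meas.astype(np.float32).reshape(2, 1))
        center = meas
    else:
        center = pred[:2, 0]                             # object lost: use prediction
    err = target_px - center                             # pixel error to image target
    delta = KP * err + KD * (err - prev_err)             # PD on the image error
    prev_err = err
    return delta                                         # pan/tilt increments for the two servos
```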


Telerobotic Operation of Intensive Care Unit Ventilators.

  • Balazs P Vagvolgyi et al.
  • Frontiers in Robotics and AI
  • 2021

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million people worldwide have died from the disease caused by this virus, COVID-19. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen with camera vision control via a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.


Simultaneous Material Segmentation and 3D Reconstruction in Industrial Scenarios.

  • Cheng Zhao et al.
  • Frontiers in Robotics and AI
  • 2020

Recognizing material categories is one of the core challenges in robotic nuclear waste decommissioning. All nuclear waste should be sorted and segregated according to its materials, and then different disposal post-processes can be applied. In this paper, we propose a novel transfer learning approach to learn boundary-aware material segmentation from a meta-dataset and weakly annotated data. The proposed method is data-efficient, leveraging a publicly available dataset for general computer vision tasks and coarsely labeled material recognition data, with only a limited number of fine pixel-wise annotations required. Importantly, our approach is integrated with a Simultaneous Localization and Mapping (SLAM) system to fuse the per-frame understanding delicately into a 3D global semantic map to facilitate robot manipulation in self-occluded object heaps or robot navigation in disaster zones. We evaluate the proposed method on the Materials in Context dataset over 23 categories and show that our integrated system delivers quasi-real-time 3D semantic mapping with high-resolution images. The trained model is also verified in an industrial environment as part of the EU RoMaNs project, and promising qualitative results are presented. A video demo and the newly generated data can be found at the project website (Supplementary Material).


Adaptive and Energy-Efficient Optimal Control in CPGs Through Tegotae-Based Feedback.

  • Riccardo Zamboni et al.
  • Frontiers in Robotics and AI
  • 2021

To obtain biologically inspired robotic control, the architecture of central pattern generators (CPGs) has been extensively adopted to generate periodic patterns for locomotor control. This is attributed to the interesting properties of nonlinear oscillators. Although sensory feedback in CPGs is not necessary for the generation of patterns, it plays a central role in guaranteeing adaptivity to environmental conditions. Nonetheless, its inclusion significantly modifies the dynamics of the CPG architecture, which often leads to bifurcations. For instance, the force feedback can be exploited to derive information regarding the state of the system. In particular, the Tegotae approach can be adopted by coupling proprioceptive information with the state of the oscillation itself in the CPG model. This paper discusses this policy with respect to other types of feedback; it provides higher adaptivity and an optimal energy efficiency for reflex-like actuation. We believe this is the first attempt to analyse the optimal energy efficiency along with the adaptivity of the Tegotae approach.
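
For readers unfamiliar with the Tegotae approach mentioned above, a commonly cited form of Tegotae-based feedback couples each phase oscillator to its own sensed load; the exact controller analysed in the paper may differ from this illustrative form:

```latex
% A commonly cited Tegotae-type local feedback rule for a phase oscillator (illustrative)
\begin{equation}
  \dot{\phi}_i \;=\; \omega \;-\; \sigma\, N_i \cos\phi_i
\end{equation}
```

Here \phi_i is the phase of oscillator i, \omega its intrinsic frequency, N_i the sensed load (e.g., a ground reaction force), and \sigma the feedback gain; the feedback term slows or advances the phase depending on how the sensed load agrees with the current state of the oscillation.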


Improved Continuum Joint Configuration Estimation Using a Linear Combination of Length Measurements and Optimization of Sensor Placement.

  • Levi Rupert et al.
  • Frontiers in Robotics and AI
  • 2021

This paper presents methods for placing length sensors on a soft continuum robot joint as well as a novel configuration estimation method that drastically reduces configuration estimation error. The methods utilized for placing sensors along the length of the joint include a single joint length sensor, sensors lined end-to-end, sensors that overlap according to a heuristic, and sensors that are placed by an optimization that we describe in this paper. The methods of configuration estimation include directly relating sensor length to a segment of the joint's angle, using an equal weighting of overlapping sensors that cover a joint segment, and using a weighted linear combination of all sensors on the continuum joint. The weights for the linear combination method are determined using robust linear regression. Using a kinematic simulation, we show that placing three or more overlapping sensors and estimating the configuration with a linear combination of sensors resulted in a median error of 0.026% of the maximum range of motion or less. This is more than a 500-fold improvement compared to using a single sensor to estimate the joint configuration. This error was computed across 80 simulated robots of different lengths and ranges of motion. We also found that the fully optimized sensor placement performed only marginally better than the placement of sensors according to the heuristic. This suggests that the use of a linear combination of sensors, with weights found using linear regression, is more important than the placement of the overlapping sensors. Further, using the heuristic significantly simplifies the application of these techniques when designing for hardware.
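
A minimal sketch of the estimation scheme described above, using synthetic calibration data and a robust linear regressor standing in for the paper's exact procedure; the sensor model, names, and noise levels are assumptions:

```python
# Illustrative sketch (assumed data and names): estimate a continuum joint's bend
# angle as a weighted linear combination of overlapping length sensors, with the
# weights fit by robust linear regression.
import numpy as np
from sklearn.linear_model import HuberRegressor

def fit_sensor_weights(sensor_lengths, true_angles):
    """sensor_lengths: (n_samples, n_sensors) raw lengths; true_angles: (n_samples,)."""
    model = HuberRegressor()               # robust to outlying length readings
    model.fit(sensor_lengths, true_angles)
    return model

def estimate_angle(model, lengths):
    """Configuration estimate for one set of sensor readings."""
    return model.predict(np.atleast_2d(lengths))[0]

# Toy usage with simulated calibration data (three overlapping sensors)
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, np.pi / 2, size=200)                  # joint angle in rad
lengths = np.column_stack([0.6 * angles, 0.9 * angles, 1.2 * angles])
lengths += rng.normal(scale=0.01, size=lengths.shape)           # sensor noise
model = fit_sensor_weights(lengths, angles)
print(estimate_angle(model, lengths[0]))
```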


Human Lower Limb Joint Biomechanics in Daily Life Activities: A Literature Based Requirement Analysis for Anthropomorphic Robot Design.

  • Martin Grimmer et al.
  • Frontiers in Robotics and AI
  • 2020

Daily human activity is characterized by a broad variety of movement tasks. This work summarizes the sagittal hip, knee, and ankle joint biomechanics for a broad range of daily movements, based on previously published literature, to identify requirements for robotic design. Maximum joint power, moment, angular velocity, and angular acceleration, as well as the movement-related range of motion and the mean absolute power, were extracted, compared, and analyzed for essential and sportive movement tasks. We found that the full human range of motion is required to mimic human-like performance and versatility. In general, sportive movements were found to exhibit the highest joint requirements in angular velocity, angular acceleration, moment, power, and mean absolute power. However, at the hip, essential movements, such as recovery, had comparable or even higher requirements. Further, we found that the moment and power demands were generally higher in stance, while the angular velocity and angular acceleration were mostly higher or equal in swing compared to stance for locomotion tasks. The extracted requirements provide a novel comprehensive overview that can help with the dimensioning of actuators enabling tailored assistance or rehabilitation for wearable lower limb robots, and with achieving essential, sportive, or augmented performance that exceeds natural human capabilities with humanoid robots.


Engagement in Human-Agent Interaction: An Overview.

  • Catharine Oertel et al.
  • Frontiers in Robotics and AI
  • 2020

Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related, but different concepts. In fact, it has been referred to across different disciplines under different names and with different connotations in mind. Therefore, it can be quite difficult to understand what the meaning of engagement is and how one study relates to another. Engagement has been studied not only in human-human, but also in human-agent interactions, i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on different factors involved in engagement studies, distinguishing especially between those studies that address task and social engagement, involve children and adults, and are conducted in a lab or aimed at long-term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.


A Systematic Review of Robotic Rehabilitation for Cognitive Training.

  • Fengpei Yuan et al.
  • Frontiers in Robotics and AI
  • 2021

A large and increasing number of people around the world experience cognitive disability. Rehabilitation robotics has provided promising training and assistance approaches to mitigate cognitive deficits. In this article, we carried out a systematic review on recent developments in robot-assisted cognitive training. We included 99 articles in this work and described their applications, enabling technologies, experiments, and products. We also conducted a meta-analysis on the articles that evaluated robot-assisted cognitive training protocols with primary end users (i.e., people with cognitive disability). We identified major limitations in current robotic rehabilitation for cognitive training, including small sample sizes, non-standard measurement of training, and uncontrollable factors. There are still multifaceted challenges in this field, including ethical issues, user-centered (or stakeholder-centered) design, reliability, trust, cost-effectiveness, and personalization of the robot-assisted cognitive training system. Future research should also take into consideration human-robot collaboration and social cognition to facilitate natural human-robot interaction.


An Opposite-Bending-and-Extension Soft Robotic Manipulator for Delicate Grasping in Shallow Water.

  • Zheyuan Gong et al.
  • Frontiers in Robotics and AI
  • 2019

Collecting seafood animals (such as sea cucumbers, sea echini, scallops, etc.) cultivated in shallow water (water depth: ~30 m) is a profitable and emerging field that requires robotics to replace human divers. Soft robotics has several promising features (e.g., safe contact with objects, light weight, etc.) for performing such a task. In this paper, we implement a soft manipulator with an opposite-bending-and-extension structure. A simple and rapid inverse kinematics method is proposed to control the spatial location and trajectory of the underwater soft manipulator's end effector. We introduce the actuation hardware of the prototype, and then characterize the trajectory and workspace. We find that the prototype can track fundamental trajectories such as a line and an arc well. Finally, we construct a small underwater robot and demonstrate that the underwater soft manipulator successfully collects multiple irregularly shaped seafood animals of different sizes and stiffness at the bottom of the natural oceanic environment (water depth: ~10 m).


Predicting the metabolic cost of exoskeleton-assisted squatting using foot pressure features and machine learning.

  • Sruthi Ramadurai et al.
  • Frontiers in Robotics and AI
  • 2023

Introduction: Recent studies found that wearable exoskeletons can reduce physical effort and fatigue during squatting. In particular, subject-specific assistance helped to significantly reduce physical effort, shown by reduced metabolic cost, using human-in-the-loop optimization of the exoskeleton parameters. However, measuring metabolic cost using respiratory data has limitations, such as long estimation times, presence of noise, and user discomfort. A recent study suggests that foot contact forces can address those challenges and be used as an alternative metric to the metabolic cost to personalize wearable robot assistance during walking. Methods: In this study, we propose that foot center of pressure (CoP) features can be used to estimate the metabolic cost of squatting using a machine learning method. Five subjects' foot pressure and metabolic cost data were collected as they performed squats with an ankle exoskeleton at different assistance conditions in our prior study. In this study, we extracted statistical features from the CoP squat trajectories and fed them as input to a random forest model, with the metabolic cost as the output. Results: The model predicted the metabolic cost with a mean error of 0.55 W/kg on unseen test data, with a high correlation (r = 0.89, p < 0.01) between the true and predicted cost. The features of the CoP trajectory in the medial-lateral direction of the foot (xCoP), which relate to ankle eversion-inversion, were found to be important and highly correlated with metabolic cost. Conclusion: Our findings indicate that increased ankle eversion (outward roll of the ankle), which reflects a suboptimal squatting strategy, results in higher metabolic cost. Higher ankle eversion has been linked with the etiology of chronic lower limb injuries. Hence, a CoP-based cost function in human-in-the-loop optimization could offer several advantages, such as reduced estimation time, injury risk mitigation, and better user comfort.
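
A minimal sketch of the kind of pipeline the abstract describes, with assumed feature definitions and synthetic data standing in for the real xCoP traces and metabolic measurements:

```python
# Illustrative sketch (assumed features and synthetic data): statistics of the
# medial-lateral CoP trajectory (xCoP) fed to a random forest to predict metabolic cost.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def cop_features(xcop):
    """Simple per-trial statistics of the medial-lateral CoP trajectory."""
    return np.array([xcop.mean(), xcop.std(), xcop.min(), xcop.max(),
                     np.ptp(xcop), np.mean(np.abs(np.diff(xcop)))])

# Toy data: 100 squat trials, each a 500-sample xCoP trace, plus metabolic cost (W/kg)
rng = np.random.default_rng(1)
traces = [rng.normal(loc=0.0, scale=0.02 + 0.01 * k / 100, size=500) for k in range(100)]
X = np.vstack([cop_features(t) for t in traces])
y = 3.0 + 20.0 * X[:, 1] + rng.normal(scale=0.1, size=100)      # cost grows with CoP spread

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE (W/kg):", np.mean(np.abs(pred - y_te)))
print("r:", np.corrcoef(pred, y_te)[0, 1])
```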


Autonomous Exploration of Small Bodies Toward Greater Autonomy for Deep Space Missions.

  • Issa A D Nesnas et al.
  • Frontiers in Robotics and AI
  • 2021

Autonomy is becoming increasingly important for the robotic exploration of unpredictable environments. One such example is the approach, proximity operation, and surface exploration of small bodies. In this article, we present an overview of an estimation framework to approach and land on small bodies as a key functional capability for an autonomous small-body explorer. We use a multi-phase perception/estimation pipeline with interconnected and overlapping measurements and algorithms to characterize and reach the body, from millions of kilometers down to its surface. We consider a notional spacecraft design that operates across all phases from approach to landing and to maneuvering on the surface of the microgravity body. This SmallSat design makes accommodations to simplify autonomous surface operations. The estimation pipeline combines state-of-the-art techniques with new approaches to estimating the target's unknown properties across all phases. Centroid and light-curve algorithms estimate the body-spacecraft relative trajectory and rotation, respectively, using a priori knowledge of the initial relative orbit. A new shape-from-silhouette algorithm estimates the pole (i.e., rotation axis) and the initial visual hull that seeds subsequent feature tracking as the body gets more resolved in the narrow field-of-view imager. Feature tracking refines the pole orientation and shape of the body for estimating initial gravity to enable safe close approach. A coarse-shape reconstruction algorithm is used to identify initial landable regions whose hazardous nature would subsequently be assessed by dense 3D reconstruction. Slope stability, thermal, occlusion, and terra-mechanical hazards would be assessed on densely reconstructed regions and continually refined prior to landing. We simulated a mission scenario for approaching a hypothetical small body whose motion and shape were unknown a priori, starting from thousands of kilometers down to 20 km. Results indicate the feasibility of recovering the relative body motion and shape solely relying on onboard measurements and estimates with their associated uncertainties and without human input. Current work continues to mature and characterize the algorithms for the last phases of the estimation framework to land on the surface.


A Configurable Architecture for Two Degree-of-Freedom Variable Stiffness Actuators to Match the Compliant Behavior of Human Joints.

  • Simon Lemerle et al.
  • Frontiers in Robotics and AI
  • 2021

Living beings modulate the impedance of their joints to interact proficiently, robustly, and safely with the environment. These observations inspired the design of soft articulated robots with the development of Variable Impedance and Variable Stiffness Actuators. However, designing them remains a challenging task due to their mechanical complexity, encumbrance, and weight, but also due to the different specifications that the wide range of applications requires. For instance, as prostheses or parts of humanoid systems, there is currently a need for multi-degree-of-freedom joints that have abilities similar to those of human articulations. Toward this goal, we propose a new compact and configurable design for a two-degree-of-freedom variable stiffness joint that can match the passive behavior of a human wrist and ankle. Using only three motors, this joint can control its equilibrium orientation around two perpendicular axes and its overall stiffness as a one-dimensional parameter, like the co-contraction of human muscles. The kinematic architecture builds upon a state-of-the-art rigid parallel mechanism with the addition of nonlinear elastic elements to allow the control of the stiffness. The mechanical parameters of the proposed system can be optimized to match desired passive compliant behaviors and to fit various applications (e.g., prosthetic wrists or ankles, artificial wrists, etc.). After describing the joint structure, we detail the kinetostatic analysis to derive the compliant behavior as a function of the design parameters and to prove the variable stiffness ability of the system. Besides, we provide sets of design parameters to match the passive compliance of either a human wrist or ankle. Moreover, to show the versatility of the proposed joint architecture and as guidelines for the future designer, we describe the influence of the main design parameters on the system stiffness characteristic and show the potential of the design for more complex applications.


Autonomous Robotic Point-of-Care Ultrasound Imaging for Monitoring of COVID-19-Induced Pulmonary Diseases.

  • Lidia Al-Zogbi et al.
  • Frontiers in Robotics and AI
  • 2021

The COVID-19 pandemic has emerged as a serious global health crisis, with the predominant morbidity and mortality linked to pulmonary involvement. Point-of-care ultrasound (POCUS) scanning is becoming one of the primary determinative methods for its diagnosis and staging; however, it requires close contact between healthcare workers and patients, increasing the risk of infection. This work thus proposes an autonomous robotic solution that enables POCUS scanning of COVID-19 patients' lungs for diagnosis and staging. An algorithm was developed for approximating the optimal position of an ultrasound probe on a patient from prior CT scans to reach predefined lung infiltrates. In the absence of prior CT scans, a deep learning method was developed for predicting 3D landmark positions of a human ribcage given a torso surface model. The landmarks, combined with the surface model, are subsequently used for estimating the optimal ultrasound probe position on the patient for imaging infiltrates. These algorithms, combined with a force-displacement profile collection methodology, enabled the system to successfully image all points of interest in a simulated experimental setup with an average accuracy of 20.6 ± 14.7 mm using prior CT scans, and 19.8 ± 16.9 mm using only ribcage landmark estimation. A study on a full torso ultrasound phantom showed that autonomously acquired ultrasound images were 100% interpretable when using force feedback with prior CT and 88% with landmark estimation, compared to 75 and 58% without force feedback, respectively. This demonstrates the preliminary feasibility of the system and its potential for offering a solution to help mitigate the spread of COVID-19 in vulnerable environments.


Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives.

  • Matteo Spezialetti et al.
  • Frontiers in Robotics and AI
  • 2020

A fascinating challenge in the field of human-robot interaction is the possibility to endow robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human-machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human-robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.


Toward Conductive Polymer-Based Soft Milli-Robots for Vacuum Applications.

  • Amine Benouhiba et al.
  • Frontiers in Robotics and AI
  • 2019

For the last two decades, the development of conducting polymers (CPs) as artificial muscles by materials researchers and chemists has made establishing a reliable and repeatable synthesis of such materials possible. CP-based milli-robots were mostly unknown in soft robotics; today, however, they play a vital role in robotics and smart-materials forums. Indeed, this subclass of soft robots has reached a crucial moment in its history, a moment where such robots can display rather interesting features, based on established foundations in terms of modeling, control, sensing, and planning in various applications. The purpose of this paper is to present the potential of conductive polymer-based soft milli-robots as high-performance devices for vacuum applications. To that end, a trilayer polypyrrole-based actuator was first used inside a scanning electron microscope (SEM) and characterized for different applied voltages over a relatively long period. Additionally, the tip positioning of the cantilever was also controlled using closed-loop control. Furthermore, as a proof of concept for more complex soft milli-robots, an S-shaped soft milli-robot was modeled using a hybrid model comprising a multi-physics model and a kinematic model. It was then fabricated using laser machining and finally characterized using its tip displacement. Polypyrrole-based soft milli-robots proved to have tremendous potential as high-performance soft robots at the microscale for a wide range of applications, including SEM micro-manipulation as well as biomedical applications.


Stroke Affected Lower Limbs Rehabilitation Combining Virtual Reality With Tactile Feedback.

  • Alexander V Zakharov et al.
  • Frontiers in Robotics and AI
  • 2020

In our study, we tested a combination of virtual reality (VR) and robotics in an original adjuvant method for post-stroke lower limb walking restoration in the acute phase, using a simulation with visual and tactile biofeedback based on VR immersion and physical impact to the soles of the patients. The duration of the adjuvant therapy was 10 daily sessions of 15 min each. The study showed the following significant rehabilitation progress in the Control (N = 27) vs. Experimental (N = 35) groups, respectively: 1.56 ± 0.29 (mean ± SD) vs. 2.51 ± 0.31 points on the Rivermead Mobility Index (p = 0.0286); 2.15 ± 0.84 vs. 6.29 ± 1.20 points on the Fugl-Meyer Assessment Lower Extremities scale (p = 0.0127); and 6.19 ± 1.36 vs. 13.49 ± 2.26 points on the Berg Balance scale (p = 0.0163). P-values were obtained by the Mann-Whitney U test. The simple and intuitive mechanism of rehabilitation, including through the use of sensory and semantic components, allows the therapy of patients with diaschisis and afferent and motor aphasia. Its safety allows the proposed method of therapy to be applied at the earliest stage of a stroke. We consider the main finding of this study to be that rehabilitation with implicit interaction with the VR environment, produced by the robotic action, has a measurable, significant influence on the restoration of the affected motor function of the lower limbs compared with standard rehabilitation therapy.
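
For reference, the group comparison above relies on the Mann-Whitney U test; a minimal sketch with made-up gain scores (not the study's data) shows how such a comparison is typically computed:

```python
# Illustrative sketch (made-up scores, not the study's data): comparing per-group
# rehabilitation gains with the Mann-Whitney U test reported in the abstract.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
control_gain = rng.normal(loc=1.6, scale=1.5, size=27)       # e.g., Rivermead index change
experimental_gain = rng.normal(loc=2.5, scale=1.8, size=35)

stat, p = mannwhitneyu(experimental_gain, control_gain, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```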


Shared Control of a Powered Exoskeleton and Functional Electrical Stimulation Using Iterative Learning.

  • Vahidreza Molazadeh et al.
  • Frontiers in Robotics and AI
  • 2021

A hybrid exoskeleton comprising a powered exoskeleton and functional electrical stimulation (FES) is a promising technology for restoration of standing and walking functions after a neurological injury. Its shared control remains challenging due to the need to optimally distribute joint torques between FES and the powered exoskeleton while compensating for the FES-induced muscle fatigue and ensuring performance despite highly nonlinear and uncertain skeletal muscle behavior. This study develops a bi-level hierarchical control design for shared control of a powered exoskeleton and FES to overcome these challenges. A higher-level neural network-based iterative learning controller (NNILC) is derived to generate torques needed to drive the hybrid system. Then, a low-level model predictive control (MPC)-based allocation strategy optimally distributes the torque contributions between FES and the exoskeleton's knee motors based on the muscle fatigue and recovery characteristics of a participant's quadriceps muscles. A Lyapunov-like stability analysis proves global asymptotic tracking of state-dependent desired joint trajectories. The experimental results on four non-disabled participants validate the effectiveness of the proposed NNILC-MPC framework. The root mean square error (RMSE) of the knee joint and the hip joint was reduced by 71.96 and 74.57%, respectively, in the fourth sit-to-stand iteration compared to the RMSE in the first iteration.
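
The paper's allocation layer is an MPC that accounts for muscle fatigue and recovery dynamics. As a toy illustration only of the underlying idea (penalize FES more as estimated fatigue grows while still meeting a desired joint torque), a single-step closed-form allocation might look like the following; all names and weights are assumptions:

```python
# Toy single-step allocation (an assumption-laden stand-in for the paper's MPC layer):
# split a desired knee torque between FES and the exoskeleton motor, penalizing FES
# more as estimated muscle fatigue grows.
def allocate_torque(tau_desired, fatigue, w_motor=1.0, w_fes_base=0.5):
    """fatigue in [0, 1]; returns (tau_fes, tau_motor) with tau_fes + tau_motor = tau_desired."""
    w_fes = w_fes_base / max(1e-6, 1.0 - fatigue)   # fatigued muscle makes FES "expensive"
    # Closed-form minimizer of w_fes*u_f**2 + w_motor*u_m**2 subject to u_f + u_m = tau
    tau_fes = tau_desired * w_motor / (w_fes + w_motor)
    tau_motor = tau_desired * w_fes / (w_fes + w_motor)
    return tau_fes, tau_motor

print(allocate_torque(30.0, fatigue=0.2))   # fresh muscle: FES takes a larger share
print(allocate_torque(30.0, fatigue=0.8))   # fatigued: the motor takes over
```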


