Searching across hundreds of databases


This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–20 of 126.

Underwater Robotics Competitions: The European Robotics League Emergency Robots Experience With FeelHippo AUV.

  • Matteo Franchi‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Underwater robots are nowadays employed for many different applications; during the last decades, a wide variety of robotic vehicles has been developed by both companies and research institutes, differing in shape, size, navigation system, and payload. While market needs constitute the real benchmark for commercial vehicles, novel approaches developed during research projects represent the standard for academia and research bodies. An interesting opportunity for the performance comparison of autonomous vehicles lies in robotics competitions, which serve as a useful testbed for state-of-the-art underwater technologies and a chance for the constructive evaluation of the strengths and weaknesses of the participating platforms. In this framework, over the last few years, the Department of Industrial Engineering of the University of Florence has participated in multiple robotics competitions, employing different vehicles. In particular, in September 2017 the team from the University of Florence took part in the European Robotics League Emergency Robots competition held in Piombino (Italy) using FeelHippo AUV, a compact and lightweight Autonomous Underwater Vehicle (AUV). Despite its size, FeelHippo AUV possesses a complete navigation system, able to offer good navigation accuracy, and diverse payload acquisition and analysis capabilities. This paper reports the main field results obtained by the team during the competition, with the aim of showing that it is possible to achieve satisfactory performance (in terms of both navigation precision and payload data acquisition and processing) even with small-size vehicles such as FeelHippo AUV.


Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review.

  • Zahraa Bassyouni‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

Recently, advancements in computational machinery have facilitated the integration of artificial intelligence (AI) into almost every field and industry. This fast-paced development in AI and sensing technologies has stirred an evolution in the realm of robotics. Concurrently, augmented reality (AR) applications are providing solutions to a myriad of robotics applications, such as demystifying robot motion intent and supporting intuitive control and feedback. In this paper, research papers combining the potentials of AI and AR in robotics over the last decade are presented and systematically reviewed. Four sources for data collection were utilized: Google Scholar, the Scopus database, the International Conference on Robotics and Automation 2020 proceedings, and the references and citations of all identified papers. A total of 29 papers were analyzed from two perspectives: a theme-based perspective showcasing the relation between AR and AI, and an application-based analysis highlighting how the robotics application was affected. These two sections are further categorized based on the type of robotics platform and the type of robotics application, respectively. We analyze the work done and highlight some of the prevailing limitations hindering the field. Results also explain how AR and AI can be combined to solve the model-mismatch paradigm by creating a closed feedback loop between the user and the robot. This forms a solid base for increasing the efficiency of the robotic application and enhancing the user's situational awareness, safety, and acceptance of AI robots. Our findings affirm the promising future for robust integration of AR and AI in numerous robotic applications.


Approaches for Efficiently Detecting Frontier Cells in Robotics Exploration.

  • Phillip Quin‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

Many robot exploration algorithms that are used to explore office, home, or outdoor environments, rely on the concept of frontier cells. Frontier cells define the border between known and unknown space. Frontier-based exploration is the process of repeatedly detecting frontiers and moving towards them, until there are no more frontiers and therefore no more unknown regions. The faster frontier cells can be detected, the more efficient exploration becomes. This paper proposes several algorithms for detecting frontiers. The first is called Naïve Active Area (NaïveAA) frontier detection and achieves frontier detection in constant time by only evaluating the cells in the active area defined by scans taken. The second algorithm is called Expanding-Wavefront Frontier Detection (EWFD) and uses frontiers from the previous timestep as a starting point for searching for frontiers in newly discovered space. The third approach is called Frontier-Tracing Frontier Detection (FTFD) and also uses the frontiers from the previous timestep as well as the endpoints of the scan, to determine the frontiers at the current timestep. Algorithms are compared to state-of-the-art algorithms such as Naïve, WFD, and WFD-INC. NaïveAA is shown to operate in constant time and therefore is suitable as a basic benchmark for frontier detection algorithms. EWFD and FTFD are found to be significantly faster than other algorithms.
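
The frontier-cell definition that all of these algorithms share can be sketched in a few lines. This is only the basic definition, not the paper's NaïveAA, EWFD, or FTFD implementations, and the grid values are illustrative:

```python
# A frontier cell is a FREE cell with at least one UNKNOWN 4-neighbor
# in an occupancy grid. Cell value conventions here are assumptions.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontiers(grid):
    """Return the set of (row, col) frontier cells in a 2D occupancy grid."""
    rows, cols = len(grid), len(grid[0])
    frontiers = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            # Check the four axis-aligned neighbors for unknown space.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.add((r, c))
                    break
    return frontiers

grid = [
    [FREE, FREE, UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE, FREE],
]
print(sorted(find_frontiers(grid)))  # → [(0, 1), (2, 2)]
```

The speedups in the paper come from restricting where this test is run: NaïveAA evaluates only the cells touched by the latest scan, while EWFD and FTFD seed the search with the previous timestep's frontiers.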


Neural Networks Predicting Microbial Fuel Cells Output for Soft Robotics Applications.

  • Michail-Antisthenis Tsompanas‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

The development of biodegradable soft robotics requires an appropriate eco-friendly source of energy. The use of Microbial Fuel Cells (MFCs) is suggested, as they can be designed entirely from soft materials with little or no negative effect on the environment. Nonetheless, their responsiveness and functionality are not as strictly defined as in other conventional technologies, e.g., lithium batteries. Consequently, the use of artificial intelligence methods in their control techniques is highly recommended. A neural network, namely a nonlinear autoregressive network with exogenous inputs (NARX), was employed to predict the electrical output of an MFC, given its previous outputs and feeding volumes. Thus, predicting MFC outputs as a time series enables accurate determination of the feeding intervals and quantities required for sustenance, which can be incorporated in the behavioural repertoire of a soft robot.
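
The regression setup a NARX-style model uses can be illustrated with the data preparation step: the next output is predicted from lagged outputs and lagged exogenous inputs. A minimal sketch with hypothetical voltages and feed volumes (not the authors' data, lag orders, or network):

```python
# Build the lagged (features, targets) dataset a NARX model is trained on:
# each feature row is [y[t-1], ..., y[t-p], u[t-1], ..., u[t-q]],
# and the target is y[t]. Lag orders p and q are illustrative.
def narx_dataset(y, u, p=2, q=2):
    X, T = [], []
    for t in range(max(p, q), len(y)):
        X.append(y[t - p:t][::-1] + u[t - q:t][::-1])  # most recent lag first
        T.append(y[t])
    return X, T

voltage = [0.40, 0.42, 0.45, 0.44, 0.47]  # hypothetical MFC output (V)
feed    = [1.0, 0.0, 1.0, 0.0, 1.0]       # hypothetical feed volumes (mL)
X, T = narx_dataset(voltage, feed)
print(X[0], T[0])  # → [0.42, 0.40, 0.0, 1.0] 0.45
```

Any nonlinear regressor fitted on `(X, T)` then yields one-step-ahead predictions, which can be iterated to forecast the series and plan feeding intervals.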


Machine Gaze: Self-Identification Through Play With a Computer Vision-Based Projection and Robotics System.

  • Ray Lc‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Children begin to develop self-awareness when they associate images and abilities with themselves. Such "construction of self" continues throughout adult life as we constantly cycle through different forms of self-awareness, seeking to redefine ourselves. Modern technologies like screens and artificial intelligence threaten to alter our development of self-awareness, because children and adults are exposed to machines, tele-presences, and displays that increasingly become part of human identity. We use avatars, invent digital lives, and augment ourselves with digital imprints that depart from reality, making the development of self-identification adjust to digital technologies that blur the boundary between us and our devices. To empower children and adults to see themselves and artificially intelligent machines as separately aware entities, we created the persona of a salvaged supermarket security camera refurbished and enhanced with the power of computer vision to detect human faces and project them on a large-scale 3D face sculpture. The surveillance camera system moves its head to point to human faces at times, but at other times humans have to get its attention by moving into its vicinity, creating a dynamic where audiences attempt to see their own faces on the sculpture by gazing into the machine's eye. We found that audiences began attaining an understanding of machines that interpret our faces as separate from our identities, with their own agendas and agencies that show in the way they serendipitously interact with us. The machine-projected images of us are their own interpretation rather than our own, distancing us from our digital analogs. In the accompanying workshop, participants learn how computer vision works by putting on disguises in order to escape from an algorithm that detects them as the same person by analyzing their faces. Participants learn that their own agency affects how machines interpret them, gaining an appreciation for the way their own identities and machines' awareness of them can be separate entities that can be manipulated for play. Together, the installation and workshop empower children and adults to think beyond identification with digital technology and to recognize the machine's own interpretive abilities, which lie separate from human beings' own self-awareness.


Stereoscopic Near-Infrared Fluorescence Imaging: A Proof of Concept Toward Real-Time Depth Perception in Surgical Robotics.

  • Maxwell J Munford‎ et al.
  • Frontiers in robotics and AI‎
  • 2019‎

The increasing use of surgical robotics has created a need for new medical imaging methods. Many assistive surgical robotic systems influence the surgeon's movements based on a model of constraints and boundaries driven by anatomy. This study aims to demonstrate that Near-Infrared Fluorescence (NIRF) imaging could be applied in surgical settings to provide subsurface mapping of capillaries beneath soft tissue as a method for imaging active constraints. The manufacture of a system for imaging in the near-infrared wavelength range is presented, followed by a description of the computational methods for stereo post-processing, and of the data acquisition and testing used to demonstrate that the proposed methods are viable. The results demonstrate that it is possible to use NIRF to image a capillary submerged up to 11 mm below a soft tissue phantom, over a range of angles from 0° to 45°. Phantom depth was measured to an accuracy of ±3 mm and phantom angle to a constant accuracy of ±1.6°. These findings suggest that NIRF could be used for the next generation of medical imaging in surgical robotics and provide a basis for future research into real-time depth perception in the mapping of active constraints.


Expectations and Perceptions of Healthcare Professionals for Robot Deployment in Hospital Environments During the COVID-19 Pandemic.

  • Sergio D Sierra Marín‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

Several challenges to guaranteeing medical care have been exposed during the current COVID-19 pandemic. Although the literature has shown some robotics applications that overcome the potential hazards and risks in hospital environments, the implementation of those developments is limited, and few studies measure clinicians' perception and acceptance. This work presents the design and implementation of several perception questionnaires to assess healthcare providers' level of acceptance of, and education toward, robotics for COVID-19 control in clinical scenarios. Specifically, 41 healthcare professionals satisfactorily completed the surveys, exhibiting a low level of knowledge about robotics applications in this scenario. Likewise, the surveys revealed that the fear of being replaced by robots remains in the medical community. In the Colombian context, 82.9% of participants indicated a positive perception concerning the development and implementation of robotics in clinical environments. Finally, in general terms, the participants exhibited a positive attitude toward using robots and recommended them to be used in the current panorama.


Embodied Computational Evolution: Feedback Between Development and Evolution in Simulated Biorobots.

  • Joshua Hawthorne-Madell‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

Given that selection removes genetic variance from evolving populations, thereby reducing exploration opportunities, it is important to find mechanisms that create genetic variation without the disruption of adapted genes and genomes caused by random mutation. Just such an alternative is offered by random epigenetic error, a developmental process that acts on materials and parts expressed by the genome. In this system of embodied computational evolution, simulated within a physics engine, epigenetic error was instantiated in an explicit genotype-to-phenotype map as transcription error at the initiation of gene expression. The hypothesis was that transcription error would create genetic variance by shielding genes from the direct impact of selection, creating, in the process, masquerading genomes. To test this hypothesis, populations of simulated embodied biorobots and their developmental systems were evolved under steady directional selection as equivalent rates of random mutation and random transcriptional error were covaried systematically in an 11 × 11 fully factorial experimental design. In each of the 121 different experimental conditions (unique combinations of mutation and transcription error), the same set of 10 randomly created replicate populations of 60 individuals was evolved. Selection for the improved locomotor behavior of individuals led to increased mean fitness of populations over 100 generations at nearly all levels and combinations of mutation and transcription error. When the effects of both types of error were partitioned statistically, increasing transcription error was shown to increase the final genetic variance of populations, incurring a fitness cost but acting on variance independently and differently from genetic mutation. Thus, random epigenetic errors in development feed back through selection of individuals with masquerading genomes to the population's genetic variance over generational time. Random developmental processes offer an additional mechanism for exploration by increasing genetic variation in the face of steady, directional selection.


Exploiting Robot Hand Compliance and Environmental Constraints for Edge Grasps.

  • Joao Bimbo‎ et al.
  • Frontiers in robotics and AI‎
  • 2019‎

This paper presents a method to grasp objects that cannot be picked directly from a table, using a soft, underactuated hand. These grasps are achieved by dragging the object to the edge of a table, and grasping it from the protruding part, performing so-called slide-to-edge grasps. This type of approach, which uses the environment to facilitate the grasp, is named Environmental Constraint Exploitation (ECE), and has been shown to improve the robustness of grasps while reducing the planning effort. The paper proposes two strategies, namely Continuous Slide and Grasp and Pivot and Re-Grasp, that are designed to deal with different objects. In the first strategy, the hand is positioned over the object and assumed to stick to it during the sliding until the edge, where the fingers wrap around the object and pick it up. In the second strategy, instead, the sliding motion is performed using pivoting, and thus the object is allowed to rotate with respect to the hand that drags it toward the edge. Then, as soon as the object reaches the desired position, the hand detaches from the object and moves to grasp the object from the side. In both strategies, the hand positioning for grasping the object is implemented using a recently proposed functional model for soft hands, the closure signature, whereas the sliding motion on the table is executed by using a hybrid force-velocity controller. We conducted 320 grasping trials with 16 different objects using a soft hand attached to a collaborative robot arm. Experiments showed that the Continuous Slide and Grasp is more suitable for small objects (e.g., a credit card), whereas the Pivot and Re-Grasp performs better with larger objects (e.g., a big book). The gathered data were used to train a classifier that selects the most suitable strategy to use, according to the object size and weight. Implementing ECE strategies with soft hands is a first step toward their use in real-world scenarios, where the environment should be seen more as a help than as a hindrance.


Analysis of Compensatory Movements Using a Supernumerary Robotic Hand for Upper Limb Assistance.

  • Martina Rossero‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Recently, extratheses, also known as Supernumerary Robotic Limbs (SRLs), have emerged as a new trend in the field of assistive and rehabilitation devices. We proposed the SoftHand X, a system composed of an anthropomorphic soft hand extrathesis, with a gravity support boom and a control interface for the patient. In preliminary tests, the system exhibited a positive outlook toward assisting impaired people during daily life activities and fighting learned non-use of the impaired arm. However, similar to many robot-aided therapies, the use of the system may induce side effects that can be detrimental and worsen patients' conditions. One of the most common is the onset of alternative grasping strategies and compensatory movements, which clinicians absolutely need to counter in physical therapy. Before embarking on systematic experimentation with the SoftHand X on patients, it is essential to demonstrate that the system does not lead to an increase in compensation habits. This paper provides a detailed description of the compensatory movements performed by healthy subjects using the SoftHand X. Eleven right-handed healthy subjects took part in an experimental protocol in which kinematic data of the upper body and EMG signals of the arm were acquired. Each subject executed tasks with and without the robotic system, considering the latter situation as the reference of optimal behavior. A comparison between two different configurations of the robotic hand was performed to understand whether this aspect may affect the compensatory movements. Results demonstrated that the use of the apparatus reduces the range of motion of the wrist, elbow, and shoulder, while it increases the range of the trunk and head movements. On the other hand, EMG analysis indicated that muscle activation was very similar among all the conditions. The results obtained suggest that the system may be used as an assistive device without causing over-use of the arm joints, and open the way to clinical trials with patients.


Designing Ethical Social Robots: A Longitudinal Field Study With Older Adults.

  • Anouk van Maris‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Emotional deception and emotional attachment are regarded as ethical concerns in human-robot interaction. Considering these concerns is essential, particularly as little is known about longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where people interacted with a robot in a didactic setting for eight sessions over a period of 4 weeks. The robot would show either non-emotive or emotive behavior during these interactions in order to investigate emotional deception. Questionnaires were given to investigate participants' acceptance of the robot, perception of the social interactions with the robot and attachment to the robot. Results show that the robot's behavior did not seem to influence participants' acceptance of the robot, perception of the interaction or attachment to the robot. Time did not appear to influence participants' level of attachment to the robot, which ranged from low to medium. The perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions (and perhaps resulting in users being deceived) in a didactic setting may not by default negatively influence participants' acceptance and perception of the robot, and that older adults may not become distressed if the robot would break or be taken away from them, as attachment to the robot in this didactic setting was not high. However, more research is required as there may be other factors influencing these ethical concerns, and support through other measurements than questionnaires is required to be able to draw conclusions regarding these concerns.


Detection of Foreign Bodies in Soft Foods Employing Tactile Image Sensor.

  • Kazuhiro Shimonomura‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

In the inspection work involving foodstuffs in food factories, there are cases where people not only visually inspect foodstuffs, but must also physically touch foodstuffs with their hands to find foreign or undesirable objects mixed in the product. To contribute to the automation of the inspection process, this paper proposes a method for detecting foreign objects in food based on differences in hardness using a camera-based tactile image sensor. Because the foreign objects to be detected are often small, the tactile sensor requires a high spatial resolution. In addition, inspection work in food factories requires a sufficient inspection speed. The proposed cylindrical tactile image sensor meets these requirements because it can efficiently acquire high-resolution tactile images with a camera mounted inside while rolling the cylindrical sensor surface over the target object. By analyzing the images obtained from the tactile image sensor, we detected the presence of foreign objects and their locations. By using a reflective membrane-type sensor surface with high sensitivity, small and hard foreign bodies of sub-millimeter size mixed in with soft food were successfully detected. The effectiveness of the proposed method was confirmed through experiments to detect shell fragments left on the surface of raw shrimp and bones left in fish fillets.


Online Morphological Adaptation for Tactile Sensing Augmentation.

  • Josie Hughes‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

Sensor morphology and structure can significantly aid and improve tactile sensing capabilities, through mechanisms such as improved sensitivity or morphological computation. However, different tactile tasks require different morphologies, posing a challenge as to how best to design sensors, and also how to enable sensor morphology to be varied. We introduce a jamming filter which, when placed over a tactile sensor, allows the filter to be shaped and molded online, thus varying the sensor structure. We demonstrate how this is beneficial for sensory tasks, analyzing how the change in sensor structure varies the information that is gained using the sensor. Moreover, we show that appropriate morphology can significantly influence discrimination, and observe how the selection of an appropriate filter can increase object classification accuracy by up to 28% when using standard classifiers.


Relationship Between Muscular Activity and Assistance Magnitude for a Myoelectric Model Based Controlled Exosuit.

  • Francesco Missiroli‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

The growing field of soft wearable exosuits is gradually gaining ground and proposing new complementary solutions in assistive technology, with several advantages in terms of portability, kinematic transparency, ergonomics, and metabolic efficiency. These are palatable benefits that can be exploited in several applications, ranging from strength and resistance augmentation in industrial scenarios to assistance or rehabilitation for people with motor impairments. To be effective, however, an exosuit needs to work synergistically with the human and match specific requirements in terms of both movement kinematics and dynamics: an accurate and timely intention-detection strategy is the paramount aspect, one that assumes fundamental importance for the acceptance and usability of such technology. We previously proposed to tackle this challenge by means of a model-based myoelectric controller, treating the exosuit as an external muscular layer in parallel to the human biomechanics and, as such, controlled by the same efferent motor commands as biological muscles. However, previous studies that used classical control methods demonstrated that the level of the device's intervention and the effectiveness of task completion are not linearly related: therefore, using a newly implemented EMG-driven controller, we isolated and characterized the relationship between assistance magnitude and muscular benefits, with the goal of finding a range of assistance that could make the controller versatile for both dynamic and static tasks. Ten healthy participants performed an experiment resembling functional activities of daily living in separate assistance conditions: without the device's active support and with different levels of intervention by the exosuit. Higher assistance levels resulted in larger reductions in the activity of the muscles augmented by the suit actuation and good performance in motion accuracy, despite a decrease in movement velocities with respect to the no-assistance condition. Moreover, increasing the torque magnitude provided by the exosuit resulted in a significant reduction in the biological torque at the elbow joint and in a progressive delay in the onset of muscular fatigue. Thus, contrary to classical force and proportional myoelectric schemes, the implementation of an appropriately tailored EMG-driven model-based controller makes it possible to naturally match the user's intention and provide an assistance level working symbiotically with the human biomechanics.


Tracking All Members of a Honey Bee Colony Over Their Lifetime Using Learned Models of Correspondence.

  • Franziska Boenisch‎ et al.
  • Frontiers in robotics and AI‎
  • 2018‎

Computational approaches to the analysis of collective behavior in social insects increasingly rely on motion paths as an intermediate data layer from which one can infer individual behaviors or social interactions. Honey bees are a popular model for learning and memory. Previous experience has been shown to affect and modulate future social interactions. So far, no lifetime history observations have been reported for all bees of a colony. In a previous work we introduced a recording setup customized to track up to 4,000 marked bees over several weeks. Due to detection and decoding errors of the bee markers, linking the correct correspondences through time is non-trivial. In this contribution we present an in-depth description of the underlying multi-step algorithm which produces motion paths, and also improves the marker decoding accuracy significantly. The proposed solution employs two classifiers to predict the correspondence of two consecutive detections in the first step, and two tracklets in the second. We automatically tracked ~2,000 marked honey bees over 10 weeks with inexpensive recording hardware using markers without any error correction bits. We found that the proposed two-step tracking reduced incorrect ID decodings from initially ~13% to around 2% post-tracking. Alongside this paper, we publish the first trajectory dataset for all bees in a colony, extracted from ~3 million images covering 3 days. We invite researchers to join the collective scientific effort to investigate this intriguing animal system. All components of our system are open-source.
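
The first linking step can be sketched schematically. The authors use learned classifiers to score detection-to-detection correspondences; in this hypothetical stand-in a simple Euclidean-distance score plays that role, with greedy one-to-one assignment:

```python
import math

def link_frames(prev, curr, max_dist=5.0):
    """Greedily match detections (id, x, y) across two consecutive frames.
    A learned correspondence classifier would replace the distance score;
    here lower distance means a more likely match. Returns (prev_id, curr_id)
    pairs, closest matches assigned first."""
    # Score every candidate pair within the gating radius.
    scored = []
    for pid, px, py in prev:
        for cid, cx, cy in curr:
            d = math.hypot(cx - px, cy - py)
            if d <= max_dist:
                scored.append((d, pid, cid))
    scored.sort()  # best (smallest-distance) candidates first
    used_p, used_c, links = set(), set(), []
    for d, pid, cid in scored:
        if pid not in used_p and cid not in used_c:
            links.append((pid, cid))
            used_p.add(pid)
            used_c.add(cid)
    return links

prev = [("bee1", 0.0, 0.0), ("bee2", 10.0, 0.0)]
curr = [("a", 1.0, 0.5), ("b", 9.0, 0.2)]
print(link_frames(prev, curr))  # → [('bee2', 'b'), ('bee1', 'a')]
```

The paper's second step applies the same idea one level up, linking short tracklets rather than single detections, which is what suppresses most of the residual ID-decoding errors.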


In the Wild HRI Scenario: Influence of Regulatory Focus Theory.

  • Roxana Agrigoroaie‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Research related to regulatory focus theory has shown that the way in which a message is conveyed can increase its effectiveness. While different research fields have used this theory, it has received little real attention in human-robot interaction (HRI). In this paper, we investigate it in an in-the-wild scenario. More specifically, we are interested in how individuals react when a robot suddenly appears at their office doors. Will they interact with it or will they ignore it? We report the results from our experimental study in which the robot approached 42 individuals. Twenty-nine of them interacted with the robot, while the others either ignored it or avoided any interaction with it. The robot displayed two types of behavior (i.e., promotion or prevention). Our results show that individuals who interacted with a robot that matched their regulatory focus type interacted with it significantly longer than individuals who did not experience regulatory fit. Other qualitative results are also reported, together with some reactions from the participants.


Behavior adaptation for mobile robots via semantic map compositions of constraint-based controllers.

  • Hao Liang Chen‎ et al.
  • Frontiers in robotics and AI‎
  • 2023‎

Specifying and solving Constraint-based Optimization Problems (COP) has become a mainstream technology for advanced motion control of mobile robots. COP programming still requires expert knowledge to transform specific application context into the right configuration of the COP parameters (i.e., objective functions and constraints). The research contribution of this paper is a methodology to couple the context knowledge of application developers to the robot knowledge of control engineers, which, to our knowledge, has not yet been carried out. The former is offered a selected set of symbolic descriptions of the robots' capabilities (its so-called "behavior semantics") that are translated in control actions via "templates" in a "semantic map"; the latter contains the parameters that cover contextual dependencies in an application and robot vendor-independent way. The translation from semantics to control templates takes place in an "interaction layer" that contains 1) generic knowledge about robot motion capabilities (e.g., depending on the kinematic type of the robots), 2) spatial queries to extract relevant COP parameters from a semantic map (e.g., what is the impact of entering different types of "collision areas"), and 3) generic application knowledge (e.g., how the robots' behavior is impacted by priorities, emergency, safety, and prudence). This particular design of, and interplay between, the application, interaction, and control layers provides a structured, conceptually simple approach to advance the complexity of mobile robot applications. Eventually, industry-wide cooperation between representatives of the application and control communities should result in an interaction layer with different standardized versions of semantic complexity.
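
The "semantic map to COP parameters" translation the abstract describes can be sketched as a template lookup. All labels, parameter names, values, and the combination rule below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical template table: symbolic area labels from a semantic map
# are mapped to concrete constraint parameters for the motion controller.
TEMPLATES = {
    "corridor":        {"max_speed": 1.0, "clearance": 0.3},
    "collision_area":  {"max_speed": 0.2, "clearance": 0.8},
    "emergency_route": {"max_speed": 1.5, "clearance": 0.5},
}

def cop_parameters(area_labels):
    """Combine the templates of all areas the robot currently occupies,
    conservatively: slowest allowed speed, largest required clearance."""
    params = {"max_speed": float("inf"), "clearance": 0.0}
    for label in area_labels:
        t = TEMPLATES[label]
        params["max_speed"] = min(params["max_speed"], t["max_speed"])
        params["clearance"] = max(params["clearance"], t["clearance"])
    return params

print(cop_parameters(["corridor", "collision_area"]))
# → {'max_speed': 0.2, 'clearance': 0.8}
```

In the paper's architecture this lookup lives in the interaction layer, so application developers only touch symbolic labels while control engineers own the numeric constraint configuration.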


How to Model Tendon-Driven Continuum Robots and Benchmark Modelling Performance.

  • Priyanka Rao‎ et al.
  • Frontiers in robotics and AI‎
  • 2020‎

Tendon actuation is one of the most prominent actuation principles for continuum robots. To date, a wide variety of modelling approaches has been derived to describe the deformations of tendon-driven continuum robots. Motivated by the need for a comprehensive overview of existing methodologies, this work summarizes and outlines state-of-the-art modelling approaches. In particular, the most relevant models are classified based on backbone representations and kinematic as well as static assumptions. Numerical case studies are conducted to compare the performance of representative modelling approaches from the current state-of-the-art, considering varying robot parameters and scenarios. The approaches show different performances in terms of accuracy and computation time. Guidelines for the selection of the most suitable approach for given designs of tendon-driven continuum robots and applications are deduced from these results.
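
Among the backbone representations such surveys classify, the simplest widely used one is the constant-curvature arc. A minimal planar sketch of its geometric core (the surveyed models are 3D and additionally account for tendon routing and statics):

```python
import math

def pcc_tip(kappa, L):
    """Planar tip position (x, y) of one constant-curvature segment of
    arc length L bent with curvature kappa; the bend angle is kappa * L."""
    if abs(kappa) < 1e-9:
        return (0.0, L)  # straight-segment limit
    return ((1 - math.cos(kappa * L)) / kappa,   # lateral deflection
            math.sin(kappa * L) / kappa)         # height along the arc

# Straight segment: the tip lies on the axis at distance L.
print(pcc_tip(0.0, 0.1))
# A 90-degree bend (kappa * L = pi/2): both coordinates equal L * 2/pi.
print(pcc_tip(math.pi / 0.2, 0.1))
```

Tendon-driven models then relate tendon displacements to `kappa`, and the more accurate approaches in the survey replace this closed-form arc with Cosserat-rod or finite-element backbones at higher computational cost.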


Rowing Simulator Modulates Water Density to Foster Motor Learning.

  • Ekin Basalp‎ et al.
  • Frontiers in robotics and AI‎
  • 2019‎

Although robot-assisted training is present in various fields such as sports engineering and rehabilitation, the provision of training strategies that optimally support individual motor learning remains a challenge. The literature has shown that guidance strategies are useful for beginners, while skilled trainees benefit from challenging conditions. The Challenge Point Theory supports this view: learning depends on the information available to the learner, which acts as a challenge, so learning is fostered when the amount of information is matched to the trainee's skill. Even though the framework explains the importance of difficulty modulation, there are no practical guidelines for complex dynamic tasks on how to match the difficulty to the trainee's skill progress. Therefore, the goal of this study was to determine the impact of a modulated task-difficulty scheme during the training sessions on learning a complex motor task, without distorting the nature of the task. In this 3-day protocol study, we compared two groups of naïve participants learning a sweep rowing task in a highly sophisticated rowing simulator. During training, both groups received concurrent visual feedback displaying the requested oar movement. The control group performed the task under constant difficulty in the training sessions. The experimental group's task difficulty was modulated by changing the virtual water density, which generated different heaviness of the simulated water-oar interaction and thus yielded practice variability. Learning was assessed in terms of spatial and velocity-magnitude errors and the variability of these metrics. Results of the final-day tests revealed that both groups reduced their error and variability for the chosen metrics. Notably, in addition to well-established concurrent visual feedback and knowledge of results, the experimental group's variable training protocol with modulated difficulty showed the potential to be advantageous for spatial consistency and velocity accuracy. The outcomes of the training and test runs indicate that we could successfully alter the trainees' performance by changing the density of the virtual water. Therefore, a follow-up study is necessary to investigate how to match different density values to the skill and performance improvement of the participants.


Can the Shape of a Planar Pathway Be Estimated Using Proximal Forces of Inserting a Flexible Shaft?

  • Jiajun Liu‎ et al.
  • Frontiers in robotics and AI‎
  • 2021‎

The shape information of flexible endoscopes or other continuum structures, e.g., intravascular catheters, is needed for accurate navigation, motion compensation, and haptic feedback in robotic surgical systems. Existing methods rely on optical fiber sensors, electromagnetic sensors, or expensive medical imaging modalities such as X-ray fluoroscopy, magnetic resonance imaging, and ultrasound to obtain the shape information of these flexible medical devices. Here, we propose to estimate the shape/curvature of a continuum structure by measuring the force required to insert a flexible shaft into the internal channel/pathway of the continuum. We found that there is a consistent correlation between the measured insertion force and the curvature of the planar continuum pathway. A testbed was built to insert a flexible shaft into a planar continuum pathway with adjustable shapes. The insertion forces, insertion displacement, and the shapes of the pathway were recorded. A neural network model was developed to capture this correlation based on the training data collected on the testbed. The trained model, evaluated on the testing data, can accurately estimate the curvature magnitudes and the accumulated bending angles of the pathway based solely on the insertion force measured at the proximal end of the shaft. The approach may be used to estimate the curvature magnitudes and accumulated bending angles of flexible endoscopic surgical robots or catheters for accurate motion compensation, haptic force feedback, localization, or navigation. The advantage of this approach is that the proximal force can be easily obtained outside the pathway or continuum structure, without any embedded sensor in the continuum structure. Future work is needed to further investigate the correlation between insertion forces and the pathway shape, and to enhance the capability of the model in estimating more complex shapes, e.g., spatial shapes with multiple bends.
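The core idea, learning a mapping from a proximal insertion-force reading to a curvature estimate, can be caricatured with ordinary least squares on synthetic data. The linear force-curvature relation and all numbers below are invented for illustration; the paper trains a neural network on real testbed recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for testbed data: assume insertion force grows roughly
# linearly with pathway curvature, plus sensor noise (an invented toy model).
true_gain = 2.5                                       # hypothetical N per (1/m)
curvatures = rng.uniform(0.0, 5.0, size=200)          # 1/m
forces = true_gain * curvatures + rng.normal(0.0, 0.1, size=200)  # N

# Fit the inverse map force -> curvature by least squares (features: [f, 1]).
X = np.column_stack([forces, np.ones_like(forces)])
coef, *_ = np.linalg.lstsq(X, curvatures, rcond=None)

# Estimate curvature from a new proximal force reading.
f_new = 5.0
kappa_hat = coef[0] * f_new + coef[1]
print(round(kappa_hat, 2))  # close to 5.0 / 2.5 = 2.0
```

A neural network replaces the linear fit when the force-curvature relation is nonlinear or when the input is a whole force-vs-displacement profile rather than a single reading, which is the regime the paper targets.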

