Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Page 1: showing papers 1-9 of 9.

Surface Electromyography-Controlled Automobile Steering Assistance.

  • Edric John Cruz Nacpil‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2020‎

Disabilities of the upper limb, such as hemiplegia or upper limb amputation, can limit automobile drivers to steering with one healthy arm. For the benefit of these drivers, recent studies have developed prototype interfaces that realized surface electromyography (sEMG)-controlled steering assistance with path-following accuracy that has been validated with driving simulations. In contrast, the current study expands the application of sEMG-controlled steering assistance by validating the Myo armband, a mass-produced sEMG-based interface, with respect to the path-following accuracy of a commercially available automobile. It was hypothesized that one-handed remote steering with the Myo armband would be comparable or superior to the conventional operation of the automobile steering wheel. Although results of low-speed field testing indicate that the Myo armband had lower path-following accuracy than the steering wheel during a 90° turn and wide U-turn at twice the minimum turning radius, the Myo armband had superior path-following accuracy for a narrow U-turn at the minimum turning radius and a 45° turn. Given its overall comparability to the steering wheel, the Myo armband could be feasibly applied in future automobile studies.


Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

  • Rizwan Ali Naqvi‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2018‎

A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various ambient light changes, reflections on glasses surfaces, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia Gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
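The PCCR baseline the abstract contrasts against reduces to a simple geometric feature: the vector from the corneal-reflection (glint) center to the pupil center in the eye image. A minimal Python sketch with illustrative pixel coordinates; detecting the two centers is the hard part and is not shown:

```python
# Hedged sketch of the PCCR gaze feature: the vector from the corneal
# reflection (glint) to the pupil center in image coordinates. The
# coordinates below are illustrative, not from the paper.

def pccr_vector(pupil_center, glint_center):
    """Pupil-center corneal-reflection vector used as the gaze feature."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])
```

The abstract's point is that this feature degrades under glasses reflections and blur, motivating the calibration-free CNN approach instead.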


Driving Stress Detection Using Multimodal Convolutional Neural Networks with Nonlinear Representation of Short-Term Physiological Signals.

  • Jaewon Lee‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Mental stress can lead to traffic accidents by reducing a driver's concentration or increasing fatigue while driving. In recent years, demand for methods to detect drivers' stress in advance to prevent dangerous situations has increased. Thus, we propose a novel method for detecting driving stress using nonlinear representations of short-term (30 s or less) physiological signals for multimodal convolutional neural networks (CNNs). Specifically, from hand/foot galvanic skin response (HGSR, FGSR) and heart rate (HR) short-term input signals, first, we generate corresponding two-dimensional nonlinear representations called continuous recurrence plots (Cont-RPs). Second, from the Cont-RPs, we use multimodal CNNs to automatically extract FGSR, HGSR, and HR signal representative features that can effectively differentiate between stressed and relaxed states. Lastly, we concatenate the three extracted features into one integrated representation vector, which we feed to a fully connected layer to perform classification. For the evaluation, we use a public stress dataset collected from actual driving environments. Experimental results show that the proposed method demonstrates superior performance for 30-s signals, with an overall accuracy of 95.67%, an approximately 2.5-3% improvement compared with that of previous works. Additionally, for 10-s signals, the proposed method achieves 92.33% classification accuracy, which is similar to or better than the performance of other methods using long-term signals (over 100 s).
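The continuous recurrence plot (Cont-RP) step can be sketched as a time-delay embedding followed by a pairwise-distance matrix. A minimal Python illustration; the embedding dimension and delay are assumed values, not the paper's settings:

```python
# Hedged sketch of a continuous recurrence plot (Cont-RP) from a 1-D
# physiological signal. Embedding parameters are illustrative choices.

def embed(signal, dim=3, delay=2):
    """Time-delay embedding: map a 1-D series to dim-dimensional states."""
    n = len(signal) - (dim - 1) * delay
    return [[signal[i + k * delay] for k in range(dim)] for i in range(n)]

def continuous_recurrence_plot(signal, dim=3, delay=2):
    """Pairwise Euclidean distances between embedded states.

    Unlike a binary recurrence plot, no threshold is applied, so the
    matrix keeps continuous distance values (hence 'continuous' RP).
    """
    states = embed(signal, dim, delay)
    return [[sum((a - b) ** 2 for a, b in zip(s, t)) ** 0.5 for t in states]
            for s in states]
```

The resulting matrix can be saved as a grayscale image and fed to one CNN branch per signal (FGSR, HGSR, HR), matching the multimodal setup the abstract describes.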


Human-Like Obstacle Avoidance Trajectory Planning and Tracking Model for Autonomous Vehicles That Considers the Driver's Operation Characteristics.

  • Qinyu Sun‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2020‎

Developing a human-like autonomous driving system has gained increasing amounts of attention from both technology companies and academic institutions, as it can improve the interpretability and acceptance of the autonomous system. Planning a safe and human-like obstacle avoidance trajectory is one of the critical issues for the development of autonomous vehicles (AVs). However, when designing automatic obstacle avoidance systems, few studies have focused on the obstacle avoidance characteristics of human drivers. This paper aims to develop an obstacle avoidance trajectory planning and trajectory tracking model for AVs that is consistent with the characteristics of human drivers' obstacle avoidance trajectory. Therefore, a modified artificial potential field (APF) model was established by adding a road boundary repulsive potential field and ameliorating the obstacle repulsive potential field based on the traditional APF model. The model predictive control (MPC) algorithm was combined with the APF model to make the planning model satisfy the kinematic constraints of the vehicle. In addition, a human driver's obstacle avoidance experiment was implemented based on a six-degree-of-freedom driving simulator equipped with multiple sensors to obtain the drivers' operation characteristics and provide a basis for parameter confirmation of the planning model. Then, a linear time-varying MPC algorithm was employed to construct the trajectory tracking model. Finally, a co-simulation model based on CarSim/Simulink was established for off-line simulation testing, and the results indicated that the proposed trajectory planning controller and the trajectory tracking controller were more human-like under the premise of ensuring the safety and comfort of the obstacle avoidance operation, providing a foundation for the development of AVs.
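A minimal sketch of the modified APF idea: a goal attraction, an obstacle repulsion with a bounded influence radius, and a road-boundary term that pushes the vehicle back toward the lane center. All gains and geometry are illustrative assumptions, and the MPC layer is not shown:

```python
import math

# Hedged sketch of a modified artificial potential field (APF): goal
# attraction plus obstacle and road-boundary repulsion, loosely following
# the abstract. Gains and the influence radius are made-up values.

def apf_force(pos, goal, obstacle, lane_half_width=3.5,
              k_att=1.0, k_rep=2.0, rho0=5.0, k_bound=1.0):
    """Return the (fx, fy) steering force at position pos."""
    # Attractive force pulls the vehicle toward the goal point.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Obstacle repulsion acts only inside the influence radius rho0.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    rho = math.hypot(dx, dy)
    if 0 < rho < rho0:
        gain = k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 2
        fx += gain * dx / rho
        fy += gain * dy / rho
    # Road-boundary repulsion grows as the vehicle drifts from the lane
    # centerline (y = 0) toward either boundary.
    fy -= k_bound * pos[1] / max(lane_half_width - abs(pos[1]), 1e-6)
    return fx, fy
```

In the paper's pipeline the force field shapes candidate trajectories, which MPC then constrains to the vehicle's kinematics; this sketch only shows the field itself.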


A Hybrid Approach for Turning Intention Prediction Based on Time Series Forecasting and Deep Learning.

  • Hailun Zhang‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2020‎

At an intersection with complex traffic flow, the early detection of the intention of drivers in surrounding vehicles can enable advanced driver assistance systems (ADAS) to warn the driver in advance or prompt its subsystems to assess the risk and intervene early. Although different drivers show various driving characteristics, the kinematic parameters of human-driven vehicles can be used as a predictor for predicting the driver's intention within a short time. In this paper, we propose a new hybrid approach for vehicle behavior recognition at intersections based on time series prediction and deep learning networks. First, the lateral position, longitudinal position, speed, and acceleration of the vehicle are predicted using the online autoregressive integrated moving average (ARIMA) algorithm. Next, a variant of the long short-term memory network, called the bidirectional long short-term memory (Bi-LSTM) network, is used to detect the vehicle's turning behavior using the predicted parameters, as well as the derived parameters, i.e., the lateral velocity, lateral acceleration, and heading angle. The validity of the proposed method is verified at real intersections using the public driving data of the next generation simulation (NGSIM) project. The results of the turning behavior detection show that the proposed hybrid approach exhibits significant improvement over a conventional algorithm; the average recognition rates are 94.2% and 93.5% at 2 s and 1 s, respectively, before initiating the turning maneuver.
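The derived parameters the abstract lists (lateral velocity, lateral acceleration, heading angle) can be sketched as finite differences over the ARIMA-predicted positions. A hedged Python illustration; the time step dt and the coordinate convention are assumptions, and the ARIMA and Bi-LSTM stages are not shown:

```python
import math

# Hedged sketch of the feature-derivation step between the ARIMA
# forecaster and the Bi-LSTM classifier. dt and the axis convention
# (x lateral, y longitudinal) are illustrative assumptions.

def derive_features(xs, ys, dt=0.1):
    """Derive (lateral velocity, lateral acceleration, heading angle)
    per time step from predicted lateral (xs) and longitudinal (ys)
    positions, using backward finite differences."""
    features = []
    for i in range(2, len(xs)):
        lat_v = (xs[i] - xs[i - 1]) / dt                 # lateral velocity
        lat_v_prev = (xs[i - 1] - xs[i - 2]) / dt
        lat_a = (lat_v - lat_v_prev) / dt                # lateral acceleration
        heading = math.atan2(xs[i] - xs[i - 1],          # heading angle
                             ys[i] - ys[i - 1])          # vs. longitudinal axis
        features.append((lat_v, lat_a, heading))
    return features
```

For a vehicle drifting laterally before a turn, the heading angle and lateral velocity grow away from zero, which is the signal the Bi-LSTM stage exploits.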


Inferring the Driver's Lane Change Intention through LiDAR-Based Environment Analysis Using Convolutional Neural Networks.

  • Alberto Díaz-Álvarez‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Most tactical manoeuvres during driving require a certain understanding of the surrounding environment from which to devise our future behaviour. In this paper, a Convolutional Neural Network (CNN) approach is used to model lane change behaviour and identify when a driver is going to perform this manoeuvre. To that end, a slightly modified CNN architecture is adapted to both spatial (i.e., surrounding environment) and non-spatial (i.e., the rest of the variables, such as relative speed to the front vehicle) input variables. Anticipating a driver's lane change intention means it is possible to use this information as a new source of data in a wide range of scenarios. One example of such scenarios might be decision-making support for human drivers through Advanced Driver Assistance Systems (ADAS) fed with the data of the surrounding cars in an inter-vehicular network. Another example might be its use in autonomous vehicles, using the data of a specific driver profile to make automated driving more human-like. Several CNN architectures have been tested in a simulation environment to assess their performance. Results show that the selected architecture provides a higher degree of accuracy than random guessing (i.e., assigning a class randomly to each observation in the data set), and it can capture subtle differences in behaviour between different driving profiles.


Electric Bus Pedal Misapplication Detection Based on Phase Space Reconstruction Method.

  • Aihong Lyu‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2023‎

Owing to their environmental benefits, electric buses are gradually replacing traditional fuel buses. Several previous studies have found that accidents related to electric vehicles are linked to Unintended Acceleration (UA), which is mostly caused by the driver pressing the wrong pedal. Therefore, this study proposed a Model for Detecting Pedal Misapplication in Electric Buses (MDPMEB). In this work, natural driving experiments for urban electric buses and pedal misapplication simulation experiments were carried out in a closed field; furthermore, a phase space reconstruction method based on chaos theory was introduced to map sequence data to a high-dimensional space in order to produce normal-braking and pedal-misapplication image datasets. Based on these findings, a modified Swin Transformer network was built. To prevent the model from overfitting on small sample data and to improve its generalization ability, it was pre-trained using a publicly available dataset, and the weights of the prior knowledge model were loaded into the model for training. The proposed model was also compared to machine learning and Convolutional Neural Network (CNN) algorithms. This study showed that the model was able to detect normal braking and pedal misapplication behavior accurately and quickly; the accuracy on the test dataset was 97.58%, which is 9.17% and 4.5% higher than the machine learning and CNN algorithms, respectively.
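The phase space reconstruction step can be sketched as a Takens-style time-delay embedding of the pedal signal into 2-D points that are then rasterized into an image for the downstream network. The delay, grid size, and normalization below are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch of phase space reconstruction for a 1-D pedal signal:
# embed each sample as the pair (x(t), x(t + delay)) and rasterize the
# resulting point cloud into a small image. Parameters are illustrative.

def phase_space_image(signal, delay=3, grid=8):
    """Embed signal as (x(t), x(t+delay)) pairs and count hits per pixel."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    image = [[0] * grid for _ in range(grid)]
    for t in range(len(signal) - delay):
        # Map each 2-D phase-space point to a pixel and accumulate.
        col = min(int((signal[t] - lo) / span * grid), grid - 1)
        row = min(int((signal[t + delay] - lo) / span * grid), grid - 1)
        image[row][col] += 1
    return image
```

Normal braking and pedal misapplication produce differently shaped point clouds in this space, which is what lets an image classifier such as a Swin Transformer separate the two.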


A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control.

  • Jihun Kim‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2023‎

As autonomous vehicles (AVs) are advancing to higher levels of autonomy and performance, the associated technologies are becoming increasingly diverse. Lane-keeping systems (LKS), corresponding to a key functionality of AVs, considerably enhance driver convenience. With drivers increasingly relying on autonomous driving technologies, the importance of safety features, such as fail-safe mechanisms in the event of sensor failures, has gained prominence. Therefore, this paper proposes a reinforcement learning (RL) control method for lane-keeping, which uses surrounding object information derived through LiDAR sensors instead of camera sensors for LKS. This approach uses surrounding vehicle and object information as observations for the RL framework to maintain the vehicle's current lane. The learning environment is established by integrating simulation tools, such as IPG CarMaker, which incorporates vehicle dynamics, and MATLAB Simulink for data analysis and RL model creation. To further validate the applicability of the LiDAR sensor data in real-world settings, Gaussian noise is introduced in the virtual simulation environment to mimic sensor noise in actual operational conditions.
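Two of the described ingredients can be sketched in a few lines: Gaussian noise injected into simulated LiDAR ranges, and a toy lane-keeping reward built from the gaps to surrounding objects. The noise level and the reward shaping are illustrative assumptions, not the paper's design:

```python
import random

# Hedged sketch of two pieces the abstract describes: corrupting ideal
# LiDAR ranges with Gaussian sensor noise, and a simple lane-keeping
# reward from surrounding-object observations. sigma and the reward
# shape are made-up values for illustration.

def noisy_ranges(ranges, sigma=0.05, rng=random):
    """Add zero-mean Gaussian noise to each range; clamp at zero."""
    return [max(r + rng.gauss(0.0, sigma), 0.0) for r in ranges]

def lane_keeping_reward(left_gap, right_gap):
    """Reward is highest (zero) when the gaps to the nearest objects or
    boundaries on either side are balanced, i.e., the ego car is centered."""
    return -abs(left_gap - right_gap)
```

In the paper's setup the observations come from IPG CarMaker via Simulink; here the point is only that the RL agent sees noisy surrounding-object gaps rather than camera lane markings.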


Design and Evaluation of a Surface Electromyography-Controlled Steering Assistance Interface.

  • Edric John Cruz Nacpil‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2019‎

Millions of drivers could experience shoulder muscle overload when rapidly rotating steering wheels and reduced steering ability at increased steering wheel angles. In order to address these issues for drivers with disability, surface electromyography (sEMG) sensors measuring biceps brachii muscle activity were incorporated into a steering assistance system for remote steering wheel rotation. The path-following accuracy of the sEMG interface with respect to a game steering wheel was evaluated through driving simulator trials. Human participants executed U-turns with differing radii of curvature. For a radius of curvature equal to the minimum vehicle turning radius of 3.6 m, the sEMG interface had significantly greater accuracy than the game steering wheel, with intertrial median lateral errors of 0.5 m and 1.2 m, respectively. For a U-turn with a radius of 7.2 m, the sEMG interface and game steering wheel were comparable in accuracy, with respective intertrial median lateral errors of 1.6 m and 1.4 m. The findings of this study could be utilized to realize accurate sEMG-controlled automobile steering for persons with disability.


