Searching across hundreds of databases



This service searches only literature that cites resources. Please be aware that the searchable corpus is limited to documents containing RRIDs and does not include all open-access literature.


Page 1: showing papers 1–7 of 7

Save Muscle Information-Unfiltered EEG Signal Helps Distinguish Sleep Stages.

  • Gi-Ren Liu‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2020‎

Based on well-established biopotential theory, we hypothesize that the high-frequency spectral content of the EEG signal, such as components above 100 Hz recorded by an off-the-shelf EEG sensor, contains muscle tone information. We show that an existing automatic sleep stage annotation algorithm can be improved by taking this information into account. This result suggests that, where possible, the EEG signal should be sampled at a high rate so as to preserve as much spectral information as possible.
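As a rough illustration (not the authors' code) of why the sampling rate matters: content above 100 Hz survives only if the Nyquist frequency exceeds it, and the power in a band can be estimated directly from an FFT. The sampling rate, band edges, and toy signal below are assumptions:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Estimate total spectral power of `signal` between f_lo and f_hi (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()

# Toy "EEG" sampled at 500 Hz: a 10 Hz cortical rhythm plus a weak
# 150 Hz component standing in for muscle (EMG) contamination.
fs = 500
t = np.arange(0, 10, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)

p_low = band_power(x, fs, 0.5, 100)   # conventional EEG band
p_high = band_power(x, fs, 100, 250)  # would be lost at a low sampling rate
```

At a 200 Hz sampling rate the 150 Hz component would alias rather than appear in `p_high`, which is the information loss the abstract warns against.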


The Virtual Sleep Lab-A Novel Method for Accurate Four-Class Sleep Staging Using Heart-Rate Variability from Low-Cost Wearables.

  • Pavlos Topalidis‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2023‎

Sleep staging based on polysomnography (PSG) performed by human experts is the de facto "gold standard" for the objective measurement of sleep. PSG and manual sleep staging are, however, personnel-intensive and time-consuming, and it is thus impractical to monitor a person's sleep architecture over extended periods. Here, we present a novel, low-cost, automated, deep-learning alternative to PSG sleep staging that provides a reliable epoch-by-epoch four-class sleep staging approach (Wake, Light [N1 + N2], Deep, REM) based solely on inter-beat-interval (IBI) data. Having trained a multi-resolution convolutional neural network (MCNN) on the IBIs of 8898 full-night manually sleep-staged recordings, we tested the MCNN on sleep classification using the IBIs of two low-cost (
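The inter-beat intervals (IBIs) that such a model consumes are simply the successive differences of detected heartbeat (R-peak) timestamps. A minimal sketch; the timestamps and the plausibility thresholds are made-up illustrations, not the paper's values:

```python
import numpy as np

# Hypothetical R-peak times in seconds from a wearable's beat detector.
r_peaks = np.array([0.00, 0.82, 1.66, 2.47, 3.31, 4.12])

# Inter-beat intervals, conventionally expressed in milliseconds.
ibi_ms = np.diff(r_peaks) * 1000.0

# Simple artifact screen: keep only physiologically plausible IBIs
# (here 300-2000 ms, i.e. roughly 30-200 bpm; an assumed range).
clean = ibi_ms[(ibi_ms >= 300) & (ibi_ms <= 2000)]
```

A sequence of such intervals, segmented into 30 s epochs, is the only input the four-class model needs.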


A Systematic Review of Sensing Technologies for Wearable Sleep Staging.

  • Syed Anas Imtiaz‎
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Designing wearable systems for sleep detection and staging is extremely challenging due to the numerous constraints associated with sensing, usability, accuracy, and regulatory requirements. Several researchers have explored the use of signals from a subset of the sensors used in polysomnography (PSG), whereas others have demonstrated the feasibility of alternative sensing modalities. In this paper, a systematic review of the different sensing modalities that have been used for wearable sleep staging is presented. Based on a review of 90 papers, 13 different sensing modalities are identified. Each sensing modality is explored to identify the signals that can be obtained from it, the sleep stages that can be reliably identified, the classification accuracy of systems and methods using the sensing modality, and the usability constraints of the sensor in a wearable system. The review concludes that the two most common sensing modalities in use are those based on electroencephalography (EEG) and photoplethysmography (PPG). EEG-based systems are the most accurate, with EEG being the only sensing modality capable of identifying all the stages of sleep. PPG-based systems are much simpler to use and better suited for wearable monitoring but are unable to identify all the sleep stages.


Validation of Visually Identified Muscle Potentials during Human Sleep Using High Frequency/Low Frequency Spectral Power Ratios.

  • Mo H Modarres‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Surface electromyography (EMG), typically recorded from muscle groups such as the mentalis (chin/mentum) and anterior tibialis (lower leg/crus), is often performed in human subjects undergoing overnight polysomnography. Such signals have great importance, not only in aiding in the definitions of normal sleep stages, but also in defining certain disease states with abnormal EMG activity during rapid eye movement (REM) sleep, e.g., REM sleep behavior disorder and parkinsonism. Gold standard approaches to evaluation of such EMG signals in the clinical realm are typically qualitative, and therefore burdensome and subject to individual interpretation. We originally developed a digitized, signal processing method using the ratio of high frequency to low frequency spectral power and validated this method against expert human scorer interpretation of transient muscle activation of the EMG signal. Herein, we further refine and validate our initial approach, applying this to EMG activity across 1,618,842 s of polysomnography recorded REM sleep acquired from 461 human participants. These data demonstrate a significant association between visual interpretation and the spectrally processed signals, indicating a highly accurate approach to detecting and quantifying abnormally high levels of EMG activity during REM sleep. Accordingly, our automated approach to EMG quantification during human sleep recording is practical, feasible, and may provide a much-needed clinical tool for the screening of REM sleep behavior disorder and parkinsonism.
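The high-frequency/low-frequency spectral power ratio at the heart of this method can be sketched in a few lines. The band edges, sampling rate, and synthetic signals below are illustrative assumptions, not the authors' validated parameters:

```python
import numpy as np

def hf_lf_ratio(emg, fs, lf=(10, 45), hf=(55, 95)):
    """Ratio of high- to low-frequency spectral power in one EMG epoch.
    Band edges here are illustrative, not the paper's validated choices."""
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(emg)) ** 2
    power = lambda band: psd[(freqs >= band[0]) & (freqs < band[1])].sum()
    return power(hf) / power(lf)

fs = 200
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
atonia = np.sin(2 * np.pi * 20 * t)                 # mostly low-frequency
burst = atonia + 2.0 * rng.standard_normal(t.size)  # broadband muscle burst

r_atonia = hf_lf_ratio(atonia, fs)
r_burst = hf_lf_ratio(burst, fs)
```

Transient muscle activation adds broadband power, so the ratio rises during bursts relative to atonia; thresholding it epoch-by-epoch yields the automated score described above.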


Decoding Brain Responses to Names and Voices across Different Vigilance States.

  • Tomasz Wielek‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2021‎

Past research has demonstrated differential brain responses during sleep, especially to variations in the paralinguistic properties of auditory stimuli, suggesting that such stimuli can still be processed "offline". However, the nature of the underlying mechanisms remains unclear. Here, we therefore used multivariate pattern analyses to directly test the similarities in brain activity among different sleep stages (non-rapid eye movement stages N1-N3, rapid eye movement [REM] sleep, and wake). We varied stimulus salience by manipulating subjective (own vs. unfamiliar name) and paralinguistic (familiar vs. unfamiliar voice) salience in 16 healthy sleepers during an 8-h sleep opportunity. Paralinguistic salience (i.e., familiar vs. unfamiliar voice) was reliably decoded from EEG response patterns during both N2 and N3 sleep. Importantly, classifiers trained on N2 and N3 data generalized to N3 and N2, respectively, suggesting a similar processing mode in these states. Moreover, projecting the classifiers' weights using a forward model revealed similar fronto-central topographical patterns in NREM stages N2 and N3. Finally, we found no generalization from wake to any sleep stage (or vice versa), suggesting that "processing modes", or the overall processing architecture with respect to relevant oscillations and/or networks, change substantially from wake to sleep. However, the results point to a single, rather uniform NREM-specific mechanism involved in (auditory) salience detection during sleep.
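The cross-stage generalization test (train a classifier on epochs from one stage, evaluate it on another) can be sketched with any classifier. Here a nearest-class-mean classifier on synthetic data stands in for the paper's MVPA pipeline; the channel count, class separation, and labels are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_epochs(n, n_ch=8, shift=1.0):
    """Synthetic 'EEG patterns' for two conditions (e.g. familiar vs.
    unfamiliar voice); the class difference lives in the channel means."""
    a = rng.standard_normal((n, n_ch))
    b = rng.standard_normal((n, n_ch)) + shift
    return np.vstack([a, b]), np.array([0] * n + [1] * n)

def fit_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(means, X):
    d = np.stack([np.linalg.norm(X - m, axis=1) for m in means.values()], axis=1)
    return np.array(list(means))[d.argmin(axis=1)]

# "N2" and "N3" epochs share the same class structure here by construction,
# mimicking the similar processing mode the study reports.
X_n2, y_n2 = make_epochs(100)
X_n3, y_n3 = make_epochs(100)

clf = fit_means(X_n2, y_n2)
within = (predict(clf, X_n2) == y_n2).mean()  # decode within N2
across = (predict(clf, X_n3) == y_n3).mean()  # generalize N2 -> N3
```

If the two stages engaged different processing architectures, `across` would fall to chance even while `within` stayed high; that is the contrast the study exploits between NREM stages and wake.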


Cross-Domain Transfer of EEG to EEG or ECG Learning for CNN Classification Models.

  • Chia-Yen Yang‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2023‎

Electroencephalography (EEG) is often used to evaluate several types of neurological brain disorders because of its noninvasiveness and high temporal resolution. In contrast to electrocardiography (ECG), however, EEG can be uncomfortable and inconvenient for patients. Moreover, deep-learning techniques require a large dataset and a long training time when trained from scratch. Therefore, in this study, EEG-EEG and EEG-ECG transfer learning strategies were applied to explore their effectiveness for training simple cross-domain convolutional neural networks (CNNs) used in seizure prediction and sleep staging systems, respectively. The seizure model detected interictal and preictal periods, whereas the sleep staging model classified signals into five stages. The patient-specific seizure prediction model with six frozen layers achieved 100% accuracy for seven out of nine patients and required only 40 s of training time for personalization. Moreover, the cross-signal transfer learning EEG-ECG model for sleep staging achieved an accuracy approximately 2.5% higher than that of the ECG model; additionally, the training time was reduced by >50%. In summary, transfer learning from an EEG model to produce personalized models for a more convenient signal can both reduce the training time and increase the accuracy; moreover, challenges such as data insufficiency, variability, and inefficiency can be effectively overcome.
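Freezing layers during transfer learning means excluding their parameters from gradient updates while a new head is trained on the target task. A minimal numpy sketch of the idea, in which a fixed random projection stands in for the pretrained layers; the data, architecture, and hyperparameters are illustrative, not the paper's CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed transform standing in for the
# frozen layers; it is never updated during target-task training.
W_frozen = 0.1 * rng.standard_normal((16, 32))
features = lambda x: np.tanh(x @ W_frozen)

# Toy target task: binary label depending on the input's mean sign.
X = rng.standard_normal((200, 16))
y = (X.mean(axis=1) > 0).astype(float)

# Only the new logistic head (w, b) is trained.
w = np.zeros(32)
b = 0.0
lr = 0.5
F = features(X)  # frozen features computed once
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid output
    w -= lr * (F.T @ (p - y)) / len(y)      # gradient step on head only
    b -= lr * (p - y).mean()

acc = (((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y).mean()
```

Because the frozen features are computed once and only the small head is optimized, personalization is fast, which is the mechanism behind the 40 s fine-tuning time reported above.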


Automated Method for Discrimination of Arrhythmias Using Time, Frequency, and Nonlinear Features of Electrocardiogram Signals.

  • Shirin Hajeb-Mohammadalipour‎ et al.
  • Sensors (Basel, Switzerland)‎
  • 2018‎

We developed an automated approach to differentiate between different types of arrhythmic episodes in electrocardiogram (ECG) signals, because, in real-life scenarios, a software application does not know in advance the type of arrhythmia a patient experiences. Our approach has four main stages: (1) Classification of ventricular fibrillation (VF) versus non-VF segments—including atrial fibrillation (AF), ventricular tachycardia (VT), normal sinus rhythm (NSR), and sinus arrhythmias, such as bigeminy, trigeminy, quadrigeminy, couplet, triplet—using four image-based phase plot features, one frequency domain feature, and the Shannon entropy index. (2) Classification of AF versus non-AF segments. (3) Premature ventricular contraction (PVC) detection on every non-AF segment, using a time domain feature, a frequency domain feature, and two features that characterize the nonlinearity of the data. (4) Determination of the PVC patterns, if present, to categorize distinct types of sinus arrhythmias and NSR. We used the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, Creighton University’s VT arrhythmia database, the MIT-BIH atrial fibrillation database, and the MIT-BIH malignant ventricular arrhythmia database to test our algorithm. Binary decision tree (BDT) and support vector machine (SVM) classifiers were used in both stage 1 and stage 3. We also compared our proposed algorithm’s performance to other published algorithms. Our VF detection algorithm was accurate, as in balanced datasets (and unbalanced, in parentheses) it provided an accuracy of 95.1% (97.1%), sensitivity of 94.5% (91.1%), and specificity of 94.2% (98.2%). The AF detection was accurate, as the sensitivity and specificity in balanced datasets (and unbalanced, in parentheses) were found to be 97.8% (98.6%) and 97.21% (97.1%), respectively. 
Our PVC detection algorithm was also robust, as the accuracy, sensitivity, and specificity were found to be 99% (98.1%), 98.0% (96.2%), and 98.4% (99.4%), respectively, for balanced and (unbalanced) datasets.
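Of the features listed in stage (1), the Shannon entropy index is straightforward to illustrate: disorganized rhythms such as VF spread signal amplitudes across many histogram bins, raising the entropy, while organized rhythms concentrate amplitudes near baseline. A sketch under assumed parameters; the bin count, sampling rate, and synthetic signals are illustrative, not the authors' settings:

```python
import numpy as np

def shannon_entropy(segment, n_bins=32):
    """Shannon entropy (bits) of the amplitude distribution of an ECG
    segment; the bin count is an illustrative choice."""
    hist, _ = np.histogram(segment, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

fs = 250
t = np.arange(0, 4, 1.0 / fs)
# Organized rhythm: sparse, spiky QRS-like pulses (most samples near zero).
nsr = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.001)
# Disorganized rhythm: broadband noise standing in for VF's chaotic waveform.
vf = np.random.default_rng(2).standard_normal(t.size)

h_nsr = shannon_entropy(nsr)
h_vf = shannon_entropy(vf)
```

Thresholding such an entropy value, together with the phase-plot and frequency-domain features, is what lets stage (1) flag VF segments before the later stages run.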



Publications Per Year (interactive chart: publication count by year)