Searching across hundreds of databases

This service exclusively searches for literature that cites resources. Please be aware that the total number of searchable documents is limited to those containing RRIDs and does not include all open-access literature.

Search

Page 1: showing papers 1–20 of 2,424.

Intraoperative video recording in otolaryngology for surgical education: evolution and considerations.

  • Hannah L Brennan‎ et al.
  • Journal of otolaryngology - head & neck surgery = Le Journal d'oto-rhino-laryngologie et de chirurgie cervico-faciale‎
  • 2023‎

Otolaryngology is a surgical speciality well suited to the application of intraoperative video recording as an educational tool, considering the number of procedures within the speciality that utilize digital technology. Intraoperative recording has been utilized in endoscopic surgeries and in evaluating technique in mastoidectomy, myringotomy and grommet insertion. The impact of intraoperative video recording on otolaryngology education is vast, ranging from access to surgical videos for preparation outside the operating room to individualized coaching and assessment. The purpose of this project is to highlight the role of intraoperative video recording in otolaryngology training and elucidate the challenges and considerations associated with implementation.


dFRAME: A Video Recording-Based Analytical Method for Studying Feeding Rhythm in Drosophila.

  • Mengxia Niu‎ et al.
  • Frontiers in genetics‎
  • 2021‎

Animals, from insects to humans, exhibit obvious diurnal rhythmicity of feeding behavior. Serving as a genetic animal model, Drosophila has been reported to display feeding rhythms; however, related investigations are limited due to the lack of suitable and practical methods. Here, we present a video recording-based analytical method, namely, Drosophila Feeding Rhythm Analysis Method (dFRAME). Using our newly developed computer program, FlyFeeding, we extracted the movement track of individual flies and characterized their food-approaching behavior. To distinguish feeding and non-feeding events, we utilized high-magnification video recording to optimize our method by setting cut-off thresholds to eliminate the interference of non-feeding events. Furthermore, we verified that this method is applicable to both female and male flies and for all periods of the day. Using this method, we analyzed the long-term feeding status of wild-type and period mutant flies. The results recaptured previously reported feeding rhythms and revealed detailed profiles of feeding patterns in these flies under either light/dark cycles or constant dark environments. Together, our dFRAME method enables a long-term, stable, reliable, and subtle analysis of feeding behavior in Drosophila. High-throughput studies in this powerful genetic animal model will gain great insights into the molecular and neural mechanisms of feeding rhythms.
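
The cut-off thresholding described above, which separates genuine feeding from brief non-feeding visits to the food, can be sketched in a few lines. This is a minimal illustration only; the distance threshold, frame rate, and minimum bout length below are hypothetical placeholders, not values from the dFRAME method.

```python
import numpy as np

# Hypothetical cut-offs -- dFRAME derives its own thresholds from
# high-magnification recordings; these values are placeholders.
DIST_THRESHOLD_MM = 1.0   # a fly counts as "at the food" below this distance
MIN_BOUT_FRAMES = 15      # ignore approaches shorter than this many frames

def feeding_bouts(distance_to_food_mm):
    """Return (start_frame, end_frame) pairs where a tracked fly stays
    within DIST_THRESHOLD_MM of the food for at least MIN_BOUT_FRAMES."""
    near_food = distance_to_food_mm < DIST_THRESHOLD_MM
    bouts, start = [], None
    for i, flag in enumerate(near_food):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= MIN_BOUT_FRAMES:
                bouts.append((start, i))
            start = None
    if start is not None and len(near_food) - start >= MIN_BOUT_FRAMES:
        bouts.append((start, len(near_food)))
    return bouts

# Example: a toy distance trace with one sustained visit to the food.
track = np.concatenate([np.full(100, 5.0),   # far from food
                        np.full(40, 0.5),    # sustained visit (feeding bout)
                        np.full(100, 5.0)])
print(feeding_bouts(track))   # -> [(100, 140)]
```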


Gemvid, an open source, modular, automated activity recording system for rats using digital video.

  • Jean-Etienne Poirrier‎ et al.
  • Journal of circadian rhythms‎
  • 2006‎

Measurement of locomotor activity is a valuable tool for analysing factors influencing behaviour and for investigating brain function. Several methods have been described in the literature for measuring the amount of animal movement but most are flawed or expensive. Here, we describe an open source, modular, low-cost, user-friendly, highly sensitive, non-invasive system that records all the movements of a rat in its cage.


Contactless facial video recording with deep learning models for the detection of atrial fibrillation.

  • Yu Sun‎ et al.
  • Scientific reports‎
  • 2022‎

Atrial fibrillation (AF) is often asymptomatic and paroxysmal. Screening and monitoring are needed, especially for people at high risk. This study sought to use camera-based remote photoplethysmography (rPPG) with a deep convolutional neural network (DCNN) learning model for AF detection. All participants were classified into groups of AF, normal sinus rhythm (NSR) and other abnormality based on 12-lead ECG. They then underwent facial video recording for 10 min, with rPPG signals extracted and segmented into 30-s clips as inputs for training the DCNN models. Using a voting algorithm, a participant was predicted as AF if > 50% of their rPPG segments were determined to show AF rhythm by the model. Of the 453 participants (mean age 69.3 ± 13.0 years; 46% women), a total of 7320 segments (1969 AF, 1604 NSR and 3747 others) were analyzed by the DCNN models. The accuracy of the rPPG deep learning model for discriminating AF from NSR and other abnormalities was 90.0% and 97.1% for 30-s and 10-min recordings, respectively. This contactless, camera-based rPPG technique with a deep-learning model achieved high accuracy in discriminating AF from non-AF and may enable a feasible way to perform large-scale screening or monitoring in the future.
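
The participant-level decision rule quoted above (predict AF when more than 50% of a participant's 30-second rPPG segments are classified as AF) is a simple majority vote. A minimal sketch of that rule, assuming the per-segment labels have already been produced by the DCNN:

```python
def predict_participant(segment_labels):
    """Voting rule described in the abstract: predict AF when more than
    50% of a participant's 30-s rPPG segments are labelled 'AF'."""
    if not segment_labels:
        raise ValueError("no segments to vote on")
    af_fraction = segment_labels.count("AF") / len(segment_labels)
    return "AF" if af_fraction > 0.5 else "non-AF"

# Example: 20 hypothetical segment labels from a 10-minute recording.
print(predict_participant(["AF"] * 12 + ["NSR"] * 5 + ["other"] * 3))  # -> AF
```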


Assessment of temporal variations in adherence to NRP using video recording in the delivery room.

  • Amy J Sloane‎ et al.
  • Resuscitation plus‎
  • 2021‎

Video recording and video evaluation tools have been successfully used to evaluate neonatal resuscitation performance. The objective of our study was to evaluate differences in Neonatal Resuscitation Program (NRP) adherence at time of birth between three temporal resuscitative periods using scored video recordings.


Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video.

  • Lior Drukker‎ et al.
  • Scientific reports‎
  • 2021‎

Ultrasound is the primary modality for obstetric imaging and is highly sonographer dependent. Long training period, insufficient recruitment and poor retention of sonographers are among the global challenges in the expansion of ultrasound use. For the past several decades, technical advancements in clinical obstetric ultrasound scanning have largely concerned improving image quality and processing speed. By contrast, sonographers have been acquiring ultrasound images in a similar fashion for several decades. The PULSE (Perception Ultrasound by Learning Sonographer Experience) project is an interdisciplinary multi-modal imaging study aiming to offer clinical sonography insights and transform the process of obstetric ultrasound acquisition and image analysis by applying deep learning to large-scale multi-modal clinical data. A key novelty of the study is that we record full-length ultrasound video with concurrent tracking of the sonographer's eyes, voice and the transducer while performing routine obstetric scans on pregnant women. We provide a detailed description of the novel acquisition system and illustrate how our data can be used to describe clinical ultrasound. Being able to measure different sonographer actions or model tasks will lead to a better understanding of several topics including how to effectively train new sonographers, monitor the learning progress, and enhance the scanning workflow of experts.


High quality, high throughput, and low-cost simultaneous video recording of 60 animals in operant chambers using PiRATeMC.

  • Jarryd Ramborger‎ et al.
  • bioRxiv : the preprint server for biology‎
  • 2023‎

The development of Raspberry Pi-based recording devices for video analyses of drug self-administration studies has proven promising in terms of affordability, customizability, and capacity to extract in-depth behavioral patterns. Yet most video recording systems are limited to a few cameras, making them incompatible with large-scale studies.


EPG combined with micro-CT and video recording reveals new insights on the feeding behavior of Philaenus spumarius.

  • Daniele Cornara‎ et al.
  • PloS one‎
  • 2018‎

The meadow spittlebug Philaenus spumarius plays a key role in the transmission of the bacterium Xylella fastidiosa to olive in Apulia (South Italy). Currently, available data on P. spumarius feeding behavior is limited, and a real-time observation of the different steps involved in stylet insertion, exploratory probes, and ingestion, has never been carried out. Therefore, we performed an EPG-assisted characterization of P. spumarius female feeding behavior on olive, in order to detect and analyze the main EPG waveforms describing their amplitude, frequency, voltage level, and electrical origin of the traces during stylet penetration in plant tissues. Thereafter, each of the main waveforms was correlated with specific biological activities, through video recording and analysis of excretion by adults and excretion/secretion by nymphs. Furthermore, the specific stylet tips position within the plant tissues during each of the waveforms observed was assessed by microcomputer tomography (micro-CT). Additional EPG-recordings were carried out with males of P. spumarius on olive, in order to assess possible sex-related differences. P. spumarius feeding behavior can be described by five main distinct waveforms: C (pathway), Xc (xylem contact/pre-ingestion), Xi (xylem sap ingestion), R (resting), N (interruption within xylem phase). Compared to males, females require shorter time to begin the first probe, and their Xi phase is significantly longer. Furthermore, considering the single waveform events, males on olive exhibit longer np and R compared to females.


Effect of a Neonatal Resuscitation Course on Healthcare Providers' Performances Assessed by Video Recording in a Low-Resource Setting.

  • Daniele Trevisanuto‎ et al.
  • PloS one‎
  • 2015‎

We assessed the effect of an adapted neonatal resuscitation program (NRP) course on healthcare providers' performances in a low-resource setting through the use of video recording.


Contactless recording of sleep apnea and periodic leg movements by nocturnal 3-D-video and subsequent visual perceptive computing.

  • Christian Veauthier‎ et al.
  • Scientific reports‎
  • 2019‎

Contactless measurements during the night by a 3-D-camera are less time-consuming in comparison to polysomnography because they do not require sophisticated wiring. However, it is not clear what the diagnostic benefit and accuracy of this technology might be. We investigated 59 persons simultaneously by polysomnography and by 3-D-camera with visual perceptive computing (19 patients with restless legs syndrome (RLS), 21 patients with obstructive sleep apnea (OSA), and 19 healthy volunteers). There was a significant correlation between the apnea hypopnea index (AHI) measured by polysomnography and respiratory events measured with the 3-D-camera in OSA patients (r = 0.823; p < 0.001). The receiver operating characteristic curve yielded a sensitivity of 90% for OSA with a specificity of 71.4%. In RLS patients, 72.8% of leg movements confirmed by polysomnography could be detected by 3-D-video, and a significant moderate correlation was found between PLM measured by polysomnography and by the 3-D-camera (RLS: r = 0.654; p = 0.004). In total, 95.4% of the sleep epochs were correctly classified by the machine learning approach, but only 32.5% of awake epochs. Further studies should investigate whether this technique might be an alternative to home sleep testing in persons with an increased pre-test probability for OSA.
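
The sensitivity and specificity reported above are computed by treating polysomnography as the reference standard for each subject. A minimal sketch of that calculation; the counts below are contrived so the rates come out near the reported 90% and 71.4%, and are not the study's actual data:

```python
def sensitivity_specificity(reference, predicted):
    """Sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
    with the polysomnography result treated as ground truth."""
    tp = sum(r and p for r, p in zip(reference, predicted))
    fn = sum(r and not p for r, p in zip(reference, predicted))
    tn = sum((not r) and (not p) for r, p in zip(reference, predicted))
    fp = sum((not r) and p for r, p in zip(reference, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Contrived example: 20 OSA-positive and 14 OSA-negative subjects by PSG.
reference = [True] * 20 + [False] * 14
predicted = [True] * 18 + [False] * 2 + [False] * 10 + [True] * 4
sens, spec = sensitivity_specificity(reference, predicted)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 90.0%, 71.4%
```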


Video Recording Patients for Direct Care Purposes: Systematic Review and Narrative Synthesis of International Empirical Studies and UK Professional Guidance.

  • Rachael Lear‎ et al.
  • Journal of medical Internet research‎
  • 2023‎

Video recordings of patients may offer advantages to supplement patient assessment and clinical decision-making. However, little is known about the practice of video recording patients for direct care purposes.


Video recording as an objective assessment tool of health worker performance in neonatal resuscitation at a district hospital in Pemba, Tanzania: a feasibility study.

  • Charlotte Carina Holm-Hansen‎ et al.
  • BMJ open‎
  • 2022‎

To assess the feasibility of using video recordings of neonatal resuscitation (NR) to evaluate the quality of care in a low-resource district hospital.


Telemetric recording of neuronal activity.

  • Uwe Jürgens‎ et al.
  • Methods (San Diego, Calif.)‎
  • 2006‎

A telemetric system is described which allows the wireless registration of extracellular neuronal activity and vocalization-associated skull vibrations in freely moving, socially living squirrel monkeys (Saimiri sciureus). The system consists of a carrier platform with numerous guiding tubes implanted on the skull. Custom-made microdrives are mounted on the platform, allowing the exploration of two electrode tracks at the same time. Commercially available quartz-insulated platinum-tungsten microelectrodes are used. The electrodes can be moved over a distance of 8-10 mm by turning a screw on the microdrive. Vocalization-associated skull vibrations are recorded with a piezo-ceramic element. The skull vibration signal and the signals from the two microelectrodes are fed into separate transmitters with different carrier frequencies. The signals are picked up by an antenna in the animal cage and are sent to three receivers in the central laboratory. Here, the signals are transferred via an analog/digital interface to a personal computer for data analysis and to a video recorder for long-term storage. The total weight of the head mount including carrier platform, microdrive, electrodes, skull vibration sensor, three transmitters, and protection cap is 32 g. The transmitters are powered with two rechargeable lithium batteries, allowing about 8 h of continuous recording. Reliable signal transmission is obtained over a distance of about 2 m. Recording stability allows the activity of specific neurons to be followed for up to several hours, with no movement artefacts during locomotion.


Home Video Telemetry vs inpatient telemetry: A comparative study looking at video quality.

  • Sutapa Biswas‎ et al.
  • Clinical neurophysiology practice‎
  • 2016‎

To compare the quality of home video recording with inpatient telemetry (IPT) to evaluate our current Home Video Telemetry (HVT) practice.


Lineage recording in human cerebral organoids.

  • Zhisong He‎ et al.
  • Nature methods‎
  • 2022‎

Induced pluripotent stem cell (iPSC)-derived organoids provide models to study human organ development. Single-cell transcriptomics enable highly resolved descriptions of cell states within these systems; however, approaches are needed to directly measure lineage relationships. Here we establish iTracer, a lineage recorder that combines reporter barcodes with inducible CRISPR-Cas9 scarring and is compatible with single-cell and spatial transcriptomics. We apply iTracer to explore clonality and lineage dynamics during cerebral organoid development and identify a time window of fate restriction as well as variation in neurogenic dynamics between progenitor neuron families. We also establish long-term four-dimensional light-sheet microscopy for spatial lineage recording in cerebral organoids and confirm regional clonality in the developing neuroepithelium. We incorporate gene perturbation (iTracer-perturb) and assess the effect of mosaic TSC2 mutations on cerebral organoid development. Our data shed light on how lineages and fates are established during cerebral organoid formation. More broadly, our techniques can be adapted in any iPSC-derived culture system to dissect lineage alterations during normal or perturbed development.


Recording electrical activity from the brain of behaving octopus.

  • Tamar Gutnick‎ et al.
  • Current biology : CB‎
  • 2023‎

Octopuses, which are among the most intelligent invertebrates,1,2,3,4 have no skeleton and eight flexible arms whose sensory and motor activities are at once autonomous and coordinated by a complex central nervous system.5,6,7,8 The octopus brain contains a very large number of neurons, organized into numerous distinct lobes, the functions of which have been proposed based largely on the results of lesioning experiments.9,10,11,12,13 In other species, linking brain activity to behavior is done by implanting electrodes and directly correlating electrical activity with observed animal behavior. However, because the octopus lacks any hard structure to which recording equipment can be anchored, and because it uses its eight flexible arms to remove any foreign object attached to the outside of its body, in vivo recording of electrical activity from untethered, behaving octopuses has thus far not been possible. Here, we describe a novel technique for inserting a portable data logger into the octopus and implanting electrodes into the vertical lobe system, such that brain activity can be recorded for up to 12 h from unanesthetized, untethered octopuses and can be synchronized with simultaneous video recordings of behavior. In the brain activity, we identified several distinct patterns that appeared consistently in all animals. While some resemble activity patterns in mammalian neural tissue, others, such as episodes of 2 Hz, large amplitude oscillations, have not been reported. By providing an experimental platform for recording brain activity in behaving octopuses, our study is a critical step toward understanding how the brain controls behavior in these remarkable animals.


Molecular Time Capsules Enable Transcriptomic Recording in Living Cells.

  • Mirae Parker‎ et al.
  • bioRxiv : the preprint server for biology‎
  • 2023‎

Live-cell transcriptomic recording can help reveal hidden cellular states that precede phenotypic transformation. Here we demonstrate the use of protein-based encapsulation for preserving samples of cytoplasmic RNAs inside living cells. These molecular time capsules (MTCs) can be induced to create time-stamped transcriptome snapshots, preserve RNAs after cellular transitions, and enable retrospective investigations of gene expression programs that drive distinct developmental trajectories. MTCs also open the possibility to uncover transcriptomes in difficult-to-reach conditions.


Live-seq enables temporal transcriptomic recording of single cells.

  • Wanze Chen‎ et al.
  • Nature‎
  • 2022‎

Single-cell transcriptomics (scRNA-seq) has greatly advanced our ability to characterize cellular heterogeneity1. However, scRNA-seq requires lysing cells, which impedes further molecular or functional analyses on the same cells. Here, we established Live-seq, a single-cell transcriptome profiling approach that preserves cell viability during RNA extraction using fluidic force microscopy2,3, thus allowing a cell's ground-state transcriptome to be coupled to its downstream molecular or phenotypic behaviour. To benchmark Live-seq, we used cell growth, functional responses and whole-cell transcriptome read-outs to demonstrate that Live-seq can accurately stratify diverse cell types and states without inducing major cellular perturbations. As a proof of concept, we show that Live-seq can be used to directly map a cell's trajectory by sequentially profiling the transcriptomes of individual macrophages before and after lipopolysaccharide (LPS) stimulation, and of adipose stromal cells pre- and post-differentiation. In addition, we demonstrate that Live-seq can function as a transcriptomic recorder by preregistering the transcriptomes of individual macrophages that were subsequently monitored by time-lapse imaging after LPS exposure. This enabled the unsupervised, genome-wide ranking of genes on the basis of their ability to affect macrophage LPS response heterogeneity, revealing basal Nfkbia expression level and cell cycle state as important phenotypic determinants, which we experimentally validated. Thus, Live-seq can address a broad range of biological questions by transforming scRNA-seq from an end-point to a temporal analysis approach.


Recording morphogen signals reveals origins of gastruloid symmetry breaking.

  • Harold M McNamara‎ et al.
  • bioRxiv : the preprint server for biology‎
  • 2023‎

When cultured in three dimensional spheroids, mammalian stem cells can reproducibly self-organize a single anterior-posterior axis and sequentially differentiate into structures resembling the primitive streak and tailbud. Whereas the embryo's body axes are instructed by spatially patterned extra-embryonic cues, it is unknown how these stem cell gastruloids break symmetry to reproducibly define a single anterior-posterior (A-P) axis. Here, we use synthetic gene circuits to trace how early intracellular signals predict cells' future anterior-posterior position in the gastruloid. We show that Wnt signaling evolves from a homogeneous state to a polarized state, and identify a critical 6-hour time period when single-cell Wnt activity predicts future cellular position, prior to the appearance of polarized signaling patterns or morphology. Single-cell RNA sequencing and live-imaging reveal that early Wnt-high and Wnt-low cells contribute to distinct cell types and suggest that axial symmetry breaking is driven by sorting rearrangements involving differential cell adhesion. We further extend our approach to other canonical embryonic signaling pathways, revealing that even earlier heterogeneity in TGFβ signaling predicts A-P position and modulates Wnt signaling during the critical time period. Our study reveals a sequence of dynamic cellular processes that transform a uniform cell aggregate into a polarized structure and demonstrates that a morphological axis can emerge out of signaling heterogeneity and cell movements even in the absence of exogenous patterning cues.


A compact multiphoton 3D imaging system for recording fast neuronal activity.

  • Dejan Vucinić‎ et al.
  • PloS one‎
  • 2007‎

We constructed a simple and compact imaging system designed specifically for the recording of fast neuronal activity in a 3D volume. The system uses an Yb:KYW femtosecond laser we designed for use with acousto-optic deflection. An integrated two-axis acousto-optic deflector, driven by digitally synthesized signals, can target locations in three dimensions. Data acquisition and the control of scanning are performed by a LeCroy digital oscilloscope. The total cost of construction was one order of magnitude lower than that of a typical Ti:sapphire system. The entire imaging apparatus, including the laser, fits comfortably onto a small rig for electrophysiology. Despite the low cost and simplicity, the convergence of several new technologies allowed us to achieve the following capabilities: i) full-frame acquisition at video rates suitable for patch clamping; ii) random access in under ten microseconds with dwelling ability in the nominal focal plane; iii) three-dimensional random access with the ability to perform fast volume sweeps at kilohertz rates; and iv) fluorescence lifetime imaging. We demonstrate the ability to record action potentials with high temporal resolution using intracellularly loaded potentiometric dye di-2-ANEPEQ. Our design proffers easy integration with electrophysiology and promises a more widespread adoption of functional two-photon imaging as a tool for the study of neuronal activity. The software and firmware we developed is available for download at http://neurospy.org/ under an open source license.


  1. SciCrunch.org Resources

    Welcome to the FDI Lab - SciCrunch.org Resources search. From here you can search through a compilation of resources used by FDI Lab - SciCrunch.org and see how data is organized within our community.

  2. Navigation

    You are currently on the Community Resources tab, looking through the categories and sources that FDI Lab - SciCrunch.org has compiled. You can navigate through those categories from here, or switch to a different tab to run your search against it. Each tab gives a different perspective on the data.

  3. Logging in and Registering

    If you have an account on FDI Lab - SciCrunch.org, you can log in from here to access additional features such as Collections, Saved Searches, and Resource management.

  4. Searching

    This is the search term being executed; you can type in anything you want to search for. Some tips to help with searching (see the example queries after this list):

    1. Use quotes around phrases you want to match exactly
    2. You can manually AND and OR terms to change how we search between words
    3. You can add "-" to terms to make sure no results return with that term in them (ex. Cerebellum -CA1)
    4. You can add "+" to terms to require they be in the data
    5. Using autocomplete specifies which branch of our semantics you wish to search and can help refine your search
  5. Save Your Search

    You can save any searches you perform for quick access later.

  6. Query Expansion

    We recognized your search term and included synonyms and inferred terms alongside it to help find the data you are looking for.

  7. Collections

    If you are logged into FDI Lab - SciCrunch.org you can add data records to your collections to create custom spreadsheets across multiple sources of data.

  8. Facets

    Here are the facets that you can filter your papers by.

  9. Options

    From here we'll present any options for the literature, such as exporting your current results.

  10. Further Questions

    If you have any further questions, please check out our FAQ page, where you can ask questions and view our tutorials. Click this button to view this tutorial again.
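
The operators from the search tips in item 4 can be combined freely in a single query. A few illustrative query strings, shown here as plain Python strings; they demonstrate the syntax only and are not tied to any particular SciCrunch endpoint:

```python
# Illustrative query strings using the operators from the search tips.
queries = [
    '"video recording"',            # quotes: match the exact phrase
    'neonatal AND resuscitation',   # explicit AND between terms
    'Drosophila OR "fruit fly"',    # explicit OR between alternatives
    'Cerebellum -CA1',              # "-" excludes results containing CA1
    '+RRID "video recording"',      # "+" requires the term to be present
]
for q in queries:
    print(q)
```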

Publications Per Year (chart of paper counts by publication year)