Most patient-reported outcome measures used in ophthalmology show floor effects in a very low vision population, which limits their use in vision restoration trials. The Impact of Vision Impairment-Very Low Vision scale (IVI-VLV) was developed to specifically target a very low vision population, but its test-retest reliability has not been investigated yet.
Low vision reduces text visibility and causes difficulties in reading. A valid low-vision simulation could be used to evaluate the accessibility of digital text for readers with low vision. We examined the validity of a digital simulation for replicating the text visibility and reading performance of low-vision individuals.
The purpose of this study was to design and evaluate an instrument for assessing vision-related quality of life that is appropriate for the specific visual impairment characteristic of all stages of age-related macular degeneration (AMD), with a focus on the low luminance deficit in early/intermediate stages.
MNREAD is an advanced near-vision acuity chart that has already been translated into Greek and validated. Since no validated Greek digital near-vision test exists, our primary objective was to develop and validate a digital near-vision reading test based on the fundamental properties of the printed Greek MNREAD (MNREAD-GR).
Color guides many important behaviors in birds. We have previously shown that the intensity threshold for color discrimination in the chicken depends on the color contrast between stimuli and on their brightness: the birds could discriminate larger color contrasts and brighter colors at lower light intensities. We suggested that chickens use spatial summation of cone signals to maintain color vision at low light levels. Here we tested this hypothesis by determining the intensity thresholds of color discrimination using similar stimuli (patterns of grey tiles of varying intensity interspersed with color tiles) adjusted for this specific aim. Chickens could discriminate stimuli with a larger single color tile, or with a larger proportion of small color tiles, at lower light intensities. This agrees with the hypothesis that spatial summation improves color discrimination at low light levels. There was no difference in the intensity threshold for discrimination between stimuli with a single 6 × 6 mm color tile, stimuli with 30% colored tiles, and stimuli in which color filled the whole pattern. This gives a first indication of the degree of spatial summation that can be performed. We compare this level of spatial summation to predictions from mathematical model calculations.
Visual impairment is an independent risk factor for falling. Whether this extends to patient-reported visual difficulties has not been assessed to date. We have evaluated whether patient-reported visual difficulties in low-contrast and low luminance situations are a risk factor for falls and concerns about falling.
In this paper, we introduce a method for automated seaweed growth monitoring that combines a low-cost RGB camera with a stereo vision camera. While current vision-based seaweed growth monitoring techniques focus on laboratory measurements or above-ground seaweed, we investigate the feasibility of underwater imaging of a vertical seaweed farm. We use deep learning-based image segmentation (DeepLabV3+) to determine the size of the seaweed in pixels from recorded RGB images, and convert this pixel size to square meters using the distance information from the stereo camera. We demonstrate the performance of our monitoring system using measurements in a seaweed farm in the River Scheldt estuary (in the Netherlands). Notwithstanding the poor visibility of the seaweed in the images, we are able to segment the seaweed with an intersection over union (IoU) of 0.9, and we reach a repeatability of 6% and a precision of the seaweed size of 18%.
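The pixel-to-area conversion step described above can be illustrated with a minimal sketch assuming a pinhole camera model and a roughly fronto-parallel surface; the function name, focal length, and depth values are hypothetical, not taken from the paper:

```python
def pixel_area_to_m2(pixel_count, depth_m, focal_px):
    """Convert a segmented pixel count to physical area in square meters.

    Under a pinhole camera model, one pixel at distance depth_m spans
    depth_m / focal_px meters on a fronto-parallel surface, so each
    pixel covers (depth_m / focal_px)**2 square meters.
    """
    meters_per_pixel = depth_m / focal_px
    return pixel_count * meters_per_pixel ** 2

# Hypothetical values: 50,000 seaweed pixels, 1.5 m away, 800 px focal length
area_m2 = pixel_area_to_m2(50_000, 1.5, 800.0)  # ≈ 0.176 m²
```

In practice the depth would come per-pixel from the stereo camera's disparity map rather than as a single value, but the scaling principle is the same.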
Blindness and low vision are thought to be common in southern Sudan. However, the magnitude and geographical distribution are largely unknown. We aimed to estimate the prevalence of blindness and low vision, identify the main causes of blindness and low vision, and estimate targets for blindness prevention programs in Mankien payam (district), southern Sudan.
Pedestrians with low vision are at risk of injury when hazards, such as steps and posts, have low visibility. This study aims to validate the software implementation of a computational model that estimates hazard visibility. The model takes as input a photorealistic 3D rendering of an architectural space, together with the acuity and contrast sensitivity of a low-vision observer, and outputs estimates of the visibility of hazards in the space. Our experiments explored whether the model could predict the likelihood of observers correctly identifying hazards. In Experiment 1, we tested fourteen normally sighted subjects wearing blur goggles that simulated moderate or severe acuity reduction. In Experiment 2, we tested ten low-vision subjects with moderate to severe acuity reduction. Subjects viewed computer-generated images of a walkway containing one of five possible targets ahead: a big step-up, a big step-down, a small step-up, a small step-down, or a flat continuation. Each subject saw these stimuli under variations of lighting and viewpoint in 250 trials and indicated which of the five targets was present. The model generated a score on each trial that estimated the visibility of the target. If the model is valid, the scores should predict how accurately the subjects identified the targets. We used logistic regression to examine the relationship between the scores and the participants' responses. For twelve of the fourteen normally sighted subjects with artificial acuity reduction, and for all ten low-vision subjects, there was a significant relationship between the scores and the probability of correct identification. These experiments provide evidence for the validity of a computational model that predicts the visibility of architectural hazards.
It lays the foundation for future validation of this hazard evaluation tool, which may be useful for architects to assess the visibility of hazards in their designs, thereby enhancing the accessibility of spaces for people with low vision.
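The validation approach above, relating per-trial visibility scores to the probability of correct identification via logistic regression, can be sketched as follows. This is a minimal pure-Python stand-in with hypothetical data, not the authors' analysis code:

```python
import math

def fit_logistic(scores, correct, lr=0.5, iters=2000):
    """Fit p(correct | score) = sigmoid(b0 + b1*score) by gradient ascent
    on the Bernoulli log-likelihood: a minimal stand-in for the logistic
    regression relating visibility scores to identification accuracy."""
    b0 = b1 = 0.0
    n = len(scores)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(scores, correct):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient w.r.t. intercept
            g1 += (y - p) * x    # gradient w.r.t. slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical per-trial data: higher visibility score, more likely correct
scores = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6]
correct = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(scores, correct)  # a positive b1 indicates that higher
                                        # scores predict correct responses
```

A real analysis would also test the significance of the fitted slope; a positive, significant slope is what links the model's visibility scores to observed identification accuracy.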
The incidence of high myopia is increasing worldwide, with myopic maculopathy, a complication of myopia, often progressing to blindness. Our two-stage genome-wide association study of myopic maculopathy identifies a susceptibility locus at rs11873439 in an intron of CCDC102B (P = 1.77 × 10⁻¹² and Pcorr = 1.61 × 10⁻¹⁰). In contrast, this SNP is not significantly associated with myopia itself. The association between rs11873439 and myopic maculopathy is further confirmed in 2317 highly myopic patients (P = 2.40 × 10⁻⁶ and Pcorr = 1.72 × 10⁻⁴). CCDC102B is strongly expressed in the retinal pigment epithelium and choroid, where atrophic changes initially occur in myopic maculopathy. The development of myopic maculopathy thus likely has a genetic background distinct from that of myopia itself; elucidating the roles of CCDC102B in myopic maculopathy may therefore provide insights into preventing blindness in patients with high myopia.
Chronic low back pain (LBP) is a symptom that may be caused by several diseases, and it is currently the leading cause of disability worldwide. The growing amount of digital imaging in orthopaedics has led to the development of methods based on artificial intelligence, and on computer vision in particular, that aim to improve the diagnosis and treatment of LBP. In this manuscript, we systematically reviewed the available literature on the use of computer vision in the diagnosis and treatment of LBP. A systematic search of the PubMed electronic database was performed, with the search strategy set as combinations of the following keywords: "Artificial Intelligence", "Feature Extraction", "Segmentation", "Computer Vision", "Machine Learning", "Deep Learning", "Neural Network", "Low Back Pain", "Lumbar". The search returned a total of 558 articles. After careful evaluation of the abstracts, 358 were excluded; a further 124 papers were excluded after full-text examination, leaving 76 eligible articles. The main applications of computer vision in LBP include feature extraction and segmentation, which are usually followed by further tasks. Most recent methods use deep learning models rather than digital image processing techniques. The best-performing methods for segmentation of vertebrae, intervertebral discs, the spinal canal, and lumbar muscles achieve Sørensen-Dice scores greater than 90%, whereas studies focusing on localization and identification of structures collectively report an accuracy greater than 80%. Future advances in artificial intelligence are expected to increase systems' autonomy and reliability, thus providing even more effective tools for the diagnosis and treatment of LBP.
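The Sørensen-Dice score cited by the segmentation studies above is a simple overlap metric. A minimal sketch over flat binary masks (the example masks and function name are hypothetical):

```python
def dice_score(pred, truth):
    """Sørensen-Dice coefficient 2|A∩B| / (|A| + |B|) between two flat
    binary masks: the overlap metric reported by the reviewed
    segmentation studies."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Hypothetical 4-pixel masks: one overlapping pixel out of two in each mask
print(dice_score([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

Note that Dice weights the intersection twice, so it is always at least as large as IoU on the same masks; the two metrics are interconvertible (Dice = 2·IoU / (1 + IoU)).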
A recent trend in low vision rehabilitation has been the use of portable head-mounted displays to enhance residual vision. Our study confirms the feasibility of telerehabilitation and informs the development of evidence-based recommendations to improve telerehabilitation interventions to reduce device abandonment.
Hazard detection is fundamental for a safe lunar landing. State-of-the-art autonomous lunar hazard detection relies on 2D image-based and 3D lidar systems. The lunar south pole is challenging for vision-based methods: the low sun inclination and the topographically rich terrain create large shadowed areas that hide terrain features. The proposed method uses a vision transformer (ViT) model, a deep learning architecture built from the transformer blocks used in natural language processing, to address this problem. Our goal is to train the ViT model to extract terrain feature information from low-light RGB images. The results show good performance, especially at high altitudes, outperforming U-Net, one of the most popular convolutional neural networks, in every scenario.
There is evidence that pen-and-paper training based on perceptual learning principles improves near visual acuity in young children with visual impairment. The aim of the present study was to measure the specificity and retention of its training effects over one year. Sixteen visually impaired children aged 4-8 years were divided into two age- and acuity-matched groups: an early (n = 9) and a late treatment group (n = 7). Training consisted of 12 sessions (2× per week for 6 weeks). The studied variables were uncrowded and crowded binocular near visual acuity (40 cm), distance visual acuity (3.0 m), and fine motor skills (Beery VMI, subtest Motor Control). In the early treatment group, we measured at 0 months (pre-training), 2 months (post-training), 8 months (6 months post-training), and 14 months (12 months post-training) after inclusion. In the late treatment group, three pre-training measurements were performed at 0, 2, and 8 months, and two measurements at 0 and 6 months post-training. In the short term, training improved uncrowded and crowded near visual acuity at 0.4 m by 0.13 ± 0.03 and 0.09 ± 0.03 logMAR, respectively (mean ± SEM). Training did not affect distance acuities or Beery scores. Learning effects on uncrowded and crowded near visual acuities remained intact 6-12 months after training. We conclude that the pen-and-paper training specifically improves near visual acuities but does not transfer to distance acuities or fine motor skills. Improvements in near visual acuity are retained over time, bolstering its clinical value.
Visual functioning questionnaires are commonly used as patient-reported outcome measures to estimate visual ability. Performance measures, on the other hand, provide a direct measure of visual ability. For individuals with ultra-low vision (ULV; visual acuity (VA) < 20/1600), the Ultra-Low Vision Visual Functioning Questionnaire (ULV-VFQ) and the Wilmer VRI, a virtual reality-based performance test, estimate self-reported and actual visual ability, respectively, for activities of daily living. But how well do self-reports from the ULV-VFQ predict actual task performance in the Wilmer VRI?
Understanding longitudinal changes in why individuals frequent low-vision clinics is crucial for ensuring that patient care keeps current with changing technology and changing lifestyles. Among other findings, our results suggest that reading remains a prevailing patient complaint, with shifting priorities toward technology-related topics.