
How "Seeing VR" Opens Up Virtual Reality for People with Low Vision

  • Writer: Nilotpal Biswas
  • May 4
  • 3 min read

Fig: The first image presents the original scene, while each subsequent image shows how the scene appears after a single tool has been applied.

Virtual-reality head-mounted displays deliver richly visual experiences, yet most commercial titles assume users have normal sight. For the world's ≈ 217 million people with low vision (vision that cannot be corrected by ordinary glasses), standard VR scenes can be dim, low-contrast, or simply too distant to decipher. Until recently, research on VR accessibility focused almost exclusively on users who are totally blind; little had been done for those who still rely on residual sight. SeeingVR, introduced at CHI 2019 [1], tackles that gap head-on.


What SeeingVR Provides

SeeingVR is not a standalone application. It is a collection of 14 augmentation tools that layer visual or auditory assistance on top of any Unity-based VR title (shown in the figure above and summarized below).


  • Lens-style visual aids – Magnification Lens; Bifocal Lens; Brightness Lens; Contrast Lens

  • Edge & context aids – Edge Enhancement; Peripheral Remapping

  • Text & depth aids – Text Augmentation; Text-to-Speech; Depth Measurement

  • Semantic aids (need developer labels) – Object Description; Highlight; Guideline; Recoloring

  • On-demand recognisers – Assistive Apps (ports of Seeing AI and VizWiz)

Users can toggle any combination and adjust parameters such as zoom level, edge thickness, or speech rate on the fly. Nine of the tools work post hoc via a plugin that needs no access to source code, while the five that rely on semantic labels plug directly into a Unity project through a lightweight developer toolkit.
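To make this design concrete, here is an illustrative Python sketch of the kind of configuration model the paper describes: a registry in which each tool can be enabled, disabled, and tuned independently, so users can layer any combination onto a scene. All class and parameter names here are hypothetical, not the actual SeeingVR (C#/Unity) API.

```python
# Hypothetical sketch of SeeingVR's combinable, user-configurable tools.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    enabled: bool = False
    params: dict = field(default_factory=dict)

class ToolRegistry:
    """Holds all augmentation tools; any subset may be active at once."""
    def __init__(self):
        self._tools = {}

    def register(self, name, **default_params):
        self._tools[name] = Tool(name, params=default_params)

    def toggle(self, name):
        tool = self._tools[name]
        tool.enabled = not tool.enabled
        return tool.enabled

    def set_param(self, name, key, value):
        # e.g. adjust zoom level or speech rate "on the fly"
        self._tools[name].params[key] = value

    def active(self):
        return [t.name for t in self._tools.values() if t.enabled]

registry = ToolRegistry()
registry.register("magnification_lens", zoom=2.0)
registry.register("edge_enhancement", thickness=1.5)
registry.register("text_to_speech", rate=1.0)

registry.toggle("magnification_lens")
registry.toggle("text_to_speech")
registry.set_param("magnification_lens", "zoom", 3.0)
print(registry.active())  # the tools currently layered on the scene
```

The key design point this mirrors is modularity: because each tool is an independent overlay with its own parameters, users can mix aids freely rather than choosing a single fixed "accessibility mode".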


Evaluation of SeeingVR

SeeingVR underwent three evaluations:

  • User study – 11 volunteers with diverse low-vision conditions completed menu-selection, object-search, and target-shooting tasks. With SeeingVR they were significantly faster (e.g., menu-selection time cut from ≈ 4.7 s to ≈ 2.2 s) and made virtually no errors; some tasks were impossible without the aids.

  • Field test on popular VR applications – The plugin augmented games such as Beat Saber and SUPERHOT VR with no developer input. Only shader-heavy titles needed minor tweaks.

  • Developer feedback – Six Unity professionals integrated the toolkit within an hour and said they would use it in production; they preferred full integration over post-hoc patching so they could fine-tune the user experience.

Participants said three tools really made a difference. The Depth Measurement feature acted like a bright laser pointer, so they could easily follow distant menus or fast-moving targets. Text Augmentation automatically enlarged, bolded, and recolored small labels, sparing them the usual guess-and-squint routine. Finally, the Highlight and Guideline overlays gave instant visual anchors for important objects, though testers suggested they should fade or dim during intense action scenes so the game still feels immersive.


Take-Away for Designers & Researchers

Accessibility need not be an afterthought. SeeingVR shows that common low-vision aids (magnification, contrast, contour overlays) translate well to 3-D spaces when they are modular and configurable. Just as alt-text became standard on the web, adding simple metadata in game engines like Unity [2] can unlock semantic cues such as object descriptions and adaptive highlights.
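The alt-text analogy can be sketched in a few lines: developers attach a short description to scene objects, and assistive tools (object description, highlight) read it at runtime, falling back gracefully when a label is missing. This is a hypothetical Python illustration; the real toolkit works through Unity components, and all names below are invented.

```python
# Hypothetical sketch of engine-side semantic labels, analogous to web alt-text.
class SceneObject:
    def __init__(self, object_id, description=None):
        self.object_id = object_id
        self.description = description  # the "alt-text" for this object

def describe(obj):
    """What a text-to-speech or object-description tool might announce."""
    return obj.description or f"unlabeled object {obj.object_id}"

door = SceneObject("door_01", description="wooden exit door, slightly ajar")
crate = SceneObject("crate_07")  # developer forgot to label this one

print(describe(door))   # "wooden exit door, slightly ajar"
print(describe(crate))  # "unlabeled object crate_07" (generic fallback)
```

As with alt-text, the cost to developers is a one-line annotation per object, while the payoff is that every downstream assistive tool can reuse the same label.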

Future VR headsets with pass-through cameras could apply SeeingVR’s overlays to the real world, giving wearers continuous assistance regardless of whether they are in a virtual scene, an AR application, or their living room. Until then, the SeeingVR toolkit offers a ready-to-use bridge between today’s visually demanding VR titles and the millions of gamers, students, and professionals who rely on low-vision aids every day.

The open-source plug-in and developer package are available on GitHub for anyone who wants to try them [3].


References

  1. Zhao, Y., Cutrell, E., Holz, C., Morris, M.R., Ofek, E. and Wilson, A.D., 2019, May. SeeingVR: A set of tools to make virtual reality more accessible to people with low vision. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1-14).

  2. Unity Technologies (2024) Unity User Manual (Version 2024.1). [Computer program documentation]. Available at: https://docs.unity3d.com (Accessed: 4 May 2025).

  3. SeeingVR toolkit source code. Available at: https://github.com/microsoft/SeeingVRtoolkit (Accessed: 4 May 2025).

 
 
 

Embedded Interaction Lab

Department of Design, IIT Guwahati, Assam, India.

Phone: +91-361 258 2485

© 2024. All rights reserved.
