How Virtual Reality Can Empower Visually Impaired People
- Nilotpal Biswas
- Sep 29
- 3 min read

Virtual Reality (VR) is overwhelmingly seen as a visual experience, a technology built around what we can see. This perception creates a natural barrier for visually impaired people (VIPs), potentially excluding them from the next wave of digital interaction in shared, persistent virtual environments, often called the metaverse. However, a paper by Lange et al. [1] explores a question: what if this visual-centric technology could be redesigned to make everyday tasks, like grocery shopping, more accessible for VIPs than they are in the real world? The research delves into the challenges VIPs face and proposes how VR, augmented with voice assistants, can offer a powerful solution.
For many, a trip to the supermarket is a routine chore, but for a person with a visual impairment, it can be a significant obstacle. A supermarket is a dynamic and unpredictable space, filled with other shoppers, changing displays, and ambient noise, which makes it difficult to form a mental map or use auditory cues for orientation. Navigating aisles without bumping into obstacles or people is a constant concern. Beyond navigation, the simple act of finding and identifying a product is a major hurdle. Different varieties of a packaged product, like canned soup, can feel identical to the touch. While smartphone apps that read barcodes or text exist, they require precise camera alignment and are not always reliable. This process often means a VIP must touch many items, a concern for hygiene, and they can easily miss out on new products or sales promotions that are communicated visually.
This is where an immersive shopping experience in a virtual environment presents a unique opportunity. In a virtual supermarket, the physical risks are eliminated. A user can take their time to build a mental image of the store's layout without pressure or danger. Unlike the physical world, the entire store layout exists as data that the system can use to assist with navigation. Every item on every shelf is known to the system, meaning a user can interact with a product to have its full description, nutritional information, and price read aloud without ever needing to find a tiny barcode. The system could also alert the user to new products, point out nearby items of interest, and even make visual advertisements accessible by reading them aloud. This transforms shopping from a stressful challenge into a manageable and even pleasant experience.
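To make the "every item is known to the system" idea concrete, here is a minimal sketch (not from the paper; all names and values are hypothetical) of how a virtual store could attach metadata to each shelf item and turn a selection into speech-ready text, with no barcode scanning involved:

```python
from dataclasses import dataclass


@dataclass
class ShelfItem:
    """Metadata a virtual store could attach to every product on a shelf."""
    name: str
    price_eur: float
    description: str
    nutrition: dict[str, str]  # e.g. {"sugar": "3.2 g per 100 ml"}


def spoken_summary(item: ShelfItem) -> str:
    """Compose the text a voice assistant would read aloud when the user
    selects this item in the virtual aisle."""
    nutrition = ", ".join(f"{k}: {v}" for k, v in item.nutrition.items())
    return (f"{item.name}, {item.price_eur:.2f} euros. "
            f"{item.description} Nutrition: {nutrition}.")


# Example: what the user hears after selecting a can of soup
soup = ShelfItem(
    name="Tomato soup, 400 gram can",
    price_eur=1.49,
    description="Ready-to-heat soup with basil.",
    nutrition={"energy": "35 kcal per 100 ml", "sugar": "3.2 g per 100 ml"},
)
print(spoken_summary(soup))
```

Because the description lives in the store's data rather than on the packaging, the same mechanism can surface promotions or new products the moment the user approaches a shelf.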
Of course, a significant challenge remains: current VR technology is not built with accessibility in mind. Setting up a headset, navigating menus, and using controllers that rely on visual aim are often impossible for VIPs. The paper argues that the key to unlocking VR's potential is the integration of conversational user interfaces (CUIs), specifically voice assistants (VAs). A VA could guide a user through the entire setup process with voice commands. Within the virtual store, the user could simply ask, "Where is the pasta aisle?" and be guided there, or ask, "What is on the shelf in front of me?" to get a description. Movement itself could be controlled by voice, allowing users to teleport to different locations with a simple command. This combination of a data-rich virtual world and an intuitive voice interface is what makes an accessible VR experience possible.
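The voice-assistant idea boils down to routing spoken requests to store actions such as guidance, shelf description, and teleportation. The sketch below is purely illustrative (the intents, phrasings, and aisle layout are my own assumptions, not the paper's system), but it shows the shape of that mapping:

```python
import re

# Hypothetical aisle layout: aisle name -> (x, z) position in the store
AISLES = {"pasta": (3, 0), "cereal": (5, 2)}


def handle_utterance(text: str) -> str:
    """Route a spoken command to a store action and return the reply
    the assistant would read aloud."""
    text = text.lower().strip()
    if m := re.match(r"where is the (\w+) aisle", text):
        aisle = m.group(1)
        if aisle in AISLES:
            return f"Guiding you to the {aisle} aisle."
        return f"Sorry, I could not find a {aisle} aisle."
    if "in front of me" in text:
        # Placeholder: a real system would query the shelf the user faces
        return "Shelf ahead: canned soups, rows one to four."
    if m := re.match(r"take me to the (\w+) aisle", text):
        return f"Teleporting to the {m.group(1)} aisle."
    return "Sorry, I did not understand that."


print(handle_utterance("Where is the pasta aisle?"))
print(handle_utterance("Take me to the cereal aisle"))
```

The point is not the string matching (a production system would use a proper speech and intent pipeline) but that every command a sighted user performs with a controller has a spoken equivalent backed by the store's data.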
The findings from this paper provide crucial takeaways for designing a VR shopping application specifically for users with partial visual impairment. While the research highlights solutions for all VIPs, a designer for this specific group can create a powerful multimodal experience that augments, rather than completely replaces, sight. The core principle of using the virtual environment's underlying data to enhance accessibility is key. This means integrating the proposed voice assistant to handle complex searches and navigation, thereby reducing the cognitive load and fatigue of visual searching. Simultaneously, the application should offer robust visual customization, such as high-contrast modes, adjustable text sizes, and a digital magnifier for examining product details, features impossible to implement in a physical store. An ideal interaction could combine modalities; for instance, a user could ask the voice assistant to "highlight all low-sugar cereals," and the system could respond by making those specific products glow brightly on the virtual shelf, making them easy to spot. By blending adaptable visual aids with the powerful auditory support of a voice assistant, designers can create a truly inclusive and empowering shopping experience that caters directly to the needs of individuals with partial vision.
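As a rough illustration of that multimodal idea, the following sketch (again hypothetical, with invented product data and thresholds) shows how a voice query like "highlight all low-sugar cereals" could be translated into a set of products for the renderer to make glow, alongside per-user visual settings such as contrast and magnification:

```python
from dataclasses import dataclass


@dataclass
class Product:
    id: str
    name: str
    category: str
    sugar_g_per_100g: float
    highlighted: bool = False


@dataclass
class VisualSettings:
    """Per-user visual adaptations that a physical store cannot offer."""
    high_contrast: bool = True
    text_scale: float = 1.5      # relative label size
    magnifier_zoom: float = 3.0  # digital magnifier strength


def highlight_low_sugar(products: list[Product], category: str,
                        max_sugar: float = 5.0) -> list[Product]:
    """Mark products matching the voice query so the renderer can make
    them glow brightly on the virtual shelf; return the matches."""
    matches = []
    for p in products:
        p.highlighted = (p.category == category and
                         p.sugar_g_per_100g <= max_sugar)
        if p.highlighted:
            matches.append(p)
    return matches


shelf = [
    Product("c1", "Choco Crunch", "cereal", 28.0),
    Product("c2", "Plain Oat Flakes", "cereal", 1.2),
    Product("s1", "Tomato Soup", "soup", 3.2),
]
for p in highlight_low_sugar(shelf, "cereal"):
    print(f"Glowing on shelf: {p.name}")
```

The auditory channel handles the search, while the visual channel exploits whatever residual vision the user has, which is exactly the augment-rather-than-replace principle described above.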
References
[1] Lange, D., Heuten, W., Salous, M. and Abdenebaoui, L. (2023) 'CeeUI – Accessible Virtual Reality for the Blind and Visually Impaired: Challenges and Opportunities'. Paper presented at: CUI@CHI: Inclusive Design of CUIs Across Modalities and Mobilities, Hamburg, Germany, 23 April 2023.


