Johns Hopkins Researchers Develop AI-Enhanced Navigation System for Visually Impaired
Researchers from two Johns Hopkins divisions, the Applied Physics Laboratory (APL) in Laurel, Maryland, and the Whiting School of Engineering in Baltimore, have collaborated to develop a navigation system that enables blind or visually impaired users to navigate their surroundings with greater confidence and accuracy.
The system leverages artificial intelligence (AI) to map environments, track users’ locations and provide real-time guidance. It also processes information from depth imaging sensors and RGB (the red, green and blue channels imaging sensors use to capture visual information) to produce detailed semantic mappings of the environment, allowing the navigation system not only to recognize obstacles but also to identify specific objects and their properties. This capability lets users query the system for guidance to specific objects or features within their surroundings, making navigation more intuitive and effective.
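To make the semantic-mapping idea concrete, here is a minimal sketch of how depth pixels might be back-projected into 3D and tagged with class labels from a per-pixel segmentation step, so the resulting map can be queried for a named object. The label set, camera intrinsics and segmentation output are illustrative assumptions, not details of the team’s pipeline.

```python
# Illustrative sketch only: a hypothetical label set and pinhole camera
# model, not the APL system's actual pipeline.
import numpy as np

LABELS = {0: "floor", 1: "wall", 2: "door", 3: "chair"}  # assumed classes

def semantic_point_cloud(depth, labels, fx, fy, cx, cy):
    """Back-project labeled depth pixels into 3D with the pinhole model.

    depth:  (H, W) depth in meters
    labels: (H, W) per-pixel class IDs from a segmentation model (assumed)
    fx, fy, cx, cy: camera intrinsics
    Returns an (N, 4) array of [x, y, z, class_id] points.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # skip pixels with no depth return
    z = depth[valid]
    x = (us[valid] - cx) * z / fx          # pinhole back-projection
    y = (vs[valid] - cy) * z / fy
    return np.stack([x, y, z, labels[valid].astype(float)], axis=1)

def find_object(points, class_id):
    """Centroid of all points carrying the requested class, or None."""
    hits = points[points[:, 3] == class_id]
    return hits[:, :3].mean(axis=0) if len(hits) else None

# Toy frame: a flat 2 m depth image where one region is labeled "chair".
depth = np.full((120, 160), 2.0)
labels = np.zeros((120, 160), dtype=int)
labels[40:80, 100:140] = 3                 # pretend the segmenter found a chair
cloud = semantic_point_cloud(depth, labels, fx=100, fy=100, cx=80, cy=60)
print(f"{LABELS[3]} centroid (m):", find_object(cloud, 3))
```

Querying by class ID is what would let a user ask for, say, the nearest chair rather than just the nearest obstacle.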
What makes this system particularly innovative is its ability to significantly enhance the interpretability of the environment for users, explained lead researcher Nicolas Norena Acosta, a robotics research software engineer at APL.
“Traditional navigation systems for the visually impaired often rely on basic sensor-based mapping, which can only distinguish between occupied and unoccupied spaces,” he said. “The new semantic mapping approach, however, provides a much richer understanding of the environment, enabling high-level human-computer interactions.”
Current prosthetic vision devices can stimulate only a small area of the visual field, providing minimal visual feedback that’s not robust enough for users to navigate their environment safely and independently. Norena Acosta and his team (Chigozie Ewulum, Michael Pekala and Seth Billings from APL, and Marin Kobilarov from the Whiting School’s Department of Mechanical Engineering) enhanced this basic visual feedback with additional haptic, visual and auditory sensory inputs to create a more comprehensive navigation system.
The haptic feedback involves an APL-developed headband that vibrates in different places to indicate the direction of obstacles or the path the user should follow. For example, if the path is to the right, the right side of the headband will vibrate. The auditory feedback uses voice prompts and spatial sound to give verbal directions and alerts about the surroundings.
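As a rough illustration of that mapping, the sketch below converts a bearing and distance into a motor index and vibration intensity on a hypothetical ring of evenly spaced motors; the motor count and intensity curve are assumptions for illustration, not the device’s actual design.

```python
# Illustrative sketch only: a hypothetical 8-motor headband, not the
# APL device's actual layout or control scheme.
NUM_MOTORS = 8  # assumed: motors evenly spaced around the band

def motor_for_bearing(bearing_deg: float) -> int:
    """Map a bearing (0 = straight ahead, positive = to the right)
    to the index of the nearest motor on the ring."""
    sector = 360.0 / NUM_MOTORS
    return round((bearing_deg % 360.0) / sector) % NUM_MOTORS

def vibration_cue(bearing_deg: float, distance_m: float, max_range_m: float = 3.0):
    """Closer targets produce stronger vibration (intensity in 0.0-1.0)."""
    intensity = max(0.0, 1.0 - distance_m / max_range_m)
    return motor_for_bearing(bearing_deg), intensity

# A waypoint 45 degrees to the right and 1.5 m away drives a right-side
# motor at half intensity.
motor, intensity = vibration_cue(45.0, 1.5)
print(f"motor {motor} at {intensity:.0%} intensity")
```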
The combined sensory inputs to the system are also translated into visual feedback that enhances the user’s ability to perceive obstacles and navigate effectively. The system provides a clear, simplified view of the environment, highlighting only the most critical information needed to avoid obstacles and move safely.
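One way to picture that simplification: prune the obstacle map down to the nearest few obstacles inside the user’s field of view, as in the sketch below. The field-of-view and distance cutoffs are illustrative values, not the system’s actual parameters.

```python
# Illustrative pruning sketch: obstacles as (bearing_deg, distance_m)
# pairs; all cutoffs are assumed values.
def critical_obstacles(obstacles, fov_deg=120.0, max_dist_m=4.0, keep=3):
    """Keep only the nearest `keep` obstacles inside the field of view."""
    in_view = [
        (bearing, dist)
        for bearing, dist in obstacles
        if abs(bearing) <= fov_deg / 2 and dist <= max_dist_m
    ]
    return sorted(in_view, key=lambda ob: ob[1])[:keep]

# A cluttered scene reduces to the closest in-view obstacles.
scene = [(-100.0, 1.0), (10.0, 0.8), (-30.0, 2.5), (5.0, 6.0), (45.0, 1.9)]
print(critical_obstacles(scene))  # [(10.0, 0.8), (45.0, 1.9), (-30.0, 2.5)]
```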
“The challenge was creating a system that could synchronize and process multiple types of sensory data in real time,” Norena Acosta explained. “Accurately integrating the visual, haptic and auditory feedback required sophisticated algorithms and robust computing power, as well as advanced AI techniques.”
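As a rough sketch of the synchronization problem he describes, the snippet below buffers time-stamped readings from several streams and emits a fused frame only when every stream has a sample within a small tolerance of a common reference time. The sensor names, buffer depth and tolerance are illustrative assumptions, not the team’s actual architecture.

```python
# Illustrative sketch only: timestamp-based alignment of sensor streams,
# with assumed sensor names and a 50 ms tolerance.
from collections import deque

class StreamSynchronizer:
    """Buffer per-sensor readings and emit a fused frame once every
    stream has a sample within `tolerance` seconds of a reference time."""

    def __init__(self, sensors, tolerance=0.05, depth=100):
        self.buffers = {name: deque(maxlen=depth) for name in sensors}
        self.tolerance = tolerance

    def push(self, sensor, timestamp, reading):
        self.buffers[sensor].append((timestamp, reading))
        return self._try_fuse(timestamp)

    def _try_fuse(self, ref_time):
        frame = {}
        for name, buf in self.buffers.items():
            if not buf:
                return None              # a stream hasn't reported yet
            # Use the buffered sample closest in time to the reference.
            ts, reading = min(buf, key=lambda sample: abs(sample[0] - ref_time))
            if abs(ts - ref_time) > self.tolerance:
                return None              # that stream is too stale; keep waiting
            frame[name] = reading
        return frame

sync = StreamSynchronizer(["rgb", "depth", "imu"])
sync.push("rgb", 0.000, "frame0")
sync.push("imu", 0.010, (0.0, 0.0, 9.8))
print(sync.push("depth", 0.020, "depth0"))  # all three align -> fused frame
```

In a real pipeline the fused frame would then feed the mapping and feedback stages; the point of the sketch is just the alignment step.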
The research was presented in April. The system is currently being tested in a clinical trial, with results expected this summer.
This work is funded by the National Eye Institute to capitalize on recent advances in computer vision, including developments in object recognition, depth sensing and simultaneous localization and mapping (SLAM) technologies, to augment the capabilities of commercial retinal prostheses.
Billings, the principal investigator of this effort, said that a robust, intuitive navigation aid like this system has the potential to improve the independence and mobility of its users significantly.
“The potential impact of this work on patient populations is substantial,” said Billings. “This could lead to greater social inclusion and participation in daily activities, ultimately enhancing the overall quality of life for blind and visually impaired individuals.”
Research reported in this publication was supported by the National Eye Institute of the National Institutes of Health under Award Number R01EY029741. Collaborators on this work also included Gislin Dagnelie, Johns Hopkins University Wilmer Eye Institute; Roberta Klatzky, Carnegie Mellon University; and Avi Caspi. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.