Intelligent Assistant for People with Low Vision Abilities

This paper proposes a wearable system for visually impaired people that provides extensive feedback about their surrounding environment. Our system consists of a stereo camera and smartglasses communicating with a smartphone, which serves as an intermediary computational device. The system is also connected to a server where all the expensive computations are executed. The complete setup is capable of detecting obstacles in the immediate surroundings, recognizing faces and facial expressions, reading text, and providing a generic description of, and answering questions about, a given input image. In addition, we propose a novel depth question answering system that estimates object size and objects' relative positions in an unconstrained environment in near real-time and in a fully automatic way, requiring only a stereo image pair and a voice request as input. We have conducted a series of experiments to evaluate the feasibility and practicality of the proposed system, which show promising results for assisting visually impaired people.
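The depth question answering component relies on recovering metric depth from the stereo pair before reasoning about object size and position. The abstract does not specify the exact algorithm, so the following is only a minimal sketch of that kind of pipeline, assuming a rectified stereo pair, a standard block-matching disparity method (OpenCV's StereoSGBM), and placeholder calibration values (`FOCAL_PX`, `BASELINE_M`) and a hypothetical bounding box from an upstream object detector.

```python
import cv2
import numpy as np

# Hypothetical calibration values -- the paper does not publish its camera
# parameters, so these are placeholders for illustration only.
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.12     # stereo baseline in meters

def depth_map(left_gray, right_gray):
    """Compute a dense depth map (meters) from a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=96,
                                    blockSize=7)
    # OpenCV returns disparity as a fixed-point int16 scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mask invalid matches
    return FOCAL_PX * BASELINE_M / disparity    # Z = f * B / d

def object_size_and_distance(depth, bbox):
    """Estimate distance and physical width of a detected object.

    bbox is (x, y, w, h) in pixels, e.g. from an object detector
    triggered by the user's voice request.
    """
    x, y, w, h = bbox
    z = np.nanmedian(depth[y:y + h, x:x + w])   # robust distance estimate
    width_m = w * z / FOCAL_PX                  # back-project pixel width
    return z, width_m
```

In such a setup, the voice request would select which detected object to measure, and the returned distance and size would be verbalized back to the user; the actual system may use a different depth estimation or reasoning method.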
