Visually Impaired Aid using Convolutional Neural Networks, Transfer Learning, and Particle Competition and Cooperation

9 May 2020  ·  Fabricio Breve, Carlos Norberto Fischer

Navigation and mobility are among the major problems faced by visually impaired people in their daily lives. Advances in computer vision have led to the proposal of several navigation systems, but most of them require expensive and/or heavy hardware. In this paper we propose the use of convolutional neural networks (CNN), transfer learning, and semi-supervised learning (SSL) to build a framework aimed at aiding the visually impaired. It has low computational cost and, therefore, may be implemented on current smartphones without relying on any additional equipment. The smartphone camera can be used to automatically take pictures of the path ahead, which are then immediately classified, providing almost instantaneous feedback to the user. We also propose a dataset to train the classifiers, covering indoor and outdoor situations with different types of light, floor, and obstacles. Many different CNN architectures are evaluated as feature extractors and classifiers, by fine-tuning weights pre-trained on a much larger dataset. The graph-based SSL method known as particle competition and cooperation is also used for classification, allowing feedback from the user to be incorporated without retraining the underlying network. Classification accuracies of 92% and 80% are achieved on the proposed dataset in the best supervised and SSL scenarios, respectively.
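To make the transfer-learning setup described above concrete, the sketch below shows how a CNN pre-trained on a much larger dataset can be reused as a feature extractor and then fine-tuned for path classification. The MobileNetV2 backbone, the 224×224 input size, the binary "clear path" vs. "obstacle" labels, and the training hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch (assumptions: MobileNetV2 backbone,
# binary clear/obstacle labels); not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 2  # assumption: "clear path" vs. "obstacle"

# Backbone pre-trained on ImageNet, used as a feature extractor;
# its weights are frozen for the first training stage.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be built from the proposed dataset of path
# images, e.g. with tf.keras.utils.image_dataset_from_directory.
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning stage: unfreeze the backbone and continue training with a
# much smaller learning rate, as is usual in transfer learning.
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In the semi-supervised scenario, the pooled CNN features would instead feed a graph-based classifier such as the particle competition and cooperation method, so that labels supplied by the user can be incorporated without retraining the network.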


Datasets


Introduced in the Paper:

VIA
