
Visual noun navigation framework for the blind

Edgardo Molina (Based in the Department of Computer Science, City College of New York – CUNY, New York, New York, USA)
Alpha Diallo (Based in the Department of Computer Science, City College of New York – CUNY, New York, New York, USA)
Zhigang Zhu (Based in the Department of Computer Science, City College of New York – CUNY, New York, New York, USA)

Journal of Assistive Technologies

ISSN: 1754-9450

Article publication date: 14 June 2013


Abstract

Purpose

The purpose of this paper is to propose a local orientation and navigation framework based on visual features that provide location recognition, context augmentation, and viewer localization information to a blind or low‐vision user.

Design/methodology/approach

The authors consider three types of "visual noun" features – signage, visual‐text, and visual‐icons – proposed as a low‐cost means of augmenting environments. These are used in combination with an RGB‐D sensor and a simplified SLAM algorithm to develop a navigation‐assistance framework suitable for blind and low‐vision users.
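To illustrate the general idea (not the authors' implementation), the sketch below shows how a detected "visual noun" – a sign with a known position on a map – could be combined with depth from an RGB‐D sensor to estimate the viewer's location. The map, the intrinsic parameters, and all function names are hypothetical placeholders for illustration only.

```python
# Minimal sketch, assuming a calibrated RGB-D camera and a map of
# permanent signs: given the pixel column of a detected sign and the
# depth read from the aligned depth image, estimate the viewer's 2D
# position. SIGN_MAP, FX, CX and the function names are hypothetical.
import numpy as np

# Known 2D map positions (metres) of permanent signs in the environment.
SIGN_MAP = {
    "room_101": np.array([4.0, 12.5]),
    "exit_east": np.array([20.0, 3.0]),
}

FX, CX = 525.0, 319.5  # assumed focal length / principal point (pixels)

def sign_range_bearing(u, depth_m):
    """Range and bearing of a detected sign from its pixel column u
    and the depth (metres, along the optical axis) at that pixel."""
    bearing = np.arctan2((u - CX) / FX, 1.0)   # angle off the optical axis
    rng = depth_m / np.cos(bearing)            # slant range to the sign
    return rng, bearing

def estimate_viewer_position(sign_id, u, depth_m, heading):
    """Back-project from the sign's known map position using the
    measured range/bearing and an assumed viewer heading (radians)."""
    rng, bearing = sign_range_bearing(u, depth_m)
    world_angle = heading + bearing
    offset = rng * np.array([np.cos(world_angle), np.sin(world_angle)])
    return SIGN_MAP[sign_id] - offset

# Example: the "room_101" sign detected at column 420, 3.2 m away,
# while the viewer faces roughly along the map's +x axis.
print(estimate_viewer_position("room_101", 420, 3.2, heading=0.0))
```

In the actual framework, such per‐sign estimates would be fused over time (the simplified SLAM step) and combined with traversable‐path detection from the depth data; that fusion is beyond the scope of this sketch.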

Findings

It was found that signage detection can not only help a blind user find a location, but can also provide accurate orientation and location information to guide the user in navigating a complex environment. The combination of visual nouns for orientation and RGB‐D sensing for traversable path finding can be a cost‐effective solution for navigation assistance for blind and low‐vision users.

Research limitations/implications

This is the first step toward a new approach to self‐localization and local navigation for blind users using both signs and 3D data. The approach is meant to be cost‐effective, but it only works in man‐made scenes where many signs exist or can be placed and remain relatively permanent in appearance and location.

Social implications

According to 2012 World Health Organization figures, 285 million people are visually impaired, of whom 39 million are blind. This project will have a direct impact on this community.

Originality/value

Signage detection has been widely studied for assisting visually impaired people in finding locations, but this paper provides the first attempt to use visual nouns as visual features to accurately locate and orient a blind user. The combination of visual nouns with 3D data from an RGB‐D sensor is also new.


Citation

Molina, E., Diallo, A. and Zhu, Z. (2013), "Visual noun navigation framework for the blind", Journal of Assistive Technologies, Vol. 7 No. 2, pp. 118-130. https://doi.org/10.1108/17549451311328790

Publisher: Emerald Group Publishing Limited

Copyright © 2013, Emerald Group Publishing Limited
