Article
Sign Classification for the Visually Impaired
University of Massachusetts - Amherst Technical Report (2005)
  • Marwan A Mattar, University of Massachusetts - Amherst
  • Allen R Hanson, University of Massachusetts - Amherst
  • Erik G Learned-Miller, University of Massachusetts - Amherst
Abstract

Our world is populated with visual information that a sighted person makes use of daily. Unfortunately, the visually impaired are deprived of such information, which limits their mobility in unconstrained environments. To help alleviate this, we are developing a wearable system that is capable of detecting and recognizing signs in natural scenes. The system is composed of two main components: sign detection and sign recognition. The sign detector uses a conditional maximum entropy model to find regions in an image that correspond to a sign. The sign recognizer matches the hypothesized sign regions with sign images in a database, and the system then decides whether the most likely sign is correct or whether the hypothesized region does not belong to any sign in the database. Our data sets encompass a wide range of variability, including changes in lighting, orientation, and viewing angle. In this paper, we present an overview of the system and the performance of its two main components, paying particular attention to the recognition phase. Tested on 3,975 sign images from two different data sets, the recognition phase achieves 99.5% accuracy with 35 distinct signs and 92.8% accuracy with 65 distinct signs.
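The following is a minimal sketch of the two-stage pipeline outlined in the abstract: a detector that scores image regions and a recognizer that matches a hypothesized region against a sign database with an accept/reject decision. The function names, sliding-window parameters, feature representation, and distance-based matching rule below are hypothetical stand-ins; the report's actual conditional maximum entropy detector and recognition method are not reproduced here.

```python
# Hypothetical sketch of the detection + recognition pipeline described in the abstract.
import numpy as np

def detect_sign_regions(image, region_scorer, threshold=0.5):
    """Return candidate regions whose detector score exceeds a threshold.

    `region_scorer` stands in for the conditional maximum entropy model:
    it maps an image patch to the probability that the patch is a sign.
    Window size and step are illustrative, not taken from the report.
    """
    candidates = []
    h, w = image.shape[:2]
    step, size = 32, 64
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            patch = image[y:y + size, x:x + size]
            if region_scorer(patch) > threshold:
                candidates.append((x, y, size, size))
    return candidates

def recognize_sign(patch_feature, database, reject_distance=0.4):
    """Match a hypothesized sign region against the sign database.

    `database` maps sign labels to feature vectors. If even the best match
    is too far away, the region is rejected as not belonging to any sign
    in the database (the accept/reject decision mentioned in the abstract).
    """
    best_label, best_dist = None, np.inf
    for label, feature in database.items():
        dist = np.linalg.norm(patch_feature - feature)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= reject_distance else None
```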

Publication Date
2005
Citation Information
Marwan A Mattar, Allen R Hanson, and Erik G Learned-Miller. "Sign Classification for the Visually Impaired." University of Massachusetts - Amherst Technical Report, Vol. 05, Iss. 14 (2005).
Available at: http://works.bepress.com/erik_learned_miller/20/