Unpublished Paper
An Efficient Framework for Searching Text in Noisy Document Images
(2011)
  • Ismet Zeki Yalniz
  • R. Manmatha, University of Massachusetts - Amherst
Abstract

An efficient word spotting framework is proposed for searching text in scanned books. The method allows one to search for words when optical character recognition (OCR) fails due to noise, or for languages for which there is no OCR. Given a query word image, the aim is to retrieve matching words in the book sorted by similarity. In the offline stage, SIFT descriptors are extracted over the corner points of each word image. These features are quantized into visual terms (visterms) using a hierarchical K-Means algorithm and indexed with an inverted file. In the query resolution stage, candidate matches are efficiently identified using the inverted index. These word images are then forwarded to the next stage, where the configuration of visterms on the image plane is tested. Configuration matching is performed efficiently by projecting the visterms onto the horizontal axis and searching for the Longest Common Subsequence (LCS) between the sequences of visterms. The proposed framework is tested on one English and two Telugu books. It is shown that the proposed method resolves a typical user query in under 10 milliseconds while providing very high retrieval accuracy (Mean Average Precision 0.93). The search accuracy for the English book is comparable to searching text in the high-accuracy output of a commercial OCR engine.
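The configuration-matching step described above can be illustrated with a short sketch. Assuming each word image has already been reduced to a left-to-right sequence of visterm IDs (its projection onto the horizontal axis), a standard dynamic-programming LCS yields a match score. The visterm IDs and the length-based normalization below are illustrative assumptions, not details taken from the paper.

```python
def lcs_length(a, b):
    """Length of the Longest Common Subsequence of two visterm sequences."""
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def configuration_score(query, candidate):
    """Similarity in [0, 1]; this normalization is a hypothetical choice."""
    if not query or not candidate:
        return 0.0
    return 2.0 * lcs_length(query, candidate) / (len(query) + len(candidate))

# Hypothetical visterm ID sequences after projection onto the horizontal axis.
query = [17, 3, 42, 8, 3]
good = [17, 42, 8, 3]   # same word; one visterm dropped by noise
bad = [5, 9, 1]         # unrelated word

print(configuration_score(query, good))  # high score despite a missing visterm
print(configuration_score(query, bad))   # no shared visterms, score 0
```

Because LCS preserves order but tolerates gaps, this comparison is robust to spurious or missing visterms caused by scanning noise, which is why it suits matching in noisy document images.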

Keywords
  • document image search; image retrieval; word spotting
Publication Date
2011
Comments
This is the pre-published version harvested from CIIR.
Citation Information
Ismet Zeki Yalniz and R. Manmatha. "An Efficient Framework for Searching Text in Noisy Document Images" (2011)
Available at: http://works.bepress.com/r_manmatha/50/