Near-Duplicate Image Retrieval Based on Contextual Descriptor
IEEE Signal Processing Letters
  • Jinliang Yao, Hangzhou Dianzi University
  • Bing Yang, Hangzhou Dianzi University
  • Qiuming Zhu, University of Nebraska at Omaha
The state of the art in near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words so that mismatches can be discarded immediately, reducing the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between a referential visual word and the visual words in its context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
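To illustrate the general idea, the following is a minimal, hypothetical sketch of a contextual descriptor, not the paper's exact encoding: for each visual word in the context of a referential word, it quantizes the relative dominant orientation and the relative spatial direction (both measured against the reference word's orientation, so the code is rotation-invariant), and compares two descriptors by set overlap to reject mismatches early. All function names and the choice of 8 bins are illustrative assumptions.

```python
import math

def contextual_descriptor(ref, neighbors, n_bins=8):
    """Toy contextual descriptor (illustrative assumption, not the
    paper's exact scheme).

    ref: (x, y, theta) of the referential visual word.
    neighbors: list of (x, y, theta) for visual words in its context.

    For each neighbor, encodes (a) the quantized difference of dominant
    orientations and (b) the quantized spatial direction from the
    reference point, both relative to the reference orientation.
    Returns a set of (orientation_bin, direction_bin) pairs.
    """
    rx, ry, rtheta = ref
    desc = set()
    two_pi = 2 * math.pi
    for (x, y, theta) in neighbors:
        d_orient = (theta - rtheta) % two_pi          # relative orientation
        direction = (math.atan2(y - ry, x - rx) - rtheta) % two_pi  # relative position
        o_bin = int(d_orient / two_pi * n_bins) % n_bins
        s_bin = int(direction / two_pi * n_bins) % n_bins
        desc.add((o_bin, s_bin))
    return desc

def contextual_similarity(d1, d2):
    """Jaccard overlap between two contextual descriptors; a candidate
    match whose contexts disagree scores low and can be discarded
    immediately without full geometric verification."""
    if not d1 and not d2:
        return 1.0
    return len(d1 & d2) / len(d1 | d2)
```

Because both encoded quantities are measured relative to the reference word's dominant orientation, rotating the whole constellation of keypoints leaves the descriptor unchanged, which is the kind of invariance a contextual check needs to survive the transformations found in near-duplicate images.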

© Copyright 2015 IEEE. The final published version of this article can be found at:

Citation Information
Jinliang Yao, Bing Yang and Qiuming Zhu. "Near-Duplicate Image Retrieval Based on Contextual Descriptor" IEEE Signal Processing Letters Vol. 22 Iss. 9 (2015) p. 1404 - 1408