Near-Duplicate Image Retrieval Based on Contextual Descriptor
IEEE Signal Processing Letters
  • Jinliang Yao, Hangzhou Dianzi University
  • Bing Yang, Hangzhou Dianzi University
  • Qiuming Zhu, University of Nebraska at Omaha
Document Type
Article
Publication Date
September 1, 2015
Abstract

The state of the art in near-duplicate image retrieval is mostly based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words in order to immediately discard mismatches and reduce the number of candidate images. The new contextual descriptor encodes the relationships of dominant orientation and spatial position between a referential visual word and the visual words in its context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
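As a rough illustration of how such a contextual check can work (a sketch of the general idea, not the paper's exact formulation), the Python snippet below builds a toy descriptor for a referential keypoint from the dominant orientations and spatial positions of its nearest neighbours, then compares two descriptors by histogram intersection. Every name and parameter here (contextual_descriptor, contextual_similarity, the neighbourhood size k, the bin count n_bins, the 0.5 threshold) is an illustrative assumption rather than the authors' implementation.

    import numpy as np

    def contextual_descriptor(keypoints, ref_idx, k=5, n_bins=8):
        """Toy contextual descriptor for one referential keypoint.

        keypoints: array of shape (N, 3), each row (x, y, dominant_orientation).
        For the k spatially nearest neighbours of the reference keypoint,
        the descriptor jointly bins (a) the neighbour's dominant orientation
        and (b) the direction of the displacement vector to the neighbour,
        both measured relative to the reference orientation, so the encoding
        is invariant to translation and rotation of the whole patch.
        """
        xy = keypoints[:, :2]
        theta = keypoints[:, 2]
        ref_xy, ref_theta = xy[ref_idx], theta[ref_idx]

        # k nearest spatial neighbours, excluding the reference itself
        dist = np.linalg.norm(xy - ref_xy, axis=1)
        dist[ref_idx] = np.inf
        neighbours = np.argsort(dist)[:k]

        desc = np.zeros((n_bins, n_bins))
        two_pi = 2.0 * np.pi
        for j in neighbours:
            d_theta = (theta[j] - ref_theta) % two_pi        # relative orientation
            dx, dy = xy[j] - ref_xy
            phi = (np.arctan2(dy, dx) - ref_theta) % two_pi  # relative position angle
            row = int(d_theta / two_pi * n_bins) % n_bins
            col = int(phi / two_pi * n_bins) % n_bins
            desc[row, col] += 1.0
        return desc

    def contextual_similarity(desc_a, desc_b):
        """Histogram intersection, normalised to [0, 1]."""
        return np.minimum(desc_a, desc_b).sum() / max(desc_a.sum(), 1.0)

    # Toy usage: a translated copy of the keypoint layout keeps its context,
    # so the tentative visual-word match survives the filter.
    kps_a = np.array([[10.0, 10.0, 0.1], [14.0, 10.0, 0.2],
                      [10.0, 15.0, 1.5], [20.0, 20.0, 3.0]])
    kps_b = kps_a + np.array([40.0, 40.0, 0.0])
    sim = contextual_similarity(contextual_descriptor(kps_a, 0, k=3),
                                contextual_descriptor(kps_b, 0, k=3))
    if sim >= 0.5:  # threshold value is an illustrative assumption
        print("match kept, contextual similarity =", sim)

In this kind of scheme, a tentative match between two visual words is kept only when their contextual descriptors also agree, which is one way the mismatch filtering described in the abstract could prune candidate images before scoring.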

Comments

© 2015 IEEE. The final published version of this article can be found at: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6975087.

Citation Information
Jinliang Yao, Bing Yang, and Qiuming Zhu, "Near-Duplicate Image Retrieval Based on Contextual Descriptor," IEEE Signal Processing Letters, vol. 22, no. 9 (2015), pp. 1404-1408.
Available at: http://works.bepress.com/qiuming-zhu/8/