Near-Duplicate Image Retrieval Based on Contextual Descriptor
IEEE Signal Processing Letters
Abstract
The state of the art in near-duplicate image retrieval is largely based on the Bag-of-Visual-Words model. However, visual words are prone to mismatches because of quantization errors in the local features they represent. To improve the precision of visual-word matching, contextual descriptors are designed to strengthen the discriminative power of visual words and to measure their contextual similarity. This paper presents a new contextual descriptor that measures the contextual similarity of visual words to immediately discard mismatches and reduce the number of candidate images. The new descriptor encodes the relationships of dominant orientation and spatial position between a referential visual word and the visual words in its context. Experimental results on the benchmark Copydays dataset demonstrate its efficiency and effectiveness for near-duplicate image retrieval.
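The abstract does not reproduce the paper's formulation, so purely as an illustration, the Python sketch below shows one way a contextual check of this kind could work: each local feature is assumed to carry an (x, y) position and a dominant orientation, the descriptor records quantized orientation differences and relative spatial directions to the K nearest context features, and a visual-word match is kept only if enough context codes agree. The function names, the neighborhood size K, the bin count B, and the agreement threshold are all hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a contextual descriptor for visual-word matching.
# Assumption (not from the paper): `features` is an (N, 3) array of
# (x, y, theta), where theta is the dominant orientation of each feature.

def contextual_descriptor(features, idx, K=5, B=8):
    """Encode orientation/position relations between feature `idx`
    and its K nearest neighboring features."""
    ref = features[idx]
    d = features[:, :2] - ref[:2]            # offsets to all features
    dist = np.hypot(d[:, 0], d[:, 1])
    dist[idx] = np.inf                       # exclude the feature itself
    nn = np.argsort(dist)[:K]                # K nearest context features
    # orientation difference relative to the referential feature
    dtheta = (features[nn, 2] - ref[2]) % (2 * np.pi)
    # spatial direction of each context feature, rotated by the
    # reference orientation so the code is rotation-invariant
    phi = (np.arctan2(d[nn, 1], d[nn, 0]) - ref[2]) % (2 * np.pi)
    # quantize both angles into B bins, pack into one code per neighbor
    t_bin = (dtheta // (2 * np.pi / B)).astype(int)
    p_bin = (phi // (2 * np.pi / B)).astype(int)
    return set((t_bin * B + p_bin).tolist())

def context_match(desc_a, desc_b, min_agree=3):
    """Accept a visual-word match only if enough context codes agree."""
    return len(desc_a & desc_b) >= min_agree
```

The set intersection makes the check order-insensitive and cheap to evaluate before any geometric verification; the paper's actual similarity measure and thresholds may differ.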
Citation Information
J. Yao, B. Yang, and Q. Zhu, "Near-Duplicate Image Retrieval Based on Contextual Descriptor," IEEE Signal Processing Letters, vol. 22, no. 9, pp. 1404-1408, 2015.
Available at: http://works.bepress.com/qiuming-zhu/8/