Perceived similarity and visual descriptions in content-based image retrieval
Faculty of Informatics - Papers (Archive)
  • Yuan Zhong, University of Wollongong
  • Lei Ye, University of Wollongong
  • Wanqing Li, University of Wollongong
  • Philip Ogunbona, University of Wollongong
Publication Details

Zhong, Y., Ye, L., Li, W. & Ogunbona, P. (2007). Perceived similarity and visual descriptions in content-based image retrieval. The IEEE International Symposium on Multimedia (pp. 173-180). IEEE Computer Society Press.

The use of low-level feature descriptors is pervasive in content-based image retrieval tasks, yet the answer to the question of how well these features describe users' intentions remains inconclusive. In this paper we devise experiments to gauge the degree of alignment between the description of target images by humans and that implicitly provided by low-level image feature descriptors. Data was collected on how humans perceive similarity in images. Using images judged by humans to be similar as ground truth, the performance of several MPEG-7 visual feature descriptors was evaluated. It is found that different descriptors play different roles in different queries, and that their appropriate combination can improve the performance of retrieval tasks. This forms a basis for the development of adaptive weight assignment to features depending on the query and retrieval task.
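The query-dependent weighting idea from the abstract can be sketched as a weighted sum of per-descriptor distances, where the weights shift with the query. The descriptor names ("color", "texture"), the L2 distance, and the weight values below are illustrative assumptions, not the MPEG-7 descriptors or weighting scheme from the paper:

```python
import numpy as np

def combined_distance(query_feats, image_feats, weights):
    """Weighted sum of per-descriptor distances (plain L2 for simplicity)."""
    total = 0.0
    for name, w in weights.items():
        total += w * np.linalg.norm(query_feats[name] - image_feats[name])
    return total

# Hypothetical two-descriptor features for a query and two database images.
query = {"color": np.array([0.2, 0.8]), "texture": np.array([0.5, 0.5])}
img_a = {"color": np.array([0.2, 0.8]), "texture": np.array([0.9, 0.1])}  # matches on color
img_b = {"color": np.array([0.9, 0.1]), "texture": np.array([0.5, 0.5])}  # matches on texture

# A color-dominant weighting ranks img_a first; a texture-dominant one ranks img_b first,
# illustrating why a single fixed weighting cannot suit every query.
w_color = {"color": 0.9, "texture": 0.1}
w_texture = {"color": 0.1, "texture": 0.9}
```

Under `w_color`, img_a is closer to the query; under `w_texture`, img_b is, showing how the retrieval ranking depends on the weight assignment.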