Unpublished Paper
Multiple Bernoulli Relevance Models for Image and Video Annotation
(2004)
  • S. L. Feng
  • R. Manmatha, University of Massachusetts - Amherst
  • V. Lavrenko
Abstract

Retrieving images in response to textual queries requires some knowledge of the semantics of the pictures. Here, we show how we can perform both automatic image annotation and retrieval (using one-word queries) from images and videos using a multiple Bernoulli relevance model. The model assumes that a training set of images or videos along with keyword annotations is provided. Multiple keywords are provided for each image, but the specific correspondence between a keyword and an image region is not. Each image is partitioned into a set of rectangular regions, and a real-valued feature vector is computed over these regions. The relevance model is a joint probability distribution of the word annotations and the image feature vectors and is computed using the training set. The word probabilities are estimated using a multiple Bernoulli model and the image feature probabilities using a non-parametric kernel density estimate. The model is then used to annotate images in a test set. We report experiments on both images from a standard Corel data set and a set of video key frames from NIST's Video TREC. Comparative experiments show that the model performs better than a model that estimates word probabilities using the popular multinomial distribution. The results also show that our model significantly outperforms previously reported results on the task of image and video annotation.
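
For concreteness, the following Python sketch illustrates the annotation step the abstract describes: each candidate word is scored by summing, over training images J, the product of a kernel density estimate for the test image's region features and a smoothed Bernoulli estimate of the word's probability given J. All identifiers (annotate, mu, bandwidth, etc.), the Gaussian kernel, and the smoothing scheme are illustrative assumptions, not the paper's exact estimators.

import numpy as np

def kernel_density(f, region_feats, bandwidth):
    # Non-parametric kernel density estimate of P(f | J): the mean of
    # Gaussian kernels centred on training image J's region feature vectors.
    sq_dists = np.sum((region_feats - f) ** 2, axis=1)
    norm = (2.0 * np.pi * bandwidth) ** (f.shape[0] / 2.0)
    return np.mean(np.exp(-sq_dists / (2.0 * bandwidth))) / norm

def bernoulli_word_prob(word, annotation, word_doc_count, n_train, mu):
    # Smoothed multiple-Bernoulli estimate of P(word | J): interpolates the
    # 0/1 presence of the word in J's annotation with its corpus frequency.
    delta = 1.0 if word in annotation else 0.0
    return (mu * delta + word_doc_count.get(word, 0)) / (mu + n_train)

def annotate(test_regions, train_set, vocab, word_doc_count, mu=1.0,
             bandwidth=0.1, top_k=5):
    # train_set: list of (region_feats, annotation_set) pairs, where
    # region_feats is an (n_regions, dim) array of feature vectors.
    n_train = len(train_set)
    scores = dict.fromkeys(vocab, 0.0)
    for region_feats, annotation in train_set:
        # P(f_1..f_m | J): product of per-region density estimates.
        # (A real implementation would work in log space to avoid underflow.)
        p_feats = np.prod([kernel_density(f, region_feats, bandwidth)
                           for f in test_regions])
        for w in vocab:
            # The uniform prior P(J) = 1/n_train is a constant and is dropped.
            scores[w] += p_feats * bernoulli_word_prob(
                w, annotation, word_doc_count, n_train, mu)
    return sorted(vocab, key=scores.get, reverse=True)[:top_k]

The top-k words returned by this scoring also support retrieval with one-word queries: rank test images by the score their regions assign to the query word.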

Publication Date
2004
Comments
This is the pre-publication version harvested from CIIR.
Citation Information
S. L. Feng, R. Manmatha and V. Lavrenko. "Multiple Bernoulli Relevance Models for Image and Video Annotation" (2004)
Available at: http://works.bepress.com/r_manmatha/28/