Article
Human Face Perception in Degraded Images
Journal of Visual Communication and Image Representation (1995)
  • Sanjiv Bhatia, University of Missouri-St. Louis
  • Vasudevan Lakshminarayanan
  • Ashok Samal, University of Nebraska-Lincoln
  • Grant V. Welland, University of Missouri-St. Louis
Abstract
This paper reports human performance data from a series of psychophysical experiments investigating the limits of the stimulus parameters relevant to distinguishing a human face in a mug shot. The experiments use a two-alternative forced-choice paradigm to elicit responses. From these data we develop a benchmark for evaluating the performance of a machine vision system for human face detection at different levels of image degradation, expressed in terms of the number of pixel blocks and the number of gray levels in the images. The paper also presents a model of representation that can support recognition of faces in a database and can be used to define the minimum image quality required to retrieve facial records at a given confidence level. Our results show that low-frequency information in face images is especially useful because it is the most resilient to degradation in image quality. The model is particularly relevant to the retrieval of facial images from large image databases.
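The two degradation parameters named in the abstract, pixel-block count and gray-level count, can be sketched in code. This is an illustrative assumption about how such stimuli are typically produced (block averaging followed by uniform quantization), not the paper's actual stimulus-generation procedure; the function name `degrade` is hypothetical.

```python
import numpy as np

def degrade(image, blocks, levels):
    """Degrade a grayscale image along the two axes the abstract names:
    reduce spatial resolution to a `blocks` x `blocks` grid of pixel
    blocks, then quantize intensities to `levels` gray levels.
    `image` is a 2-D float array with values in [0, 1]."""
    h, w = image.shape
    bh, bw = h // blocks, w // blocks
    # Block pixelation: average the pixels inside each block.
    coarse = (image[:bh * blocks, :bw * blocks]
              .reshape(blocks, bh, blocks, bw)
              .mean(axis=(1, 3)))
    # Gray-level quantization: map each block mean to one of `levels`
    # equally spaced intensities in [0, 1].
    quantized = np.floor(coarse * levels).clip(0, levels - 1) / (levels - 1)
    return quantized

# Example: a 64x64 horizontal gradient reduced to 8x8 blocks, 4 gray levels.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
out = degrade(img, blocks=8, levels=4)
```

Sweeping `blocks` and `levels` over a grid of values would reproduce the kind of degradation ladder against which the paper's human performance benchmark is defined.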
Publication Date
September 1995
Citation Information
Sanjiv Bhatia, Vasudevan Lakshminarayanan, Ashok Samal, and Grant V. Welland. "Human Face Perception in Degraded Images." Journal of Visual Communication and Image Representation, Vol. 6, Iss. 3 (1995), pp. 280-295.
Available at: http://works.bepress.com/sanjiv-bhatia/21/