Article
Background modeling using adaptive pixelwise kernel variances in a hybrid feature space
IEEE Conference on Computer Vision and Pattern Recognition (2012)
  • Erik G Learned-Miller, University of Massachusetts - Amherst
  • Manjunath Narayana
  • Allen Hanson
Abstract

Recent work on background subtraction has shown developments on two major fronts. In one, there has been increasing sophistication of probabilistic models, from mixtures of Gaussians at each pixel [7], to kernel density estimates at each pixel [1], and more recently to joint domain-range density estimates that incorporate spatial information [6]. Another line of work has shown the benefits of increasingly complex feature representations, including the use of texture information, local binary patterns, and recently scale-invariant local ternary patterns [4]. In this work, we use joint domain-range based estimates for background and foreground scores and show that dynamically choosing kernel variances in our kernel estimates at each individual pixel can significantly improve results. We give a heuristic method for selectively applying the adaptive kernel calculations which is nearly as accurate as the full procedure but runs much faster. We combine these modeling improvements with recently developed complex features [4] and show significant improvements on a standard backgrounding benchmark.
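To make the idea of pixelwise adaptive kernel variances concrete, here is a minimal illustrative sketch, not the paper's method: for each pixel, a Gaussian kernel density estimate over stored background samples is evaluated for several candidate bandwidths and the best-scoring one is kept. The function name `background_score` and the candidate sigma values are hypothetical choices for illustration only.

```python
import numpy as np

def background_score(pixel_value, samples, candidate_sigmas=(5.0, 10.0, 20.0)):
    """Score how well `pixel_value` (e.g. an RGB vector) is explained by the
    stored background `samples` (an N x d array of past observations at this
    pixel). For each candidate kernel standard deviation we compute a Gaussian
    kernel density estimate and keep the highest score -- a simple stand-in for
    choosing the kernel variance per pixel rather than fixing it globally."""
    diffs = samples - pixel_value                        # (N, d) differences
    sq_dist = np.sum(diffs ** 2, axis=1)                 # squared distances
    d = samples.shape[1]
    best = 0.0
    for sigma in candidate_sigmas:
        norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)   # Gaussian normalizer
        kde = np.mean(np.exp(-sq_dist / (2.0 * sigma ** 2))) / norm
        best = max(best, kde)
    return best

# Example: score one pixel against 50 stored background observations.
rng = np.random.default_rng(0)
samples = rng.normal(loc=[120, 80, 60], scale=8.0, size=(50, 3))
print(background_score(np.array([122.0, 79.0, 61.0]), samples))
```

The same scoring could be repeated with foreground samples, with the pixel labeled according to whichever score is larger; the paper's actual formulation uses joint domain-range estimates and a heuristic to limit where the adaptive computation is applied.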

Publication Date
2012
Citation Information
Erik G. Learned-Miller, Manjunath Narayana, and Allen Hanson. "Background modeling using adaptive pixelwise kernel variances in a hybrid feature space." IEEE Conference on Computer Vision and Pattern Recognition (2012).
Available at: http://works.bepress.com/erik_learned_miller/46/