Virtual Mirror By Fusing Multiple RGBD Cameras
Asia Pacific Signal and Information Processing Association Annual Summit & Conference
  • Ju Shen, University of Dayton
  • Sen-ching S. Cheung, University of Kentucky
  • Jian Zhao, University of Kentucky
Document Type
Conference Paper
Publication Date
12-1-2012
Abstract

The mirror is possibly the most common optical device in everyday life. Rendering a virtual mirror using a joint camera-display system has a wide range of applications, from cosmetics to medicine. Existing work focuses primarily on simple modification of the mirror images of body parts and provides little or no viewpoint-dependent rendering. In this paper, we propose a framework for rendering mirror images from a virtual mirror based on 3D point clouds and color texture captured from a network of structured-light RGB-D cameras. We validate our models by comparing the results with a real mirror. Commodity structured-light cameras often produce missing and erroneous depth data, which directly affects rendering quality. We address this problem via a novel probabilistic model that accurately separates foreground objects from the background scene before correcting the erroneous depth data. We experimentally demonstrate that our depth correction algorithm outperforms other state-of-the-art techniques.
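
The core geometry behind viewpoint-dependent mirror rendering is reflecting the captured 3D points (or, equivalently, the viewer's eye position) across the mirror plane. The sketch below is only an illustration of that geometric step, not the authors' implementation; the function name `reflect_across_plane` and the use of NumPy are assumptions for the example.

```python
# Minimal sketch (assumed helper, not the paper's code): mirror a point cloud
# about a plane given by a point on the plane and its normal vector.
import numpy as np

def reflect_across_plane(points, plane_point, plane_normal):
    """Reflect an Nx3 array of points across the plane through
    `plane_point` with normal `plane_normal`."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of each point from the mirror plane.
    d = (points - plane_point) @ n
    # Mirror image: move each point twice its signed distance along -n.
    return points - 2.0 * d[:, None] * n

# Example: a mirror lying in the z = 0 plane flips the z coordinate.
cloud = np.array([[0.1, 0.2, 0.5],
                  [0.0, 0.0, 1.0]])
print(reflect_across_plane(cloud,
                           np.array([0.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 1.0])))
# -> [[ 0.1  0.2 -0.5]
#     [ 0.   0.  -1. ]]
```

In practice, rendering from the viewer's tracked eye position after this reflection yields the viewpoint-dependent image a real mirror would show; the paper's full pipeline additionally fuses multiple RGB-D views and corrects depth errors before rendering.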

Inclusive pages
1-9
ISBN/ISSN
978-1-4673-4863-8
Comments

Permission documentation is on file.

Publisher
IEEE
Place of Publication
Hollywood, CA
Peer Reviewed
Yes
Citation Information
Ju Shen, Sen-ching S. Cheung, and Jian Zhao, "Virtual Mirror By Fusing Multiple RGBD Cameras," Asia Pacific Signal and Information Processing Association Annual Summit & Conference (2012).
Available at: http://works.bepress.com/ju_shen/16/