Automatic Content Generation for Video Self Modeling
IEEE International Conference on Multimedia and Expo
  • Ju Shen, University of Dayton
  • Anusha Raghunathan, Intel Corporation
  • Sen-ching S. Cheung, University of Kentucky
  • Ravi R. Patel, University of Kentucky
Document Type
Conference Paper
Publication Date
7-1-2011
Abstract
Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of himself or herself. Its effectiveness in rehabilitation and education has been repeatedly demonstrated, but technical challenges remain in creating video content that depicts previously unseen behaviors. In this paper, we propose a novel system that re-renders new talking-head sequences suitable for VSM treatment of patients with voice disorders. After the raw footage is captured, a new speech track is either synthesized using text-to-speech or selected, based on voice similarity, from a database of clean speech recordings. Voice conversion is then applied to match the new speech to the original voice. Time markers extracted from the original and new speech tracks are used to re-sample the video track for lip synchronization. We use an adaptive re-sampling strategy to minimize motion jitter and apply bilinear and optical-flow-based interpolation to preserve image quality. Both objective measurements and subjective evaluations demonstrate the effectiveness of the proposed techniques.
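
To make the lip-synchronization step concrete, the sketch below shows one way the re-sampling described in the abstract could work: time markers from the original and new speech tracks define a piecewise-linear time warp, and in-between frames are synthesized with Farneback optical flow and bilinear sampling in OpenCV. This is an illustrative sketch under assumed interfaces, not the authors' implementation; the function names and parameter values are hypothetical, and the paper's adaptive frame-selection strategy for jitter minimization is omitted.

    # Illustrative sketch only (not the paper's code): re-time a frame list so
    # that lip motion follows a new speech track, given matching time markers
    # (in seconds) extracted from the original and the new audio.
    import cv2
    import numpy as np

    def retime_frames(frames, fps, orig_marks, new_marks, out_fps=None):
        """Re-sample BGR frames so events at orig_marks (original speech time)
        land at new_marks (new speech time). Assumes markers are increasing
        and span the new speech track."""
        out_fps = out_fps or fps
        n_out = int(round(new_marks[-1] * out_fps))
        out = []
        for k in range(n_out):
            t_new = k / out_fps
            # piecewise-linear alignment: new-speech time -> original time
            t_orig = np.interp(t_new, new_marks, orig_marks)
            src = t_orig * fps                        # fractional source index
            i = max(0, min(int(np.floor(src)), len(frames) - 2))
            a = float(np.clip(src - i, 0.0, 1.0))     # blend weight in [0, 1]
            out.append(interpolate(frames[i], frames[i + 1], a))
        return out

    def interpolate(f0, f1, a):
        """Synthesize a frame a fraction `a` of the way from f0 to f1 by
        warping both frames along dense Farneback optical flow and blending."""
        if a < 1e-3:
            return f0.copy()
        if a > 1.0 - 1e-3:
            return f1.copy()
        g0 = cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = g0.shape
        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        # backward warping with bilinear sampling: f0 is sampled a fraction
        # `a` along the flow field, f1 a fraction (1 - a) further along it
        w0 = cv2.remap(f0, xs - a * flow[..., 0], ys - a * flow[..., 1],
                       cv2.INTER_LINEAR)
        w1 = cv2.remap(f1, xs + (1 - a) * flow[..., 0],
                       ys + (1 - a) * flow[..., 1], cv2.INTER_LINEAR)
        return cv2.addWeighted(w0, 1.0 - a, w1, a, 0)

The bilinear sampling in cv2.remap corresponds to the bilinear interpolation mentioned in the abstract; choosing which source frames to duplicate or drop adaptively, as the paper does to minimize motion jitter, would replace the fixed fractional-index mapping used here.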
Inclusive pages
1-6
ISBN/ISSN
1945-7871
Comments

Funded by National Science Foundation Award No. 1018241. Archived in compliance with federal policy. Permission documentation is on file.

Publisher
IEEE
Place of Publication
Barcelona, Spain
Peer Reviewed
Yes
Citation Information
Ju Shen, Anusha Raghunathan, Sen-ching S. Cheung, and Ravi R. Patel. "Automatic Content Generation for Video Self Modeling." IEEE International Conference on Multimedia and Expo (2011).
Available at: http://works.bepress.com/ju_shen/9/