Presentation
Bimodal Fusion in Audio-Visual Speech Recognition
Proceedings from the International Conference on Image Processing
  • Xiaozheng Zhang, Georgia Institute of Technology - Main Campus
  • Russell M. Mersereau, Georgia Institute of Technology - Main Campus
  • Mark A. Clements, Georgia Institute of Technology - Main Campus
Publication Date
1-1-2002
Abstract

Extending automatic speech recognition (ASR) to the visual modality has been shown to greatly increase recognition accuracy and improve system robustness over purely acoustic systems, especially in acoustically hostile environments. An important aspect of designing such systems is how to incorporate the visual component into the acoustic speech recognizer to achieve optimal performance. In this paper, we investigate methods of integrating the audio and visual modalities within HMM-based classification models. We examine existing integration schemes and propose the use of a coupled hidden Markov model (CHMM) to exploit audio-visual interaction. Our experimental results demonstrate that the CHMM consistently outperforms other integration models over a large range of acoustic noise levels and suggest that it better captures temporal correlations between the two streams of information.
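For readers unfamiliar with coupled HMMs, the sketch below illustrates the general idea behind the fusion model the abstract names: two Markov chains (audio and video) whose transitions are conditioned on the previous states of both chains, evaluated by a forward pass over the joint state space. All dimensions, variable names, and random parameters are illustrative assumptions, not the paper's actual models or data.

```python
import numpy as np

# Minimal forward recursion for a two-chain coupled HMM (CHMM).
# Everything here is a hypothetical sketch, not the authors' code.

Na, Nv, T = 3, 3, 10          # assumed audio states, video states, frames
rng = np.random.default_rng(0)

def row_normalize(x):
    """Normalize the last axis so each slice is a distribution."""
    return x / x.sum(axis=-1, keepdims=True)

# Coupled transitions: each chain's next state depends on the previous
# states of BOTH chains.
#   A_a[a', v', a] = P(a_t = a | a_{t-1} = a', v_{t-1} = v')
#   A_v[a', v', v] = P(v_t = v | a_{t-1} = a', v_{t-1} = v')
A_a = row_normalize(rng.random((Na, Nv, Na)))
A_v = row_normalize(rng.random((Na, Nv, Nv)))

# Placeholder per-frame observation likelihoods for each stream, standing
# in for real acoustic and visual feature models (e.g. Gaussian mixtures).
B_a = rng.random((T, Na))
B_v = rng.random((T, Nv))

pi = rng.random((Na, Nv))     # joint initial state distribution
pi /= pi.sum()

# Forward pass over the joint (audio, video) state space, with per-frame
# scaling to avoid numerical underflow.
alpha = pi * np.outer(B_a[0], B_v[0])
scale = alpha.sum()
log_lik = np.log(scale)
alpha /= scale
for t in range(1, T):
    # Sum over previous joint states (a', v') for every new pair (a, v).
    alpha = np.einsum('pqa,pqv,pq->av', A_a, A_v, alpha)
    alpha *= np.outer(B_a[t], B_v[t])
    scale = alpha.sum()
    log_lik += np.log(scale)
    alpha /= scale

print("log-likelihood of the (random) observation sequence:", log_lik)
```

The design point this sketch tries to show: unlike a single product HMM with one joint transition matrix, the CHMM keeps separate per-stream transitions that are tied through both chains' previous states, which is what allows it to model the temporal correlation, and mild asynchrony, between the audio and visual streams.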

Number of Pages
4
Publisher Statement
Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Citation Information
Xiaozheng Zhang, Russell M. Mersereau, and Mark A. Clements. "Bimodal Fusion in Audio-Visual Speech Recognition," Proceedings from the International Conference on Image Processing (2002).
Available at: http://works.bepress.com/jzhang/2/