Article
Emotion Detection From Infant Facial Expressions And Cries
International Conference on Acoustics, Speech, and Signal Processing (2006)
  • Pritam Pal
  • Ananth N Iyer
  • Robert E Yantorno, Temple University
Abstract
A new system for determining the reason for an infant's cry from its facial image and cry sound is presented in this paper. The system analyzes the facial image and the sound of the crying infant, both captured from the same cry event, to infer why the infant is crying. The image processing module determines the state of certain facial features, particular combinations of which indicate the reason for crying. The sound processing module extracts the fundamental frequency and the first two formants and uses k-means clustering to determine the reason for the cry. The decisions from the image and sound processing modules are then combined by a decision-level fusion system. The overall accuracies of the image and sound processing modules are 64% and 74.2%, respectively, and that of the fused decision is 75.2%.
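The abstract names two technical ingredients: k-means clustering of acoustic features (fundamental frequency plus the first two formants) and decision-level fusion of the image and sound modules. The sketch below illustrates both steps under stated assumptions; the feature extraction, number of cry categories, class names, and fusion weights are illustrative guesses, not the authors' actual implementation.

```python
# Hypothetical sketch of the cry-sound clustering and decision-level fusion
# described in the abstract. The cry categories, feature values, and fusion
# weights below are assumptions for illustration only.
import numpy as np

REASONS = ["hunger", "pain", "discomfort"]  # assumed cry categories

def kmeans(features, k, n_iter=50, seed=0):
    """Plain k-means (Lloyd's algorithm) on [F0, F1, F2] feature vectors."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(n_iter):
        # Assign each cry segment to its nearest cluster centre.
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centers[None], axis=2), axis=1)
        # Recompute centres; keep the old centre if a cluster emptied.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

def fuse(image_scores, sound_scores, w_image=0.45, w_sound=0.55):
    """Decision-level fusion: weighted sum of the two modules' class scores."""
    combined = w_image * np.asarray(image_scores) + w_sound * np.asarray(sound_scores)
    return REASONS[int(np.argmax(combined))]

if __name__ == "__main__":
    # Synthetic [F0, F1, F2] vectors (Hz) standing in for real cry features.
    rng = np.random.default_rng(1)
    feats = np.vstack([rng.normal(c, 30, (20, 3)) for c in
                       ([450, 1100, 2600], [550, 1300, 3000], [350, 900, 2300])])
    labels, _ = kmeans(feats, k=len(REASONS))
    print("cluster sizes:", np.bincount(labels))
    # Example per-class scores from the two modules for one cry event.
    print("fused decision:", fuse([0.2, 0.5, 0.3], [0.1, 0.7, 0.2]))
```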
Publication Date
May 2006
Citation Information
Pritam Pal, Ananth N. Iyer and Robert E. Yantorno. "Emotion Detection From Infant Facial Expressions And Cries," International Conference on Acoustics, Speech, and Signal Processing, Vol. 2 (2006).
Available at: http://works.bepress.com/iyer/10/