Article
Modeling Gaze Behavior for Virtual Demonstrators
Proceedings of the 11th International Conference on Intelligent Virtual Agents (2011)
  • Yazhou Huang
  • Justin L Matthews
  • Teenie Matlock
  • Marcelo Kallmann
Abstract
Autonomous virtual humans with coherent, natural motion are key to effectiveness in many educational, training, and therapeutic applications. Among the several aspects to consider, gaze behavior is an important non-verbal communication channel that plays a vital role in the quality of the resulting animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.
Keywords
  • gaze model
  • motion synthesis
  • virtual humans
  • virtual reality
Publication Date
2011
DOI
10.1007/978-3-642-23974-8_17
Citation Information
Huang, Y., Matthews, J. L., Matlock, T., & Kallmann, M. (2011). Modeling gaze behavior for virtual demonstrators. In H. Högni Vilhjálmsson et al. (Eds.), Proceedings of the 11th International Conference on Intelligent Virtual Agents, Reykjavík, Iceland, LNAI 6895 (pp. 155-161). Berlin/Heidelberg: Springer-Verlag. doi:10.1007/978-3-642-23974-8_17