Article
Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment
Information Systems and Quantitative Analysis Faculty Proceedings & Presentations
  • Ryan M. Schuetzler, University of Nebraska at Omaha
  • Mark Grimes, University of Arizona
  • Justin Scott Giboney, University at Albany, State University of New York
  • Joseph Buckman, University of Arizona
Document Type
Conference Proceeding
Publication Date
12-1-2014
Abstract

This study reports the results of a laboratory experiment exploring interactions between humans and a conversational agent. Using the ChatScript language, we created a chatbot that asked participants to describe a series of images. The two objectives of this study were (1) to analyze the impact of dynamic responses on participants’ perceptions of the conversational agent, and (2) to explore behavioral changes in interactions with the chatbot (i.e., response latency and pauses) when participants engaged in deception. We discovered that a chatbot providing adaptive responses based on the participant’s input dramatically increases the perceived humanness and engagement of the conversational agent. Deceivers interacting with a dynamic chatbot exhibited consistent response latencies and pause lengths, whereas deceivers interacting with a static chatbot exhibited longer response latencies and pause lengths. These results offer new insights into social interactions with computer agents during truthful and deceptive interactions.
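To illustrate the static-versus-dynamic distinction described above, the following is a minimal ChatScript sketch (not taken from the paper; the topic name, rule labels, and wording are hypothetical). A static rule returns the same canned prompt regardless of input, while a dynamic rule uses ChatScript's `_` wildcard capture to echo part of the participant's own description back:

```
topic: ~describe_image keep repeat ()

# Static condition (hypothetical): identical reply no matter what the user says
u: STATIC_REPLY ( * ) Thank you. Please describe the next image.

# Dynamic condition (hypothetical): capture a phrase after "I see" and echo it,
# making the agent's response adapt to the participant's input
u: DYNAMIC_REPLY ( I see _* ) You see '_0 -- interesting. Tell me more about that.
```

In ChatScript, `_*` binds the matched text to match variable `_0`, and `'_0` emits it in the user's original wording, which is one common way such adaptive responses are built.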

Comments

© 2014 Association for Information Systems. This conference proceeding was originally published here: http://aisel.aisnet.org/icis2014/proceedings/HCI/9/.

Thirty Fifth International Conference on Information Systems, Auckland 2014.

Citation Information
Ryan M. Schuetzler, Mark Grimes, Justin Scott Giboney, and Joseph Buckman. "Facilitating Natural Conversational Agent Interactions: Lessons from a Deception Experiment" (2014)
Available at: http://works.bepress.com/rschuetzler/2/