Contribution to Book
A Graphical Digital Personal Assistant That Grounds and Learns Autonomously
HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction
  • Casey Kennington, Boise State University
  • Aprajita Shukla, Boise State University
Document Type
Conference Proceeding
Publication Date
1-1-2017
DOI
https://doi.org/10.1145/3125739.3132592
Abstract

We present a speech-driven digital personal assistant that is robust despite little or no training data and autonomously improves as it interacts with users. The system is able to establish and build common ground between itself and users by signaling understanding and by learning, through interaction, a mapping between the words that users actually speak and the system's actions. We evaluated our system with real users and found an overall positive response. We further show through objective measures that autonomous learning improves performance in a simple itinerary-filling task.

Copyright Statement

This is an author-produced, peer-reviewed version of this article. The final, definitive version of this document can be found online at HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction, published by ACM - Association for Computing Machinery. Copyright restrictions may apply. doi: 10.1145/3125739.3132592

Citation Information
Casey Kennington and Aprajita Shukla. "A Graphical Digital Personal Assistant That Grounds and Learns Autonomously" HAI '17 Proceedings of the 5th International Conference on Human Agent Interaction (2017)
Available at: http://works.bepress.com/casey-kennington/7/