Spoken Language Interaction in a Goal-Directed Task
Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing
  • Alexander I Rudnicky, Carnegie Mellon University
  • Michelle Sakamoto, Carnegie Mellon University
  • Joseph H Polifroni, Carnegie Mellon University
Date of Original Version
1990
Conference Proceeding
Rights Management
Authorized licensed use limited to: Carnegie Mellon Libraries. Restrictions apply.
Abstract or Description

To study the spoken language interface in the context of a complex
problem-solving task, a group of users was asked to perform a
spreadsheet task, alternating voice and keyboard input. Each
participant performed a total of 40 tasks, the first 30 in a group
(over several days) and the remaining 10 a month later. The voice
spreadsheet program used in this study was extensively instrumented
to provide detailed information about the components of the
interaction. These data, together with analysis of the participants'
utterances and recognizer output, provide a fairly detailed picture of
spoken language interaction.

Although task completion by voice took longer than by keyboard,
analysis shows that users would be able to perform the spreadsheet
task faster by voice if two key criteria could be met: recognition
occurs in real time, and the error rate is sufficiently low. This initial
experience with a spoken language system also allows us to identify
several metrics, beyond those traditionally associated with speech
recognition, that can be used to characterize system performance.

Citation Information
Alexander I Rudnicky, Michelle Sakamoto and Joseph H Polifroni. "Spoken Language Interaction in a Goal-Directed Task," Proceedings of the Intl. Conf. on Acoustics, Speech, and Signal Processing, Vol. 1 (1990), pp. 45-48.
Available at: