Article
Leveraging Linguistic Context in Dyadic Interactions to Improve Automatic Speech Recognition for Children
Computer Speech and Language (2020)
  • Manoj Kumar, University of Southern California
  • So Hyun Kim, Weill Cornell Medicine
  • Catherine Lord, University of California, Los Angeles
  • Thomas D. Lyon, University of Southern California
  • Shrikanth Narayanan, University of Southern California
Abstract
Automatic speech recognition for child speech has long been considered a more challenging problem than for adult speech. Various contributing factors have been identified, such as greater acoustic variability (including mispronunciations due to ongoing biological growth), developing vocabulary and linguistic skills, and the scarcity of training corpora. A further challenge arises when dealing with the spontaneous speech of children engaged in conversational interaction, especially when the child may have limited or impaired communication ability. This includes health applications, one of the motivating domains of this paper, that involve goal-oriented dyadic interactions between a child and a clinician or adult social partner as part of a behavioral assessment. In this work, we use linguistic context information from the interaction to adapt speech recognition models for child speech. Specifically, the interacting adult's spoken language provides the context for the child's speech. We propose two methods to exploit this context: lexical repetitions and semantic response generation. For the latter,
we make use of sequence-to-sequence models that learn to predict the target child utterance given the context adult utterances. Long-term context is incorporated in the model by propagating the cell state across the duration of the conversation. We use interpolation techniques to adapt language models at the utterance level, and analyze the effect of the length and direction of context (forward and backward). Two different domains are used in our experiments to demonstrate the generalized nature of our methods: interactions between a child with ASD and an adult social partner in a play-based, naturalistic setting, and forensic interviews between a child and a trained interviewer. In both cases, context-adapted models yield significant improvement (up to 10.71% absolute word error rate reduction) over the baseline and perform consistently across context windows and directions. Using statistical analysis, we investigate the effect of source-based (adult) and target-based (child) factors on the adaptation methods. Our results demonstrate the applicability of our modeling approach in improving child speech recognition by employing information transfer from the adult interlocutor.
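The utterance-level language-model interpolation mentioned above can be sketched in miniature as follows. This is an illustrative assumption, not the paper's actual implementation: it uses toy unigram models and a hypothetical interpolation weight `lam` to show how an adult context utterance can reweight a child-speech language model before decoding.

```python
from collections import Counter

def unigram_lm(tokens):
    """Build a toy unigram language model (word -> probability)."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate_lms(base_lm, context_lm, lam):
    """Linear interpolation: p(w) = lam * p_context(w) + (1 - lam) * p_base(w)."""
    vocab = set(base_lm) | set(context_lm)
    return {w: lam * context_lm.get(w, 0.0) + (1 - lam) * base_lm.get(w, 0.0)
            for w in vocab}

# Hypothetical example: a generic child-speech LM is adapted with the
# adult's preceding utterance, raising the probability of "ball".
base = unigram_lm("the dog ran to the ball".split())
context = unigram_lm("what color is the ball".split())
adapted = interpolate_lms(base, context, lam=0.5)
```

In the paper's setting the context model would be estimated from (or generated in response to) the adult utterances in a chosen window and direction, and the interpolated model would rescore the child's ASR hypotheses; the toy models above only illustrate the interpolation step itself.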
Keywords
  • Child Speech
  • Automatic Speech Recognition
  • Autism Spectrum Disorder
  • Forensic Interviews
  • Child Abuse
  • Child Sexual Abuse
  • Child Testimony
Publication Date
April 21, 2020
DOI
10.1016/j.csl.2020.101101
Citation Information
Kumar, M., Kim, S. H., Lord, C., Lyon, T. D., & Narayanan, S. (2020). Leveraging Linguistic Context in Dyadic Interactions to Improve Automatic Speech Recognition for Children. Computer Speech & Language, 101101.