Article
Seq2Seq Models with Dropout can Learn Generalizable Reduplication
Proceedings of SIGMORPHON (2018)
  • Brandon Prickett
  • Aaron Traylor, Brown University
  • Joe Pater
Abstract
Natural language reduplication can pose a challenge to neural models of language and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016) and to produce results that correlate strongly with human behavior (Kirov, 2017; Kirov & Cotterell, 2018), but they do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
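As a rough illustration of the kind of setup the abstract describes, the sketch below shows a GRU-based sequence-to-sequence model with dropout trained on total reduplication (e.g. mapping "bad" to "badbad"). This is not the authors' implementation; the segment inventory, architecture details, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): a GRU encoder-decoder
# with dropout trained on total reduplication over a toy segment inventory.
import random
import torch
import torch.nn as nn

SEGS = list("bdgptkaiu")            # toy segment inventory (assumption)
PAD, SOS, EOS = 0, 1, 2
stoi = {s: i + 3 for i, s in enumerate(SEGS)}
vocab_size = len(SEGS) + 3

def make_pair():
    """Sample a random CCC/CVC-like base and its total reduplication."""
    base = "".join(random.choices(SEGS, k=3))
    enc = [stoi[c] for c in base]
    return enc, enc + enc           # input -> input doubled

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64, p_drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.drop = nn.Dropout(p_drop)   # dropout, per Srivastava et al. (2014)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        # Encode the base form, then decode with teacher forcing.
        _, h = self.enc(self.drop(self.emb(src)))
        dec_out, _ = self.dec(self.drop(self.emb(tgt_in)), h)
        return self.out(dec_out)

model = Seq2Seq(vocab_size)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    src, tgt = make_pair()
    src_t = torch.tensor([src])
    tgt_in = torch.tensor([[SOS] + tgt])      # decoder input, shifted right
    tgt_out = torch.tensor([tgt + [EOS]])     # decoder target
    logits = model(src_t, tgt_in)
    loss = loss_fn(logits.squeeze(0), tgt_out.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

To probe the kind of generalization the paper tests, one would hold one or more segments out of the training inventory and check whether the trained model correctly reduplicates novel forms containing them.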
Keywords
  • Neural networks
  • connectionism
  • reduplication
Publication Date
Fall 2018
Citation Information
Brandon Prickett, Aaron Traylor, and Joe Pater. "Seq2Seq Models with Dropout can Learn Generalizable Reduplication." Proceedings of SIGMORPHON (2018).
Available at: http://works.bepress.com/joe_pater/36/