Unpublished Paper
Learning Reduplication with a Neural Network that Lacks Explicit Variables
(2021)
  • Brandon Prickett
  • Aaron Traylor, Brown University
  • Joe Pater
Abstract
Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.'s (2007) partial replication of one of those experiments. We then explore the model's ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent's (2013) scopes of generalization as a metric, we find that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
Keywords
  • Neural networks
  • Reduplication
Publication Date
March 29, 2021
Citation Information
Brandon Prickett, Aaron Traylor, and Joe Pater. "Learning Reduplication with a Neural Network that Lacks Explicit Variables" (2021).
Available at: http://works.bepress.com/joe_pater/38/