How do we get there? Evaluating transformer neural networks as cognitive models for English past tense inflection

ACL ARR November 2021 · Anonymous

Neural network models have achieved good performance on morphological inflection tasks, including English past tense inflection. However, whether they can represent human cognitive mechanisms is still under debate. In this work, we examined transformer models with different training sizes to show that: 1) neural models correlate with both human behavior and cognitive theories' predictions on nonce verbs, and the model trained on small-size data matching parents' input distribution shows the highest correlation; 2) neural models make different types of errors on regular and irregular verbs, exhibiting a clear distinction between regulars and irregulars. We therefore conclude that neural networks have the potential to be good cognitive models of the English past tense.
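The evaluation described above pairs model predictions on nonce (wug-style) verbs with human judgments. A minimal sketch of what such a correlation check could look like is shown below; the nonce verbs, ratings, and the `model_regular_prob` helper are hypothetical placeholders for illustration, not data or code from the paper.

```python
# Sketch: correlate a trained inflection model's preference for the regular
# (-ed) past form of nonce verbs with human acceptability ratings.
from scipy.stats import spearmanr

# Hypothetical nonce verbs with human ratings for the regular past form
# (e.g. on a 1-7 scale), in the style of wug-test studies.
human_ratings = {
    "wug": 6.5,
    "spling": 4.2,
    "gleed": 5.1,
}

def model_regular_prob(verb: str) -> float:
    """Placeholder: probability the trained transformer assigns to the
    regular past form (verb + 'ed') versus alternative outputs."""
    # A real evaluation would query the inflection model here.
    return {"wug": 0.92, "spling": 0.55, "gleed": 0.70}[verb]

verbs = sorted(human_ratings)
model_scores = [model_regular_prob(v) for v in verbs]
human_scores = [human_ratings[v] for v in verbs]

rho, p = spearmanr(model_scores, human_scores)
print(f"Spearman correlation between model and human preferences: {rho:.2f} (p={p:.3f})")
```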
