A Challenge Set and Methods for Noun-Verb Ambiguity

English part-of-speech taggers regularly make egregious errors related to noun-verb ambiguity, despite having achieved 97%+ accuracy on the WSJ Penn Treebank since 2002. These mistakes have been difficult to quantify and make taggers less useful to downstream tasks such as translation and text-to-speech synthesis. This paper creates a new dataset of over 30,000 naturally occurring, non-trivial examples of noun-verb ambiguity. Taggers within 1% of each other when measured on the WSJ have accuracies ranging from 57% to 75% on this challenge set. Enhancing the strongest existing tagger with contextual word embeddings and targeted training data improves its accuracy to 89%, a 14% absolute (52% relative) improvement. Downstream, using just this enhanced tagger yields a 28% reduction in error over the prior best learned model for homograph disambiguation in text-to-speech synthesis.
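The abstract describes the method only at a high level (contextual word embeddings plus targeted training data). As a rough illustration of the general contextual-embedding idea, and not the paper's actual tagger, training data, or decision rule, the sketch below extracts a contextual vector for an ambiguous homograph with a generic pretrained BERT encoder and assigns noun or verb by nearest prototype. The model name, prototype sentences, and nearest-prototype rule are all assumptions made for illustration.

```python
# Illustrative sketch only: a generic pretrained BERT encoder plus a
# nearest-prototype rule to guess whether a homograph (e.g. "record") is a
# noun or a verb in context. This is NOT the paper's tagger; model name,
# prototype sentences, and decision rule are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(words, target_index):
    """Contextual embedding of words[target_index] (mean over its subword pieces)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (num_pieces, dim)
    piece_ids = enc.word_ids(0)                      # maps subword pieces to word indices
    rows = [i for i, w in enumerate(piece_ids) if w == target_index]
    return hidden[rows].mean(dim=0)

# Hand-picked prototype usages of "record" with known part of speech (assumptions).
prototypes = {
    "NOUN": ("They broke the world record yesterday .".split(), 4),
    "VERB": ("Please record the meeting for me .".split(), 1),
}
proto_vecs = {pos: word_vector(words, idx) for pos, (words, idx) in prototypes.items()}

def guess_pos(words, target_index):
    """Pick the part of speech whose prototype vector is closest by cosine similarity."""
    vec = word_vector(words, target_index)
    sims = {pos: torch.cosine_similarity(vec, pv, dim=0).item()
            for pos, pv in proto_vecs.items()}
    return max(sims, key=sims.get)

if __name__ == "__main__":
    sentence = "The band will record a new album next month .".split()
    print(guess_pos(sentence, sentence.index("record")))   # expected: VERB
```

A usage note: in practice one would train a classifier on labeled data (as the paper does with targeted training examples) rather than rely on two hand-picked prototypes; the sketch only shows how a contextual vector for the ambiguous token can be obtained and compared.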
