We present a novel neural network model that learns POS tagging and graph-based dependency parsing jointly.
Specifically, our model is a three-layer neural network that learns to encode the nonpivot features of an input example into a low-dimensional representation, such that the presence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation.
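The encode-then-decode idea above can be sketched as a tiny forward pass: nonpivot features go through a hidden layer into a low-dimensional representation, from which pivot-feature presence is predicted with multi-label sigmoid outputs. All dimensions and the weight initialization here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
n_nonpivot, n_pivot, hidden = 200, 50, 16

# Three-layer network: nonpivot input -> low-dim encoding -> pivot predictions.
W_enc = rng.normal(scale=0.1, size=(n_nonpivot, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, n_pivot))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x_nonpivot):
    """Encode the nonpivot features, then predict which pivot
    features are present (multi-label sigmoid outputs)."""
    h = np.tanh(x_nonpivot @ W_enc)   # low-dimensional representation
    return sigmoid(h @ W_dec), h      # pivot probabilities, encoding

# One toy example: a sparse binary bag of nonpivot features.
x = (rng.random(n_nonpivot) < 0.05).astype(float)
p_pivot, h = forward(x)
print(p_pivot.shape, h.shape)
```

In training, the decoder's cross-entropy loss on the pivot indicators would drive the encoding to retain domain-transferable information.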
This paper explores a divisive hierarchical clustering algorithm based on the well-known Obligatory Contour Principle in phonology.
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech.
In the all-treebanks category we ranked 16th in LAS and 12th in UAS.
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot.
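The reduction above can be sketched as follows: each relation slot is mapped to one or more question templates, the templates are instantiated with the subject entity, and any span-extraction QA model answers them against the passage. The templates and the toy keyword-matching "model" below are hypothetical stand-ins, not the paper's crowd-sourced questions or trained reader.

```python
# Hypothetical question templates for two relation slots (illustrative only).
TEMPLATES = {
    "educated_at": ["Where did {subject} study?"],
    "occupation": ["What is {subject}'s job?"],
}

def relation_to_questions(relation, subject):
    """Turn a relation slot into natural-language reading-comprehension
    questions about the subject entity."""
    return [t.format(subject=subject) for t in TEMPLATES[relation]]

def extract(passage, relation, subject, qa_model):
    """Relation extraction reduced to QA: ask each question against the
    passage; a non-null answer fills the relation slot."""
    for q in relation_to_questions(relation, subject):
        answer = qa_model(passage, q)  # any span-extraction QA model
        if answer is not None:
            return answer
    return None  # abstain: relation not expressed in the passage

# Toy QA "model": naive keyword lookup, standing in for a trained reader.
def toy_qa(passage, question):
    if "study" in question and "studied at" in passage:
        return passage.split("studied at ")[1].rstrip(".")
    return None

print(extract("Ada Lovelace studied at home with tutors.",
              "educated_at", "Ada Lovelace", toy_qa))
```

The abstain path is what lets the same machinery decide that a relation simply is not stated in the text, rather than forcing an answer.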
However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset.