Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
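For concreteness, a minimal sketch of such a mapping, using a toy vocabulary and randomly initialized vectors; real embeddings are learned from data, and every name below is illustrative:

```python
# Word-to-vector lookup: each word maps to a row of an embedding matrix.
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}   # toy vocabulary (hypothetical)
dim = 4                                   # embedding dimensionality
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), dim))  # learned in practice

def embed(word: str) -> np.ndarray:
    """Map a word to its real-valued vector via table lookup."""
    return embedding_matrix[vocab[word]]

print(embed("cat"))  # a 4-dimensional real vector representing "cat"
```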
Adversarial training provides a means of regularizing supervised learning algorithms, while virtual adversarial training extends them to the semi-supervised setting.
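A minimal sketch of the adversarial perturbation underlying this idea, assuming the gradient of the loss with respect to the word embeddings has already been computed by some framework; the epsilon value is an illustrative placeholder:

```python
# Adversarial training perturbs the input (here, embeddings) in the
# direction that most increases the loss, then trains on that input.
import numpy as np

def adversarial_perturbation(grad: np.ndarray, epsilon: float = 0.02) -> np.ndarray:
    """L2-normalized gradient direction, scaled to a small norm epsilon."""
    return epsilon * grad / (np.linalg.norm(grad) + 1e-12)

g = np.array([0.3, -0.1, 0.4])  # stand-in for a real loss gradient
print(adversarial_perturbation(g))

# Training then adds this perturbation to the embeddings and minimizes
# the loss on the perturbed input as a regularizer.
```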
Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
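To make "distributions over characters" concrete, a minimal count-based character bigram model on a toy string; a recurrent network replaces these counts with a learned hidden state:

```python
# Estimate P(next character | previous character) from bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat"   # toy corpus (hypothetical)
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_char_distribution(prev: str) -> dict:
    """Probability of each next character given the previous one."""
    total = sum(counts[prev].values())
    return {c: n / total for c, n in counts[prev].items()}

print(next_char_distribution("t"))  # e.g. {'h': 0.5, ' ': 0.5}
```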
Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.
Analogical reasoning over word vectors is effective at capturing linguistic regularities; the classic example is vector("king") - vector("man") + vector("woman") ≈ vector("queen").
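A minimal sketch of analogy by vector arithmetic; the random vectors here are hypothetical stand-ins for trained embeddings:

```python
# Solve "king is to man as ? is to woman" by nearest cosine neighbor.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
words = ["king", "man", "woman", "queen", "apple"]
vectors = {w: rng.normal(size=8) for w in words}  # placeholders

query = vectors["king"] - vectors["man"] + vectors["woman"]
candidates = [w for w in vectors if w not in {"king", "man", "woman"}]
best = max(candidates, key=lambda w: cosine(query, vectors[w]))
print(best)  # with trained embeddings this is typically "queen"
```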
Word embeddings are a popular approach to unsupervised learning of word relationships and are widely used in natural language processing.
Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations of documents.
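A minimal sketch of combining the two views, in the spirit of hybrid models such as lda2vec: a dense word vector plus a document vector expressed as an interpretable mixture over topic vectors. All arrays are random placeholders:

```python
# Context = token-level word vector + document-level topic mixture.
import numpy as np

rng = np.random.default_rng(0)
dim, n_topics = 8, 3
word_vec = rng.normal(size=dim)                 # dense token-level vector
topic_vecs = rng.normal(size=(n_topics, dim))   # one vector per topic
doc_topic_weights = np.array([0.7, 0.2, 0.1])   # interpretable mixture

doc_vec = doc_topic_weights @ topic_vecs        # document as topic mixture
context_vec = word_vec + doc_vec                # joint context representation
print(context_vec)
```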