Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
(Image credit: Dynamic Word Embedding for Evolving Semantic Discovery)
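As a minimal illustration of this word-to-vector mapping, the sketch below uses a toy vocabulary and random vectors as placeholders; real embeddings would be learned with a method such as word2vec or GloVe.

```python
import numpy as np

# Toy word -> vector lookup table: one row of E per vocabulary word.
# The vocabulary and the random matrix are illustrative placeholders.
vocab = {"cat": 0, "dog": 1, "car": 2}
E = np.random.default_rng(0).normal(size=(len(vocab), 8))

def embed(word: str) -> np.ndarray:
    return E[vocab[word]]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    # Once words are vectors, similarity reduces to geometry.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embed("cat"), embed("dog")))
```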
Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting.
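A minimal sketch of the adversarial-training idea applied to word embeddings, in PyTorch: since the tokens are discrete, the perturbation is applied to the embedded inputs instead. The toy model, sizes, and epsilon value are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

# Toy sentiment-style classifier over word embeddings.
emb = nn.Embedding(100, 16)
clf = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 100, (8, 5))   # batch of 8 sequences of length 5
labels = torch.randint(0, 2, (8,))

# Forward pass, keeping the embedded inputs as a leaf tensor so gradients
# are taken with respect to the embeddings rather than the discrete tokens.
v = emb(tokens).detach().requires_grad_(True)
loss = loss_fn(clf(v.mean(dim=1)), labels)
loss.backward()

# FGSM-style perturbation, L2-normalized per example: move each input a
# small step in the direction that most increases the loss.
eps = 0.5
g = v.grad
r_adv = eps * g / (g.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)

# Adversarial loss on the perturbed embeddings; training would minimize a
# combination of the clean and adversarial losses.
adv_loss = loss_fn(clf((v.detach() + r_adv).mean(dim=1)), labels)
```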
Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
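For concreteness, a character-level model of this kind can be sketched as follows (PyTorch; the ASCII vocabulary and layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    """Minimal character-level language model: outputs a distribution
    over the next character at every position in the sequence."""
    def __init__(self, vocab_size=128, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, chars):              # chars: (batch, seq_len) int64
        h, _ = self.rnn(self.embed(chars))
        return self.out(h)                 # logits over the next character

model = CharLM()
text = torch.tensor([[ord(c) for c in "hello worl"]])
probs = torch.softmax(model(text)[0, -1], dim=-1)  # P(next char | "hello worl")
```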
Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.
Our approach decomposes learning the transformation from the source language to the target language into (a) learning rotations for language-specific embeddings to align them to a common space, and (b) learning a similarity metric in the common space to model similarities between the embeddings.
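Step (a) can be sketched with the closed-form orthogonal Procrustes solution, a standard way to learn such rotations (the paper's own procedure may differ). The random matrices stand in for embedding pairs from a seed dictionary, and plain cosine similarity stands in for the learned metric of step (b).

```python
import numpy as np

# Hypothetical stand-ins: rows of X are source-language vectors, rows of Y
# the target-language vectors for the same seed-dictionary word pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
Y = rng.normal(size=(1000, 300))

# Step (a): the rotation W minimizing ||XW - Y||_F over orthogonal matrices
# has the closed-form solution W = U V^T, where X^T Y = U S V^T.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
X_aligned = X @ W   # source embeddings rotated into the common space

# Step (b): score pairs in the common space; cosine similarity is a simple
# stand-in for a learned similarity metric.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(X_aligned[0], Y[0]))
```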
Analogical reasoning is effective at capturing linguistic regularities: relations between words are often reflected in the offsets between their vectors, the classic example being vec(king) - vec(man) + vec(woman) ≈ vec(queen).
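A minimal sketch of solving analogies by vector offset; the toy vocabulary and random vectors are placeholders, and real pretrained embeddings would be needed for the analogy to actually resolve to "queen".

```python
import numpy as np

# Toy embedding table; in practice these rows come from a trained model.
vocab = {"king": 0, "queen": 1, "man": 2, "woman": 3}
E = np.random.default_rng(1).normal(size=(len(vocab), 50))

def analogy(a, b, c):
    """Solve a : b :: c : ? by vector offset, e.g. king - man + woman."""
    target = E[vocab[a]] - E[vocab[b]] + E[vocab[c]]
    sims = E @ target / (np.linalg.norm(E, axis=1) * np.linalg.norm(target))
    for w in (a, b, c):          # exclude the query words themselves
        sims[vocab[w]] = -np.inf
    return list(vocab)[int(np.argmax(sims))]

print(analogy("king", "man", "woman"))  # ideally "queen" with real embeddings
```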
Named Entity Recognition (NER) is one of the most common tasks in natural language processing.
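As a quick illustration of the task, an off-the-shelf tagger can be run in a few lines; the sketch below assumes spaCy with its small English model installed via `python -m spacy download en_core_web_sm`.

```python
import spacy

# Load a pretrained pipeline and extract the entities it detects.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Berlin in 2023.")
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. ("Apple", "ORG"), ("Berlin", "GPE")
```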