# Word Embeddings

355 papers with code · Methodology

Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers.
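The mapping in this definition can be made concrete with a toy lookup table. The vocabulary and vectors below are hand-picked for illustration; in practice the vectors are learned, e.g. by word2vec or GloVe:

```python
import numpy as np

# Toy vocabulary and embedding table (hand-picked values, not learned).
vocab = {"king": 0, "queen": 1, "apple": 2}
embeddings = np.array([
    [0.90, 0.80, 0.10],  # king
    [0.85, 0.82, 0.12],  # queen
    [0.10, 0.05, 0.90],  # apple
])

def embed(word):
    """Map a word to its real-valued vector."""
    return embeddings[vocab[word]]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words end up close in the vector space.
print(cosine(embed("king"), embed("queen")))
print(cosine(embed("king"), embed("apple")))
```

The payoff of the representation is that similarity between words becomes ordinary vector geometry, as the two cosine scores show.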

No evaluation results yet. Help compare methods by submitting evaluation metrics.

# Adversarial Training Methods for Semi-Supervised Text Classification

25 May 2016 · tensorflow/models

Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting.
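The core move in adversarial training is to perturb the input (here, a word embedding) by a small step of size `eps` in the gradient direction of the loss. The sketch below uses a toy linear classifier with logistic loss, where the gradient has a closed form; the vectors and weights are made-up values:

```python
import numpy as np

def loss(x, w, y):
    # Logistic loss of a linear classifier on an embedding x.
    return np.log1p(np.exp(-y * (w @ x)))

def adversarial_perturbation(x, w, y, eps=0.1):
    # Closed-form gradient of the logistic loss w.r.t. the embedding:
    # dL/dx = -y * sigmoid(-y * w.x) * w
    g = -y * w / (1.0 + np.exp(y * (w @ x)))
    # Normalize to an L2 ball of radius eps, as in adversarial training.
    return eps * g / np.linalg.norm(g)

x = np.array([0.5, -0.2, 0.8])   # word embedding (toy values)
w = np.array([1.0, 0.3, -0.5])   # classifier weights (toy values)
y = 1.0                          # true label

r = adversarial_perturbation(x, w, y)
# Perturbing along the gradient increases the loss at the same input.
print(loss(x + r, w, y) > loss(x, w, y))
```

Training then minimizes the loss at `x + r` rather than `x`, which regularizes the classifier; virtual adversarial training applies the same idea using the model's own predictions instead of labels, so unlabeled text can be used.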

56,776

# FastText.zip: Compressing text classification models

We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory.
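The memory savings come largely from quantizing the embedding table. FastText.zip uses product quantization; the sketch below uses much simpler per-row int8 scalar quantization on a random table, just to show the memory arithmetic involved:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64)).astype(np.float32)  # full-precision table

# Per-row int8 quantization: store one int8 per weight plus one float
# scale per row (a stand-in for the paper's product quantization).
scale = np.abs(emb).max(axis=1, keepdims=True) / 127.0
q = np.round(emb / scale).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

approx = dequantize(q, scale)
print("compression ratio ~", emb.nbytes / (q.nbytes + scale.nbytes))
print("max abs error:", float(np.abs(emb - approx).max()))
```

Even this naive scheme shrinks the table by roughly 4x at small reconstruction error; the paper's product quantization plus vocabulary pruning pushes the ratio much further.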

19,350

# Enriching Word Vectors with Subword Information

A vector representation is associated to each character $n$-gram; words being represented as the sum of these representations.
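The lookup-and-sum mechanics can be sketched directly. The bucket count, dimensionality, and random vectors below are placeholders (in fastText the n-gram vectors are learned); the boundary markers and hashing-into-buckets scheme follow the paper:

```python
import zlib
import numpy as np

N_BUCKETS, DIM = 2000, 16
rng = np.random.default_rng(42)
# One vector per hashed n-gram bucket (random here, learned in fastText).
ngram_vectors = rng.normal(size=(N_BUCKETS, DIM))

def char_ngrams(word, n_min=3, n_max=6):
    w = f"<{word}>"  # boundary markers, as in the paper
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def word_vector(word):
    # A word's vector is the sum of its character n-gram vectors.
    idxs = [zlib.crc32(g.encode()) % N_BUCKETS for g in char_ngrams(word)]
    return ngram_vectors[idxs].sum(axis=0)

print(word_vector("where").shape)
print(word_vector("unseenword").shape)  # OOV words get a vector too
```

Because every word decomposes into character n-grams, out-of-vocabulary words still receive a representation, which is the main advantage over whole-word embeddings.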

19,350

# Contextual String Embeddings for Sequence Labeling

Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters.
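"Modeling language as distributions over characters" means predicting the next character given what came before. The paper does this with a character-level LSTM language model; the counting bigram model below is only a minimal stand-in to show what such a distribution looks like:

```python
from collections import Counter, defaultdict

# Character bigram counts over a tiny toy corpus.
corpus = "the cat sat on the mat"
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def p_next(ch):
    """Empirical distribution over the character following `ch`."""
    total = sum(counts[ch].values())
    return {c: n / total for c, n in counts[ch].items()}

print(p_next("t"))  # {'h': 0.5, ' ': 0.5}
```

In the paper, the internal states of the character LM that produces such distributions are extracted and used as contextual string embeddings for sequence labeling.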

7,002

# Named Entity Recognition with Bidirectional LSTM-CNNs

Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.

7,002

# Analogical Reasoning on Chinese Morphological and Semantic Relations

Analogical reasoning is effective in capturing linguistic regularities.
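The standard analogy test solves "a is to b as c is to ?" with vector arithmetic. The toy embeddings below are constructed with a "gender" axis and a "royalty" axis so the regularity holds exactly; real learned embeddings only approximate it:

```python
import numpy as np

# Hand-built toy embeddings: dims = (male, female, royalty).
vecs = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
    "apple": np.array([0.2, 0.2, 0.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ?  via  vec(b) - vec(a) + vec(c)."""
    target = vecs[b] - vecs[a] + vecs[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Exclude the query words, as standard analogy benchmarks do.
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(vecs[w], target))

print(analogy("man", "woman", "king"))  # queen
```

The cited work builds analogy benchmarks of exactly this form for Chinese morphological and semantic relations.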

5,563

# StarSpace: Embed All The Things!

We present StarSpace, a general-purpose neural embedding model that can solve a wide variety of problems.

4,768

# Clinical Concept Embeddings Learned from Massive Sources of Multimodal Medical Data

4 Apr 2018 · beamandrew/medical-data

Word embeddings are a popular approach to unsupervised learning of word relationships that are widely used in natural language processing.

3,684

# Application of a Hybrid Bi-LSTM-CRF model to the task of Russian Named Entity Recognition

27 Sep 2017 · deepmipt/DeepPavlov

Named Entity Recognition (NER) is one of the most common tasks in natural language processing.
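The CRF half of a Bi-LSTM-CRF decodes the best tag sequence with the Viterbi algorithm over per-token scores plus tag-transition scores. The sketch below uses made-up scores for three tags (O, B-PER, I-PER); in the model the emissions would come from the Bi-LSTM and the transitions would be learned:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best-scoring tag sequence under a linear-chain CRF.

    emissions: (T, K) per-token tag scores.
    transitions: (K, K) score for moving from tag i to tag j.
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t]  # (K, K)
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tags: 0 = O, 1 = B-PER, 2 = I-PER. Transitions forbid O -> I-PER and
# reward continuing an entity, so decoding prefers a consistent span
# even where greedy per-token argmax would pick O.
transitions = np.array([[0.0, 0.0, -10.0],
                        [0.0, 0.0,   2.0],
                        [0.0, 0.0,   2.0]])
emissions = np.array([[0.0, 5.0, 0.0],
                      [1.0, 0.0, 0.5],
                      [1.0, 0.0, 0.5]])
print(viterbi(emissions, transitions))  # [1, 2, 2]
```

Greedy argmax over the emissions alone would tag the last two tokens O; the transition scores are what let the CRF enforce label consistency, which is the point of the hybrid architecture.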

3,371

# Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec

6 May 2016 · cemoody/lda2vec

Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations over documents.
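lda2vec's mixing step represents each document as a softmax-weighted sum of topic vectors living in the same space as the word vectors. The topic vectors and document logits below are random placeholders for learned parameters; only the mixture construction follows the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

N_TOPICS, DIM = 4, 8
rng = np.random.default_rng(1)
topic_vectors = rng.normal(size=(N_TOPICS, DIM))  # interpretable topics
doc_logits = rng.normal(size=N_TOPICS)            # per-document weights

# Document vector = mixture of topic vectors; the softmax proportions
# sum to 1 and play the role of LDA's topic proportions.
proportions = softmax(doc_logits)
doc_vector = proportions @ topic_vectors

# As in skip-gram, a context combines word and document vectors to
# predict nearby words.
word_vector = rng.normal(size=DIM)
context_vector = word_vector + doc_vector
print(proportions.sum())  # 1.0
```

Sparsifying the proportions (the paper uses a Dirichlet-style prior) is what keeps the document side interpretable while the word side retains the dense-vector regularities.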

2,532