Learning Word Embeddings

21 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

WordRank: Learning Word Embeddings via Robust Ranking

shihaoji/wordrank EMNLP 2016

The paper observes that word embedding can be cast as a ranking problem. Based on this insight, it proposes WordRank, a framework that efficiently estimates word representations via robust ranking, in which an attention mechanism and robustness to noise are achieved via DCG-like ranking losses.
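As a generic illustration of DCG-style rank weighting (a sketch only; the function names and exact forms below are ours, not WordRank's objective), a monotone, slowly growing loss over the rank of the true context word discounts high ranks and limits the influence of noisy pairs:

```python
import math

def dcg_weight(rank):
    """DCG-style weight: contexts at better (lower) ranks get more credit."""
    return 1.0 / math.log2(rank + 1)

def ranking_loss(rank_of_true_context):
    # A slowly growing loss rho(x) = log2(1 + x) applied to the rank of the
    # true context word; its flatness at large ranks gives robustness to noise.
    return math.log2(1.0 + rank_of_true_context)
```

Because `ranking_loss` grows only logarithmically, a badly ranked noisy pair contributes little extra gradient compared to a linear loss.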

Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech

my-yy/s2v_rc 23 Mar 2018

This paper proposes Speech2Vec, a deep neural network architecture for learning fixed-length vector representations of audio segments excised from a speech corpus. The vectors carry semantic information about the underlying spoken words: segments whose words are semantically similar end up close together in the embedding space.

Skip-gram word embeddings in hyperbolic space

lateral/minkowski 30 Aug 2018

Recent work has demonstrated that embeddings of tree-like graphs in hyperbolic space surpass their Euclidean counterparts in performance by a large margin.
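The repo name suggests the hyperboloid (Minkowski) model of hyperbolic space; a minimal sketch of its geodesic distance, assuming points are lifted from Euclidean coordinates onto the hyperboloid (helper names are ours):

```python
import math

def minkowski_dot(x, y):
    """Lorentzian inner product: -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def lift(v):
    """Lift a Euclidean point v onto the hyperboloid <x, x>_L = -1."""
    x0 = math.sqrt(1.0 + sum(c * c for c in v))
    return (x0, *v)

def hyperbolic_distance(x, y):
    """Geodesic distance on the hyperboloid: arcosh(-<x, y>_L)."""
    # max(1.0, ...) guards against arcosh's domain under rounding error
    return math.acosh(max(1.0, -minkowski_dot(x, y)))
```

Distances grow roughly exponentially with Euclidean radius, which is what lets hyperbolic embeddings fit tree-like graphs so compactly.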

Neural Graph Embedding Methods for Natural Language Processing

malllabiisc/ConfGCN 8 Nov 2019

Knowledge graphs are structured representations of facts in a graph, where nodes represent entities and edges represent relationships between them.
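The structure described above can be sketched as a set of (head, relation, tail) triples, i.e. labelled edges between entity nodes (an illustrative toy representation, not the paper's code):

```python
# A tiny knowledge graph: nodes are entities, labelled edges are relations.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def neighbors(kg, entity):
    """(relation, tail) pairs reachable from `entity` over one labelled edge."""
    return {(r, t) for (h, r, t) in kg if h == entity}
```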

One Embedder, Any Task: Instruction-Finetuned Text Embeddings

HKUNLP/instructor-embedding 19 Dec 2022

Our analysis suggests that INSTRUCTOR is robust to changes in instructions, and that instruction finetuning mitigates the challenge of training a single model on diverse datasets.

The Mixing method: low-rank coordinate descent for semidefinite programming with diagonal constraints

locuslab/mixing 1 Jun 2017

In this paper, we propose a low-rank coordinate descent approach to structured semidefinite programming with diagonal constraints.
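A minimal sketch of the idea, assuming the standard low-rank factorization X = VᵀV with unit-norm columns for min ⟨C, X⟩ s.t. diag(X) = 1, X ⪰ 0; each coordinate update replaces a column with the normalized negative gradient (a sketch of the update rule, not the authors' implementation):

```python
import numpy as np

def mixing_method(C, k, iters=100, seed=0):
    """Low-rank coordinate descent for the diagonally constrained SDP
    min <C, X>, diag(X) = 1, X PSD, via X = V^T V with unit columns."""
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(k, n))
    V /= np.linalg.norm(V, axis=0)              # put each column on the sphere
    for _ in range(iters):
        for i in range(n):
            g = V @ C[:, i] - C[i, i] * V[:, i]  # sum_{j != i} C_ij v_j
            norm = np.linalg.norm(g)
            if norm > 0:
                V[:, i] = -g / norm              # exact coordinate minimizer
    return V
```

For a MAXCUT-style cost matrix the objective ⟨C, VᵀV⟩ decreases monotonically while diag(VᵀV) = 1 holds by construction.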

Dict2vec: Learning Word Embeddings using Lexical Dictionaries

tca19/dict2vec EMNLP 2017

Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks.
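The training signal behind such embeddings comes from (target, context) pairs drawn from a sliding window over the corpus; a minimal sketch of the pair extraction step common to skip-gram-style models (illustrative, not Dict2vec's code):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) training pairs from a token sequence,
    pairing each word with its neighbors inside a symmetric window."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs
```

Dict2vec's contribution is to supplement these corpus-derived pairs with pairs built from dictionary definitions.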

Grammatical Error Detection Using Error- and Grammaticality-Specific Word Embeddings

kanekomasahiro/grammatical-error-detection IJCNLP 2017

In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns.

MIPA: Mutual Information Based Paraphrase Acquisition via Bilingual Pivoting

tmu-nlp/pmi-ppdb IJCNLP 2017

We present a pointwise mutual information (PMI)-based approach to formalizing paraphrasability and propose MIPA, a variant of PMI, for paraphrase acquisition.
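The standard PMI score underlying the approach can be computed from co-occurrence counts (a sketch of plain PMI only; MIPA's variant is defined in the paper):

```python
import math
from collections import Counter

def pmi(pair_counts, x_counts, y_counts, x, y):
    """Pointwise mutual information: log2 of p(x, y) / (p(x) * p(y))."""
    p_xy = pair_counts[(x, y)] / sum(pair_counts.values())
    p_x = x_counts[x] / sum(x_counts.values())
    p_y = y_counts[y] / sum(y_counts.values())
    return math.log2(p_xy / (p_x * p_y))
```

A positive PMI means the pair co-occurs more often than independence would predict, which is the signal exploited for paraphrasability.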