# Word Similarity

100 papers with code • 0 benchmarks • 1 dataset

Calculate a numerical score for the semantic similarity between two words.
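With distributed representations, the standard way to score word similarity is the cosine of the angle between the two word vectors. A minimal sketch with NumPy; the three-dimensional vectors are made up for illustration, not taken from any trained model:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two word vectors: 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings (hypothetical values, for illustration only).
vec_cat = np.array([0.9, 0.1, 0.3])
vec_dog = np.array([0.8, 0.2, 0.35])
vec_car = np.array([0.1, 0.9, 0.0])

print(cosine_similarity(vec_cat, vec_dog))  # high: related words
print(cosine_similarity(vec_cat, vec_car))  # lower: unrelated words
```

Real systems compute the same score over vectors from a trained embedding model rather than hand-written arrays.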


# Efficient Estimation of Word Representations in Vector Space

16 Jan 2013

We propose two novel model architectures for computing continuous vector representations of words from very large data sets.


# Enriching Word Vectors with Subword Information

A vector representation is associated to each character $n$-gram; words being represented as the sum of these representations.
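The subword idea can be sketched in a few lines: enumerate a word's character n-grams (with `<` and `>` boundary markers) and sum their vectors. The n-gram embedding table below is random and purely illustrative, and the whole-word vector that fastText also adds is omitted for brevity:

```python
import numpy as np

def char_ngrams(word: str, n_min: int = 3, n_max: int = 6) -> list:
    """Character n-grams of a word with '<'/'>' boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1) for i in range(len(w) - n + 1)]

rng = np.random.default_rng(0)
dim = 8
ngram_vectors = {}  # hypothetical n-gram embedding table

def word_vector(word: str) -> np.ndarray:
    """Word vector = sum of the vectors of its character n-grams."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g not in ngram_vectors:
            ngram_vectors[g] = rng.standard_normal(dim)
        vec += ngram_vectors[g]
    return vec

print(char_ngrams("where", 3, 3))  # ['<wh', 'whe', 'her', 'ere', 're>']
```

Because the representation is built from shared n-grams, morphologically related and even unseen words receive sensible vectors.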


# Calculating the similarity between words and sentences using a lexical database and corpus statistics

To calculate the semantic similarity between words and sentences, the proposed method follows an edge-based approach using a lexical database.
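An edge-based measure of this kind can be illustrated on a toy is-a taxonomy standing in for a lexical database such as WordNet; the edges below are invented for illustration. Similarity is scored from the shortest path between two words through a common ancestor:

```python
# Toy is-a taxonomy (hypothetical edges, for illustration only).
parent = {
    "dog": "canine", "cat": "feline",
    "canine": "carnivore", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "entity",
    "car": "vehicle", "vehicle": "artifact", "artifact": "entity",
}

def ancestors(word):
    """Chain from a word up to the taxonomy root, including the word itself."""
    chain = [word]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def path_similarity(w1: str, w2: str) -> float:
    """Edge-based similarity: 1 / (1 + shortest path length through the taxonomy)."""
    a1, a2 = ancestors(w1), ancestors(w2)
    common = set(a1) & set(a2)
    # Distance = edges from w1 up to a common ancestor plus edges from w2 up to it.
    dist = min(a1.index(c) + a2.index(c) for c in common)
    return 1.0 / (1.0 + dist)

print(path_similarity("dog", "cat"))  # shorter path than dog-car, so a higher score
```

With a real lexical database the same idea applies, typically refined with edge weights or node depth as in the paper's method.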


# All-but-the-Top: Simple and Effective Postprocessing for Word Representations

The postprocessing is empirically validated on a variety of lexical-level intrinsic tasks (word similarity, concept categorization, word analogy) and sentence-level tasks (semantic textual similarity and text classification) on multiple datasets and with a variety of representation methods and hyperparameter choices in multiple languages; in each case, the processed representations are consistently better than the original ones.
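The postprocessing itself is short: subtract the mean embedding, then project out the top few principal components, which tend to encode a dominant common direction rather than lexical meaning. A sketch with NumPy; the embedding matrix here is random, for illustration only:

```python
import numpy as np

def all_but_the_top(X: np.ndarray, d: int) -> np.ndarray:
    """Subtract the mean embedding, then remove the top-d principal components."""
    Xc = X - X.mean(axis=0)
    # Top principal directions via SVD of the centered embedding matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U = Vt[:d]                       # (d, dim) dominant directions
    return Xc - Xc @ U.T @ U         # project those directions out

# Hypothetical embedding matrix: 100 words, 16 dimensions, with a strong
# common component added so the postprocessing has something to remove.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16)) + 5.0
X_post = all_but_the_top(X, d=2)
```

The paper suggests removing on the order of dim/100 components for typical embedding dimensionalities.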


# How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks

7 Feb 2017

Maybe the single most important goal of representation learning is making subsequent learning faster.


# Unsupervised Multilingual Word Embeddings

Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space.


# WordRank: Learning Word Embeddings via Robust Ranking

Then, based on this insight, we propose a novel framework WordRank that efficiently estimates word representations via robust ranking, in which the attention mechanism and robustness to noise are readily achieved via the DCG-like ranking losses.


# Definition Modeling: Learning to define word embeddings in natural language

1 Dec 2016

Distributed representations of words have been shown to capture lexical semantics, as demonstrated by their effectiveness in word similarity and analogical relation tasks.
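The analogical relation task mentioned here is usually evaluated with vector arithmetic: for an analogy "a is to b as c is to ?", the vector b - a + c should be closest to the fourth word's vector. A toy sketch with invented embeddings:

```python
import numpy as np

# Hypothetical embeddings constructed so the classic analogy holds.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.9, 0.9]),
    "car":   np.array([0.1, 0.2, 0.0]),
}

def analogy(a: str, b: str, c: str) -> str:
    """Return the word (excluding the inputs) most cosine-similar to b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman"))  # → queen (by construction here)
```

Word similarity benchmarks instead compare cosine scores against human similarity judgments, typically via rank correlation.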


# Construction of a Japanese Word Similarity Dataset

An evaluation of distributed word representation is generally conducted using a word similarity task and/or a word analogy task.


# ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge

This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet.
