Word Sense Disambiguation

143 papers with code • 15 benchmarks • 15 datasets

The task of Word Sense Disambiguation (WSD) consists of associating a word in context with its most suitable entry in a predefined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:

“A mouse consists of an object held in one's hand, with one or more buttons.”

we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
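The gloss-overlap idea behind many WSD baselines can be sketched with a simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the target word's context. The two-entry inventory below is a toy stand-in with paraphrased glosses, not the real WordNet data (in practice NLTK's WordNet interface would supply the synsets and definitions):

```python
# A minimal sketch of gloss-overlap WSD in the spirit of simplified Lesk.
# The toy inventory below is a hypothetical stand-in for WordNet.
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def lesk(word, context, inventory):
    """Return the sense whose gloss shares the most words with the context."""
    ctx = tokens(context)
    return max(inventory[word],
               key=lambda sense: len(ctx & tokens(inventory[word][sense])))

INVENTORY = {
    "mouse": {
        "mouse.n.01": "any of numerous small rodents with a long tail",
        "mouse.n.04": "a hand-operated electronic device held in one hand, "
                      "with one or more buttons, used to control a cursor",
    }
}

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(lesk("mouse", sentence, INVENTORY))  # -> mouse.n.04
```

The electronic-device gloss wins because it shares words like "held", "hand", and "buttons" with the context, while the rodent gloss overlaps only on function words.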

Latest papers with no code

Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE

no code yet • 4 Dec 2022

This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.

Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models

no code yet • 28 Oct 2022

In this paper, we develop a novel semi-parametric language model architecture, Knowledge-in-Context (KiC), which empowers a parametric text-to-text language model with a knowledge-rich external memory.

On the Curious Case of $\ell_2$ norm of Sense Embeddings

no code yet • 26 Oct 2022

We show that the $\ell_2$ norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings.

Temporal Word Meaning Disambiguation using TimeLMs

no code yet • 15 Oct 2022

This paper explores methods for temporal word meaning disambiguation for the EvoNLP shared task.

Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings

no code yet • 20 Aug 2022

We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.

Part-of-Speech Tagging of Odia Language Using Statistical and Deep Learning-Based Approaches

no code yet • 7 Jul 2022

The deep learning-based model combines a Bi-LSTM network, a CNN, a CRF layer, character-level sequence information, and pre-trained word vectors.

Rationale-Augmented Ensembles in Language Models

no code yet • 2 Jul 2022

Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks.

ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD

no code yet • RANLP 2021

First, we constructed a dataset of labeled Arabic context-gloss pairs (~167k pairs) that we extracted from the Arabic Ontology and the large lexicographic database available at Birzeit University.

Topological Data Analysis for Word Sense Disambiguation

no code yet • 1 Mar 2022

We develop and test a novel unsupervised algorithm for word sense induction and disambiguation which uses topological data analysis.

Document Classification with Word Sense Knowledge

no code yet • ACL ARR January 2022

The performance of Word Sense Disambiguation (WSD) on a standard evaluation framework has reached an estimated upper bound.