Word Sense Disambiguation
143 papers with code • 15 benchmarks • 15 datasets
The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English in WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
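The classic baseline for this kind of gloss matching is the Lesk heuristic: pick the sense whose dictionary definition shares the most words with the context. Below is a minimal, self-contained sketch of that idea over a tiny hand-written inventory; the sense keys and glosses are illustrative stand-ins, not the actual WordNet entries.

```python
# Toy sense inventory: sense id -> gloss (illustrative, not real WordNet glosses).
SENSES = {
    "mouse%rodent": "any of numerous small rodents with a long tail",
    "mouse%device": "a hand-operated electronic device with buttons, held in one hand, "
                    "that controls a cursor on a computer screen",
}

STOPWORDS = {"a", "an", "the", "of", "with", "in", "on", "that", "one", "or", "and"}

def tokens(text):
    """Lowercase word tokens, punctuation stripped, stopwords removed."""
    return {w.strip(".,'\"()") for w in text.lower().split()} - STOPWORDS

def lesk(context, senses=SENSES):
    """Pick the sense whose gloss shares the most content words with the context."""
    ctx = tokens(context)
    return max(senses, key=lambda s: len(ctx & tokens(senses[s])))

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(lesk(sentence))  # -> "mouse%device": its gloss shares "hand", "held", "buttons"
```

In practice one would use a real inventory (e.g. WordNet via NLTK's `nltk.wsd.lesk`) rather than this toy dictionary, but the overlap-counting mechanism is the same.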
Latest papers with no code
Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
This technical report briefly describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models
In this paper, we develop a novel semi-parametric language model architecture, Knowledge-in-Context (KiC), which empowers a parametric text-to-text language model with a knowledge-rich external memory.
On the Curious Case of $\ell_2$ norm of Sense Embeddings
We show that the $\ell_2$ norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings.
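As a quick illustration of the quantity studied above, the $\ell_2$ norm of a sense embedding is simply the Euclidean length of its vector; the vectors below are made up for illustration, not taken from any trained model.

```python
import numpy as np

# Toy static sense embeddings (values are illustrative only).
sense_vectors = {
    "mouse%rodent": np.array([0.9, -1.2, 0.4]),
    "mouse%device": np.array([0.1, 0.3, -0.2]),
}

# l2 norm of each sense vector; the paper relates this length to how
# frequently the sense occurred in the training corpus.
norms = {sense: float(np.linalg.norm(v)) for sense, v in sense_vectors.items()}
print(norms)
```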
Temporal Word Meaning Disambiguation using TimeLMs
This paper is an effort in this direction, where we explore methods for word sense disambiguation for the EvoNLP shared task.
Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
Part-of-Speech Tagging of Odia Language Using Statistical and Deep Learning-Based Approaches
The deep learning-based model includes a Bi-LSTM network, a CNN network, a CRF layer, character-sequence information, and pre-trained word vectors.
Rationale-Augmented Ensembles in Language Models
Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks.
ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD
First, we constructed a dataset of labeled Arabic context-gloss pairs (~167k pairs) that we extracted from the Arabic Ontology and the large lexicographic database available at Birzeit University.
Topological Data Analysis for Word Sense Disambiguation
We develop and test a novel unsupervised algorithm for word sense induction and disambiguation which uses topological data analysis.
Document Classification with Word Sense Knowledge
The performance of Word Sense Disambiguation (WSD) on a standard evaluation framework has reached an estimated upper bound.