Word Sense Disambiguation
142 papers with code • 15 benchmarks • 15 datasets
The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” with its electronic device sense (the 4th sense in the WordNet sense inventory).
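A classic baseline for this sense-assignment step is the Lesk algorithm, which picks the sense whose dictionary gloss shares the most words with the context. The sketch below illustrates the idea with a tiny hypothetical sense inventory standing in for WordNet (the sense keys and glosses are invented for illustration):

```python
import re

# Toy stand-in for a WordNet-style sense inventory (hypothetical glosses).
SENSES = {
    "mouse": {
        "mouse.n.01": "small rodent with a pointed snout and a long thin tail",
        "mouse.n.04": "hand-held electronic device with buttons that controls a computer cursor",
    }
}

def tokens(text):
    """Lowercase and strip punctuation so gloss/context words can match."""
    return set(re.findall(r"[a-z]+", text.lower()))

def lesk(word, context):
    """Return the sense whose gloss overlaps most with the context words."""
    context_words = tokens(context)
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & tokens(gloss))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(lesk("mouse", sentence))  # → mouse.n.04 (the electronic-device sense)
```

Here the device gloss shares “hand”, “held”, “with”, and “buttons” with the context, so the electronic-device sense wins; with real WordNet data the same idea is available as `nltk.wsd.lesk`.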
Libraries
Use these libraries to find Word Sense Disambiguation models and implementations
Latest papers with no code
Driving Context into Text-to-Text Privatization
Metric Differential Privacy enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search.
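The noise-then-project mechanism can be sketched as follows. This is a simplified illustration, not the paper's method: the 2-D vectors and vocabulary are invented, and component-wise Laplace noise stands in for the multivariate noise a true metric-DP mechanism calibrates to the embedding metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy embedding space (real systems use GloVe/word2vec vectors).
vocab = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.9, 0.2]),
    "car": np.array([-1.0, 0.5]),
}

def privatize(word, epsilon=5.0):
    """Add calibrated noise to the word's vector, then snap the noisy
    vector back to the nearest word in the discrete vocabulary."""
    vec = vocab[word]
    # Simplification: per-dimension Laplace noise whose scale shrinks as
    # epsilon (the privacy budget) grows; larger epsilon = less distortion.
    noisy = vec + rng.laplace(scale=1.0 / epsilon, size=vec.shape)
    # Nearest-neighbor projection back to the vocabulary.
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - noisy))

print(privatize("cat"))  # often "cat" itself, sometimes the nearby "dog"
```

The key design point is that the output is always a real vocabulary word, so the privatized text remains fluent while the noise bounds what an observer can infer about the original word.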
Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models
To better understand this contrast, we present a new study investigating how well PLMs capture cross-lingual word sense with Contextual Word-Level Translation (C-WLT), an extension of word-level translation that prompts the model to translate a given word in context.
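A word-level translation probe of this kind reduces to prompting the model with the sentence and the target word. The template below is a hypothetical illustration of the idea, not the paper's exact prompt:

```python
def cwlt_prompt(sentence, word, target_lang="French"):
    """Build a contextual word-level translation prompt: the LM's chosen
    translation implicitly reveals which sense of `word` it inferred."""
    return (
        f'In the sentence "{sentence}", '
        f'the {target_lang} translation of the word "{word}" is:'
    )

# "bank" → "banque" vs "rive" would disambiguate institution vs riverside.
print(cwlt_prompt("He sat on the bank of the river.", "bank"))
```

Because many languages lexicalize English senses differently, the translation the model produces acts as a zero-shot sense label without any WSD-specific training.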
OPI at SemEval 2023 Task 1: Image-Text Embeddings and Multimodal Information Retrieval for Visual Word Sense Disambiguation
The goal of visual word sense disambiguation is to find the image that best matches the provided description of the word's meaning.
A Simple and Effective Method of Cross-Lingual Plagiarism Detection
We present a simple cross-lingual plagiarism detection method applicable to a large number of languages.
What do Language Models know about word senses? Zero-Shot WSD with Language Models and Domain Inventories
Language Models are the core for almost any Natural Language Processing system nowadays.
A Semantic Approach to Negation Detection and Word Disambiguation with Natural Language Processing
This study aims to demonstrate the methods for detecting negations in a sentence by uniquely evaluating the lexical structure of the text via word-sense disambiguation.
A Cohesive Distillation Architecture for Neural Language Models
We developed two methods to test our hypothesis that efficient architectures can gain knowledge from LMs and extract valuable information from lexical sources.
Metaphorical Polysemy Detection: Conventional Metaphor meets Word Sense Disambiguation
Additionally, when paired with a WSD model, our approach outperforms a state-of-the-art metaphor detection model at identifying conventional metaphors in text (0.659 F1 compared to 0.626).
Using Two Losses and Two Datasets Simultaneously to Improve TempoWiC Accuracy
WSD (Word Sense Disambiguation) is the task of identifying which sense of a word is meant in a sentence or other segment of text.
SMSMix: Sense-Maintained Sentence Mixup for Word Sense Disambiguation
To the best of our knowledge, this is the first attempt to apply mixup in NLP while preserving the meaning of a specific word.