Word Sense Disambiguation

147 papers with code • 15 benchmarks • 15 datasets

The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:

“A mouse consists of an object held in one's hand, with one or more buttons.”

we would assign “mouse” its electronic-device sense (the 4th noun sense in the WordNet inventory).
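
To make the task concrete, here is a minimal sketch against the WordNet inventory using NLTK: it lists the candidate senses of “mouse” and runs NLTK's simplified Lesk algorithm, a weak gloss-overlap baseline rather than one of the systems listed below. It assumes NLTK is installed and can download the WordNet data.

```python
# Minimal WSD sketch over the WordNet sense inventory (assumes: pip install nltk).
import nltk
nltk.download("wordnet", quiet=True)

from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."

# The pre-defined sense inventory: all WordNet noun senses of "mouse".
# In WordNet 3.0 the 4th entry, mouse.n.04, is the electronic device.
for i, synset in enumerate(wn.synsets("mouse", pos=wn.NOUN), start=1):
    print(f"{i}. {synset.name()}: {synset.definition()}")

# Simplified Lesk picks the sense whose gloss shares the most words with
# the context; it is a weak baseline and can choose the wrong sense.
predicted = lesk(sentence.lower().split(), "mouse", pos="n")
print("Lesk picks:", predicted.name(), "->", predicted.definition())
```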

Most implemented papers

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
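
The few-shot recipe maps naturally onto WSD: pack a few labeled disambiguation examples and the new instance into one prompt and let the model complete it. A hypothetical prompt sketch follows; the demonstration sentences and answer format are invented for illustration, and the call to a hosted model is left out.

```python
# Hypothetical few-shot WSD prompt in the GPT-3 style: a few labeled
# demonstrations followed by the query. No model API is called here.
demonstrations = [
    ("The bank raised its interest rates again.", "bank", "financial institution"),
    ("We sat on the bank of the river.", "bank", "sloping land beside water"),
]
query = ("A mouse consists of an object held in one's hand, "
         "with one or more buttons.", "mouse")

prompt = "Give the sense of the marked word.\n\n"
for sentence, word, sense in demonstrations:
    prompt += f"Sentence: {sentence}\nWord: {word}\nSense: {sense}\n\n"
prompt += f"Sentence: {query[0]}\nWord: {query[1]}\nSense:"

print(prompt)  # this string would be sent to a large language model
```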

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

huggingface/transformers arXiv 2019

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

microsoft/DeBERTa ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

FlauBERT: Unsupervised Language Model Pre-training for French

getalp/Flaubert LREC 2020

Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.

An Incremental Parser for Abstract Meaning Representation

mdtux89/amr-evaluation EACL 2017

We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time.

GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge

HSLCY/GlossBERT IJCNLP 2019

Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context.
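
GlossBERT's key idea, signposted in the title, is to pair the context sentence with each candidate sense's WordNet gloss and let BERT classify whether they match. The sketch below shows that context-gloss pairing with a generic bert-base-uncased cross-encoder; it is not the released GlossBERT checkpoint, it omits the paper's weak supervision that highlights the target word, and its scores are meaningless without fine-tuning on labeled pairs.

```python
# Sketch of GlossBERT-style context-gloss pairing (not the released
# GlossBERT model): score each candidate WordNet gloss against the context.
import torch
from nltk.corpus import wordnet as wn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# bert-base-uncased is a stand-in; GlossBERT fine-tunes BERT on labeled
# context-gloss pairs, a step this sketch skips.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

context = "A mouse consists of an object held in one's hand, with one or more buttons."
senses = wn.synsets("mouse", pos=wn.NOUN)

# One (context, gloss) input pair per candidate sense.
batch = tokenizer(
    [context] * len(senses),
    [s.definition() for s in senses],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    match_scores = model(**batch).logits.softmax(dim=-1)[:, 1]

best = senses[match_scores.argmax().item()]
print(best.name(), "->", best.definition())
```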

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma 8 Dec 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

ST-MoE: Designing Stable and Transferable Sparse Expert Models

tensorflow/mesh 17 Feb 2022

But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning.

Hungry Hungry Hippos: Towards Language Modeling with State Space Models

hazyresearch/h3 28 Dec 2022

First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.