Word Sense Disambiguation
147 papers with code • 15 benchmarks • 15 datasets
The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English in WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
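As a minimal illustration of this setup (and not one of the neural approaches listed below), the classic Lesk gloss-overlap baseline shipped with NLTK can be used to map “mouse” in the example sentence to a WordNet sense. This sketch assumes the nltk package is installed along with the WordNet and tokenizer data fetched via nltk.download; Lesk is only a heuristic and may not select the intended sense.

# Minimal WSD sketch using NLTK's WordNet interface and the Lesk baseline.
# Assumes: pip install nltk, plus nltk.download("wordnet") and nltk.download("punkt").
from nltk.corpus import wordnet as wn
from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

# Inspect the WordNet sense inventory for the noun "mouse".
for i, synset in enumerate(wn.synsets("mouse", pos="n"), start=1):
    print(f"{i}. {synset.name()}: {synset.definition()}")

# Attempt to disambiguate "mouse" in the example sentence with the Lesk
# algorithm, a simple gloss-overlap heuristic; modern neural WSD systems
# are substantially more accurate.
sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
predicted = lesk(word_tokenize(sentence), "mouse", pos="n")
print(predicted.name(), "-", predicted.definition())

The loop prints the full sense inventory for “mouse” (the electronic-device sense is the 4th noun entry), while lesk returns whichever synset shares the most gloss words with the context, which is exactly the kind of weak baseline that the supervised and language-model-based papers below aim to improve on.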
Most implemented papers
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
DeBERTa: Decoding-enhanced BERT with Disentangled Attention
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.
FlauBERT: Unsupervised Language Model Pre-training for French
Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks.
Enhancing Interpretable Clauses Semantically using Pretrained Word Representation
The approach significantly enhances the performance and interpretability of the Tsetlin Machine (TM).
An Incremental Parser for Abstract Meaning Representation
We describe a transition-based parser for AMR that parses sentences left-to-right, in linear time.
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge
Word Sense Disambiguation (WSD) aims to find the exact sense of an ambiguous word in a particular context.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
ST-MoE: Designing Stable and Transferable Sparse Expert Models
But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning.
Hungry Hungry Hippos: Towards Language Modeling with State Space Models
First, we use synthetic language modeling tasks to understand the gap between SSMs and attention.