ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension

Supervised systems have nowadays become the standard recipe for Word Sense Disambiguation (WSD), with Transformer-based language models as their primary ingredient. However, while these systems have certainly attained unprecedented performance, virtually all of them operate under the constraining assumption that, given a context, each word can be disambiguated individually, with no account taken of the other sense choices. To address this limitation and drop this assumption, we propose CONtinuous SEnse Comprehension (ConSeC), a novel approach to WSD: leveraging a recent re-framing of this task as a text extraction problem, we adapt it to our formulation and introduce a feedback loop strategy that allows the disambiguation of a target word to be conditioned not only on its context but also on the explicit senses assigned to nearby words. We evaluate ConSeC and examine how its components lead it to surpass all its competitors and set a new state of the art on English WSD. We also explore how ConSeC fares in the cross-lingual setting, focusing on 8 languages with various degrees of resource availability, and report significant improvements over prior systems. We release our code at
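The feedback loop described above can be sketched in a few lines: words are disambiguated one at a time, and each prediction is conditioned on the context plus the senses already assigned to neighboring words. This is a hypothetical toy illustration, not the authors' implementation; `score_sense` stands in for the neural extractive scorer, and the sense identifiers and coherence heuristic are invented for the example.

```python
# Toy sketch of ConSeC-style iterative disambiguation (hypothetical names).
# Each target word is disambiguated in turn; every prediction sees the
# context AND the senses already assigned, mimicking the feedback loop.

def score_sense(word, sense, context, assigned):
    """Stand-in for the neural extractive scorer: a base score for
    candidate senses of the word, plus a feedback term rewarding
    coherence with senses already chosen for nearby words."""
    score = 1.0 if sense.startswith(word) else 0.0
    # Feedback term: toy "coherence" = shared sense-domain suffix.
    score += sum(
        0.5
        for s in assigned.values()
        if s.split(".")[-1] == sense.split(".")[-1]
    )
    return score

def consec_disambiguate(words, candidate_senses, context):
    """Disambiguate words sequentially, feeding earlier choices back in.
    A real system would order targets by model confidence; this sketch
    simply uses text order."""
    assigned = {}
    for w in words:
        best = max(
            candidate_senses[w],
            key=lambda s: score_sense(w, s, context, assigned),
        )
        assigned[w] = best
    return assigned

# Usage: once "deposit" is tagged with its finance sense, the feedback
# term nudges the ambiguous "bank" toward its finance sense too.
senses = {
    "bank": ["bank.finance", "bank.river"],
    "deposit": ["deposit.finance", "deposit.geology"],
}
print(consec_disambiguate(["deposit", "bank"], senses,
                          "I made a deposit at the bank"))
```

The key design point is that `assigned` grows as the loop runs, so later predictions are conditioned on explicit earlier sense choices rather than on the raw context alone.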


Results from the Paper

Task: Word Sense Disambiguation (Supervised). Metric: F1.

Model           Dataset        F1     Global Rank
ConSeC+WNGC     Senseval 2     82.7   # 1
ConSeC+WNGC     Senseval 3     81.0   # 1
ConSeC+WNGC     SemEval 2007   78.5   # 1
ConSeC+WNGC     SemEval 2013   85.2   # 1
ConSeC+WNGC     SemEval 2015   87.5   # 1
ConSeC          Senseval 2     82.3   # 3
ConSeC          Senseval 3     79.9   # 3
ConSeC          SemEval 2007   77.4   # 3
ConSeC          SemEval 2013   83.2   # 2
ConSeC          SemEval 2015   85.2   # 3
