Improved Word Sense Disambiguation with Enhanced Sense Representations

Current state-of-the-art supervised word sense disambiguation (WSD) systems (such as GlossBERT and bi-encoder models) yield surprisingly good results by purely leveraging pre-trained language models and the short dictionary definitions (or glosses) of the different word senses. While concise and intuitive, the sense gloss is just one of many ways to provide information about word senses. In this paper, we focus on enhancing the sense representations by incorporating synonyms, example phrases or sentences showing usage of word senses, and the sense glosses of hypernyms. We show that incorporating such additional information boosts performance on WSD. With the proposed enhancements, our system achieves an F1 score of 82.0% on the standard benchmark test dataset of the English all-words WSD task, surpassing all previously published scores on this benchmark dataset.
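The enhancement described above can be sketched as assembling one text string per candidate sense that combines the gloss with synonyms, a usage example, and the hypernym's gloss, which is then fed to the encoder alongside the target context. This is a minimal illustration; the template, separators, and the `enhanced_sense_text` helper are assumptions for illustration, not the authors' actual implementation.

```python
def enhanced_sense_text(gloss, synonyms=(), examples=(), hypernym_gloss=None):
    """Concatenate sense information into a single enhanced representation.

    This mirrors the idea of enriching a bare gloss with synonyms, an
    example of usage, and the hypernym's gloss; the joining template
    here is an assumption, not the paper's exact format.
    """
    parts = [gloss]
    if synonyms:
        parts.append("synonyms: " + ", ".join(synonyms))
    if examples:
        # Use one example sentence showing the sense in context.
        parts.append("example: " + examples[0])
    if hypernym_gloss:
        parts.append("hypernym: " + hypernym_gloss)
    return " ; ".join(parts)


# Hypothetical WordNet-style record for the "financial institution"
# sense of "bank" (values shown for illustration only).
text = enhanced_sense_text(
    gloss="a financial institution that accepts deposits",
    synonyms=["depository financial institution", "banking concern"],
    examples=["he cashed a check at the bank"],
    hypernym_gloss="an organization founded to conduct business",
)
print(text)
```

In a GlossBERT-style setup, this enhanced string would replace the plain gloss in each (context, gloss) pair scored by the model.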

Benchmark results (Task: Word Sense Disambiguation, supervised; metric: F1 %)

Model       Dataset        F1 (%)   Global Rank
ESR+WNGC    Senseval 2     82.5     # 2
ESR+WNGC    Senseval 3     80.2     # 2
ESR+WNGC    SemEval 2007   78.5     # 1
ESR+WNGC    SemEval 2013   82.3     # 3
ESR+WNGC    SemEval 2015   85.3     # 2
ESR         Senseval 2     81.3     # 5
ESR         Senseval 3     79.9     # 3
ESR         SemEval 2007   77.0     # 4
ESR         SemEval 2013   81.5     # 5
ESR         SemEval 2015   84.1     # 4
