The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
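The lookup described above can be illustrated with a minimal sketch of gloss-overlap disambiguation (the idea behind the classic Lesk algorithm). The two-sense inventory and the sense keys below are hypothetical simplifications; real systems use the full WordNet inventory.

```python
# Toy sense assignment by gloss overlap (simplified Lesk).
# SENSES is a hypothetical two-sense inventory for "mouse";
# the sense keys are illustrative, not real WordNet sense keys.
SENSES = {
    "mouse.n.01": "any of numerous small rodents with pointed snouts and long tails",
    "mouse.n.04": "a hand-held pointing device with one or more buttons, operated on a flat surface",
}

STOP = {"a", "an", "the", "of", "with", "or", "in", "on", "and", "to", "is"}

def lesk_overlap(context: str, gloss: str) -> int:
    """Count content words shared by the context and a sense gloss."""
    ctx = {w.strip(".,").lower() for w in context.split()} - STOP
    gls = {w.strip(".,").lower() for w in gloss.split()} - STOP
    return len(ctx & gls)

def disambiguate(context: str) -> str:
    """Pick the sense whose gloss overlaps most with the context."""
    return max(SENSES, key=lambda s: lesk_overlap(context, SENSES[s]))

sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(disambiguate(sentence))  # the device sense wins on overlap ("one", "more", "buttons")
```

Overlap counting is a crude signal, but it makes the task definition concrete: disambiguation is a scoring problem over the candidate senses of the target word.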
The key idea is to use word sememes to accurately capture the exact meaning of a word in a specific context.
Learning word embeddings on large unlabeled corpora has been shown to improve many natural language processing tasks.
GAS models the semantic relationship between the context and the gloss in an improved memory network framework, bridging the gap between previous supervised methods and knowledge-based methods.
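The core intuition of context-gloss matching can be sketched without the memory network machinery. The snippet below is not the GAS model; it is a toy baseline, under the assumption that we score each candidate sense by the cosine similarity between bag-of-words vectors of the context and the gloss (the sense labels and glosses are hypothetical):

```python
# Toy context-gloss scorer: cosine similarity over bag-of-words counts.
# NOT the GAS memory network -- only a baseline illustrating the idea of
# comparing a context representation against each candidate sense gloss.
from collections import Counter
from math import sqrt

def bow(text: str) -> Counter:
    """Lowercased bag-of-words counts."""
    return Counter(w.strip(".,").lower() for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical glosses for two senses of "mouse".
glosses = {
    "rodent": "small rodent with a pointed snout and a long tail",
    "device": "hand held pointing device with one or more buttons",
}
context = "press one of the mouse buttons to click"
scores = {sense: cosine(bow(context), bow(g)) for sense, g in glosses.items()}
best = max(scores, key=scores.get)
print(best)  # the "device" gloss shares more words with the context
```

GAS replaces these count vectors with learned representations and iterative memory updates, but the scoring structure (context against each gloss) is the same.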
In this demonstration we present SupWSD, a Java API for supervised Word Sense Disambiguation (WSD).
In word sense disambiguation (WSD), knowledge-based systems tend to be much more interpretable than knowledge-free counterparts as they rely on the wealth of manually-encoded elements representing word senses, such as hypernyms, usage examples, and images.