The task of Word Sense Disambiguation (WSD) consists of associating words in context with their most suitable entry in a pre-defined sense inventory. The de facto sense inventory for English WSD is WordNet. For example, given the word “mouse” and the following sentence:
“A mouse consists of an object held in one's hand, with one or more buttons.”
we would assign “mouse” its electronic-device sense (the 4th sense in the WordNet sense inventory).
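A classic baseline for this task is the simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the context. The sketch below uses a hypothetical two-sense mini inventory for “mouse” with illustrative glosses (not WordNet's actual glosses or sense keys), just to show the mechanics:

```python
def simplified_lesk(context, sense_glosses):
    """Pick the sense whose gloss has the largest word overlap with the context."""
    ctx = set(w.strip(".,'\"").lower() for w in context.split())
    best, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

# Hypothetical mini inventory; real WordNet lists more senses for "mouse".
inventory = {
    "mouse.animal": "small rodent with a long tail",
    "mouse.device": "hand operated electronic device with buttons that controls a cursor",
}
sentence = "A mouse consists of an object held in one's hand, with one or more buttons."
print(simplified_lesk(sentence, inventory))  # → mouse.device
```

Here the device gloss shares “hand”, “with”, “buttons”, and “a” with the context, so it wins over the animal gloss; modern supervised and knowledge-based systems improve substantially on this overlap heuristic.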
Computing distance measures between nodes in graphs is inefficient and does not scale to large graphs.
We approach all the subtasks by applying a graph clustering algorithm to contextualized embedding representations of the verbs and their arguments.
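The paper does not specify its clustering procedure here, but the general idea of grouping contextualized embeddings can be sketched with a simple greedy scheme: assign each vector to the first existing cluster whose centroid is cosine-similar above a threshold, otherwise start a new cluster. The vectors and threshold below are toy stand-ins, not real model outputs:

```python
import numpy as np

def greedy_cluster(vectors, threshold=0.8):
    """Greedily group unit-normalized vectors by cosine similarity to centroids."""
    clusters = []  # each entry: list of normalized member vectors
    labels = []
    for v in vectors:
        v = v / np.linalg.norm(v)
        for i, members in enumerate(clusters):
            centroid = np.mean(members, axis=0)
            centroid = centroid / np.linalg.norm(centroid)
            if float(centroid @ v) > threshold:
                members.append(v)
                labels.append(i)
                break
        else:  # no sufficiently similar cluster found
            clusters.append([v])
            labels.append(len(clusters) - 1)
    return labels

# Two tight toy "embedding" groups: one near the x-axis, one near the y-axis.
vecs = [np.array([1.0, 0.05]), np.array([0.98, 0.1]),
        np.array([0.05, 1.0]), np.array([0.1, 0.97])]
print(greedy_cluster(vecs))  # → [0, 0, 1, 1]
```

Real systems typically use more robust algorithms (e.g. agglomerative clustering or community detection on a similarity graph), but the input/output shape is the same: vectors in, sense-cluster labels out.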
We critically assess mainstream accounting and finance research applying methods from computational linguistics (CL) to study financial discourse.
In this paper, we present our method of using fixed-size ordinally forgetting encoding (FOFE) to solve the word sense disambiguation (WSD) problem.
A word with multiple senses in a text gives rise to the lexical-semantic task of determining which particular sense is appropriate in the given context.
Our method achieves state-of-the-art results on most WSD evaluation tasks, while improving the coverage of supervised systems and reducing training time and model size, without requiring additional training data.