Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation

EMNLP 2017 · Alexander Panchenko, Fide Marten, Eugen Ruppert, Stefano Faralli, Dmitry Ustalov, Simone Paolo Ponzetto, Chris Biemann

Interpretability of a predictive model is a powerful feature that gains the trust of users in the correctness of the predictions. In word sense disambiguation (WSD), knowledge-based systems tend to be much more interpretable than knowledge-free counterparts, as they rely on a wealth of manually encoded elements representing word senses, such as hypernyms, usage examples, and images. …
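To illustrate the idea of knowledge-free yet interpretable disambiguation, here is a minimal sketch, not the authors' implementation: each sense of a word is represented by a cluster of related words (in the paper such inventories are induced automatically from text), and a sense is chosen by how well its cluster matches the context. The inventory, the `disambiguate` function, and the simple overlap score below are hypothetical placeholders for the paper's richer sense representations; the point is that the matched cluster words make the prediction human-readable.

```python
from collections import Counter

# Hypothetical induced inventory: sense id -> cluster of related words.
# In the paper, such clusters are induced without any manual resources.
SENSE_INVENTORY = {
    "python#0": {"snake", "reptile", "boa", "cobra", "venom"},
    "python#1": {"programming", "code", "language", "java", "script"},
}

def disambiguate(target: str, context: list[str]) -> tuple[str, int]:
    """Pick the sense whose cluster overlaps most with the context words."""
    context_counts = Counter(w.lower() for w in context)
    best_sense, best_score = None, -1
    for sense, cluster in SENSE_INVENTORY.items():
        if not sense.startswith(target + "#"):
            continue
        # Simple overlap score; the overlapping cluster words double as an
        # interpretable explanation of why this sense was chosen.
        score = sum(context_counts[w] for w in cluster)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense, best_score

if __name__ == "__main__":
    sentence = "I wrote a script in Python to parse the code".split()
    print(disambiguate("python", sentence))  # -> ('python#1', 2)
```

In this toy example, the context words "script" and "code" match the programming-language cluster, so that sense wins, and those two matched words serve as the interpretable evidence for the decision.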

