Word Sense Induction

10 papers with code · Natural Language Processing

State-of-the-art leaderboards

Latest papers without code

Vector representations of text data in deep learning

7 Jan 2019

For document-level representations we propose Binary Paragraph Vectors: neural network models for learning binary representations of text documents, which can be used for fast document retrieval.

INFORMATION RETRIEVAL PART-OF-SPEECH TAGGING TRANSFER LEARNING WORD EMBEDDINGS WORD SENSE INDUCTION
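The binary-representation idea in the abstract above can be illustrated with a small sketch: threshold dense document embeddings into binary codes and retrieve by Hamming distance. The random embeddings and the simple sign thresholding are stand-ins for the learned model in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dense document embeddings; the paper learns binary codes with a
# neural model, while here we simply threshold random vectors at zero.
doc_embeddings = rng.normal(size=(1000, 128))
codes = (doc_embeddings > 0).astype(np.uint8)  # one 128-bit code per document

def hamming_search(query_code, codes, k=5):
    """Return the indices of the k codes closest to query_code in Hamming distance."""
    dists = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(dists)[:k]

# Retrieval demo: the query document itself comes back first (distance 0).
top = hamming_search(codes[42], codes)
print(top[0])  # 42
```

Comparing short binary codes by Hamming distance is what makes this kind of retrieval fast in practice; the sketch leaves the codes as byte arrays for clarity rather than bit-packing them.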

Word Sense Induction with Neural biLM and Symmetric Patterns

EMNLP 2018

An established method for Word Sense Induction (WSI) uses a language model to predict probable substitutes for target words, and induces senses by clustering the resulting substitute vectors.

LANGUAGE MODELLING SEMANTIC TEXTUAL SIMILARITY WORD SENSE DISAMBIGUATION WORD SENSE INDUCTION
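The substitute-clustering recipe in the abstract above (predict substitutes for each occurrence with a language model, then cluster the per-occurrence substitute vectors) can be sketched as follows. The substitute words and counts are invented for illustration, and a tiny hand-rolled k-means stands in for whatever clustering the paper actually uses:

```python
import numpy as np

# Hypothetical substitute-count vectors: each row records how often a language
# model proposed each substitute for one occurrence of the ambiguous word "bank".
# (A real system would query a biLM; these counts are made up for the demo.)
substitutes = ["money", "loan", "deposit", "river", "shore", "water"]
occurrences = np.array([
    [5., 3., 2., 0., 0., 0.],  # financial context
    [4., 4., 1., 0., 0., 1.],
    [0., 0., 1., 5., 3., 2.],  # riverside context
    [1., 0., 0., 4., 4., 3.],
])

def kmeans(X, k=2, iters=10):
    """Tiny k-means with farthest-point initialization, so the demo is deterministic."""
    centers = [X[0]]
    for _ in range(k - 1):
        dists = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(dists))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(axis=-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Each cluster of occurrences corresponds to one induced sense.
labels = kmeans(occurrences)
print(labels)  # the two financial uses share one label, the two riverside uses the other
```

The point of the sketch is the pipeline shape, not the clustering choice: any clustering over the substitute vectors partitions occurrences into induced senses.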

Efficient Graph-based Word Sense Induction by Distributional Inclusion Vector Embeddings

WS 2018

Word sense induction (WSI), which addresses polysemy by unsupervised discovery of multiple word senses, resolves ambiguities for downstream NLP tasks and also makes word representations more interpretable.

PART-OF-SPEECH TAGGING RELATION EXTRACTION SENTIMENT ANALYSIS WORD SENSE DISAMBIGUATION WORD SENSE INDUCTION

How much does a word weigh? Weighting word embeddings for word sense induction

23 May 2018

The paper describes our participation in the first shared task on word sense induction and disambiguation for the Russian language, RUSSE'2018 (Panchenko et al., 2018).

MACHINE TRANSLATION WORD EMBEDDINGS WORD SENSE INDUCTION
