Search Results for author: Alexander Panchenko

Found 17 papers, 2 papers with code

Combining Lexical Substitutes in Neural Word Sense Induction

no code implementations RANLP 2019 Nikolay Arefyev, Boris Sheludko, Alexander Panchenko

Word Sense Induction (WSI) is the task of grouping occurrences of an ambiguous word according to their meaning.

Clustering, Word Sense Induction
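The clustering view of WSI described above can be illustrated with a minimal sketch (not the paper's method, which combines lexical substitutes): each occurrence of the ambiguous word is represented by the average embedding of its context words, and the occurrences are clustered into senses. The word_vectors lookup and the number of senses are assumptions for illustration.

    # Minimal WSI sketch: cluster occurrences of an ambiguous word by the
    # average embedding of their context words (all inputs are hypothetical).
    import numpy as np
    from sklearn.cluster import KMeans

    def induce_senses(contexts, word_vectors, n_senses=2):
        # contexts: list of token lists, one per occurrence of the target word.
        # word_vectors: dict token -> numpy vector (assumed to be given).
        X = [np.mean([word_vectors[t] for t in tokens if t in word_vectors], axis=0)
             for tokens in contexts]
        labels = KMeans(n_clusters=n_senses, n_init=10).fit_predict(np.array(X))
        return labels  # occurrences with the same label share an induced sense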

A Dataset for Noun Compositionality Detection for a Slavic Language

1 code implementation WS 2019 Dmitry Puzyrev, Artem Shelmanov, Alexander Panchenko, Ekaterina Artemova

This paper presents the first gold-standard resource for Russian annotated with compositionality information of noun compounds.

On the Compositionality Prediction of Noun Phrases using Poincaré Embeddings

no code implementations ACL 2019 Abhik Jana, Dima Puzyrev, Alexander Panchenko, Pawan Goyal, Chris Biemann, Animesh Mukherjee

In particular, we use hypernymy information about the multiword expression and its constituents, encoded in the recently introduced Poincaré embeddings, in addition to distributional information, to detect compositionality of noun phrases.
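One way to turn the signal carried by Poincaré embeddings into a compositionality feature, sketched below under the assumption that the compound and its head noun are already embedded in the Poincaré ball, is to measure their hyperbolic distance; this only illustrates the distance function, not the paper's full model.

    # Poincaré (hyperbolic) distance:
    # d(u, v) = arccosh(1 + 2*|u-v|^2 / ((1-|u|^2)*(1-|v|^2))).
    # The toy vectors stand in for embeddings of a compound and its head noun.
    import numpy as np

    def poincare_distance(u, v):
        sq_dist = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return np.arccosh(1.0 + 2.0 * sq_dist / denom)

    compound = np.array([0.10, 0.40])  # hypothetical embedding of "hot dog"
    head = np.array([0.55, 0.20])      # hypothetical embedding of "dog"
    print(poincare_distance(compound, head))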

Improving Neural Entity Disambiguation with Graph Embeddings

no code implementations ACL 2019 Özge Sevgili, Alexander Panchenko, Chris Biemann

Entity Disambiguation (ED) is the task of linking an ambiguous entity mention to a corresponding entry in a knowledge base.

Entity Disambiguation
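A bare-bones version of the ED setup just described, with hypothetical inputs: each candidate knowledge-base entry carries a graph embedding, and the candidate whose embedding is most similar to the mention's context vector is chosen. The paper integrates graph embeddings into a neural disambiguation model; the ranking rule below is only an illustration.

    # Rank candidate KB entries for a mention by cosine similarity between a
    # context vector and each candidate's graph embedding (inputs hypothetical).
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def disambiguate_entity(context_vec, candidate_embeddings):
        # candidate_embeddings: dict mapping KB entry id -> graph embedding.
        return max(candidate_embeddings,
                   key=lambda e: cosine(context_vec, candidate_embeddings[e]))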

TARGER: Neural Argument Mining at Your Fingertips

1 code implementation ACL 2019 Artem Chernodub, Oleksiy Oliynyk, Philipp Heidenreich, Alex Bondarenko, Matthias Hagen, Chris Biemann, Alexander Panchenko

We present TARGER, an open source neural argument mining framework for tagging arguments in free input texts and for keyword-based retrieval of arguments from an argument-tagged web-scale corpus.

Argument Mining, Retrieval
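Argument tagging of the kind TARGER performs produces sequence labels over tokens; the small generic helper below (not part of TARGER's API, with assumed BIO-style label names) shows how such a tag sequence can be turned into labelled claim and premise spans.

    # Convert a BIO tag sequence (e.g. B-CLAIM, I-CLAIM, B-PREMISE, O) into
    # labelled argument spans. Tokens, tags and label names are hypothetical.
    def bio_to_spans(tokens, tags):
        spans, current, label = [], [], None
        for tok, tag in zip(tokens, tags):
            if tag.startswith("B-"):
                if current:
                    spans.append((label, " ".join(current)))
                current, label = [tok], tag[2:]
            elif tag.startswith("I-") and current:
                current.append(tok)
            else:
                if current:
                    spans.append((label, " ".join(current)))
                current, label = [], None
        if current:
            spans.append((label, " ".join(current)))
        return spans

    print(bio_to_spans(
        ["Taxes", "should", "rise", "because", "schools", "need", "funding"],
        ["B-CLAIM", "I-CLAIM", "I-CLAIM", "O", "B-PREMISE", "I-PREMISE", "I-PREMISE"]))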

Using Linked Disambiguated Distributional Networks for Word Sense Disambiguation

no code implementations WS 2017 Alexander Panchenko, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann

We introduce a new method for unsupervised knowledge-based word sense disambiguation (WSD) based on a resource that links two types of sense-aware lexical networks: one is induced from a corpus using distributional semantics, the other is manually constructed.

Machine Translation, Translation, +2
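The core idea above, disambiguating against sense representations drawn from sense-aware lexical networks, can be sketched in a few lines under the simplifying assumption that each sense is represented by a set of related words; the actual method linking induced and manually constructed networks is considerably richer.

    # Pick the sense whose related-word set overlaps most with the context.
    # The inventory below is a toy stand-in for a sense-aware lexical network.
    def disambiguate_sense(context_tokens, sense_inventory):
        context = set(context_tokens)
        return max(sense_inventory, key=lambda s: len(sense_inventory[s] & context))

    senses_of_bank = {
        "bank#finance": {"money", "account", "loan", "deposit"},
        "bank#river": {"river", "shore", "water", "flood"},
    }
    print(disambiguate_sense(["she", "opened", "an", "account", "at", "the", "bank"],
                             senses_of_bank))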

Unsupervised Does Not Mean Uninterpretable: The Case for Word Sense Induction and Disambiguation

no code implementations EACL 2017 Alexander Panchenko, Eugen Ruppert, Stefano Faralli, Simone Paolo Ponzetto, Chris Biemann

Using word sense induction and disambiguation (WSID) as an example, we show that it is possible to develop an interpretable model that matches state-of-the-art models in accuracy.

Word Embeddings, Word Sense Induction

Best of Both Worlds: Making Word Sense Embeddings Interpretable

no code implementations LREC 2016 Alexander Panchenko

Word sense embeddings represent a word sense as a low-dimensional numeric vector.
