Search Results for author: Anna Potapenko

Found 5 papers, 2 papers with code

Compressive Transformers for Long-Range Sequence Modelling

6 code implementations • ICLR 2020 • Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap

We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning.

Language Modelling
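
As a minimal sketch of the compressive-memory idea (an illustration only, not the authors' implementation): hidden states that fall out of a fixed-size memory are pooled into a shorter compressed memory at a fixed compression rate. The function name, array sizes, and the choice of mean pooling below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): states evicted from a fixed-size
# memory are mean-pooled into a shorter "compressed memory" at a fixed rate.
import numpy as np

def update_memories(new_states, memory, compressed, mem_size=512, rate=3):
    """new_states, memory, compressed: arrays of shape (n, d)."""
    memory = np.concatenate([memory, new_states], axis=0)
    overflow = memory.shape[0] - mem_size
    if overflow > 0:
        old, memory = memory[:overflow], memory[overflow:]
        usable = (old.shape[0] // rate) * rate  # drop the remainder for simplicity
        if usable:
            blocks = old[:usable].reshape(-1, rate, old.shape[1])
            compressed = np.concatenate([compressed, blocks.mean(axis=1)], axis=0)
    return memory, compressed

# Toy usage: feed segments of hidden states and watch both memories grow.
d = 16
memory, compressed = np.zeros((0, d)), np.zeros((0, d))
for _ in range(10):
    segment = np.random.randn(128, d)
    memory, compressed = update_memories(segment, memory, compressed,
                                         mem_size=256, rate=4)
print(memory.shape, compressed.shape)
```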

Interpretable probabilistic embeddings: bridging the gap between topic models and neural networks

no code implementations • 11 Nov 2017 • Anna Potapenko, Artem Popov, Konstantin Vorontsov

We consider probabilistic topic models and more recent word embedding techniques from a perspective of learning hidden semantic representations.

Topic Models • Word Similarity

Learning and Evaluating Sparse Interpretable Sentence Embeddings

no code implementations • WS 2018 • Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann

Previous research on word embeddings has shown that sparse representations, which can be either learned on top of existing dense embeddings or obtained through model constraints during training, have the benefit of increased interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data.

Sentence • Sentence Embedding • +2
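
The sparse-on-top-of-dense setting mentioned in the abstract can be illustrated with a generic technique (not necessarily the method of this paper): learning a sparse dictionary code over pre-computed dense sentence embeddings. The data, sizes, and scikit-learn settings below are arbitrary stand-ins.

```python
# Generic illustration (not this paper's method): learn sparse codes on top of
# dense sentence embeddings via dictionary learning, so each sentence
# activates only a handful of dictionary atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
dense = rng.normal(size=(200, 50))           # stand-in for dense sentence embeddings

learner = DictionaryLearning(
    n_components=100,                        # overcomplete basis of "concept" atoms
    transform_algorithm="lasso_lars",
    transform_alpha=0.5,                     # larger alpha -> sparser codes
    max_iter=20,
    random_state=0,
)
sparse_codes = learner.fit_transform(dense)  # shape (200, 100), mostly zeros
print("avg non-zeros per sentence:", (sparse_codes != 0).sum(axis=1).mean())
```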

Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning

no code implementations • ACL 2020 • Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman

We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language.

Language Modelling
