Search Results for author: Christos Kaplanis

Found 9 papers, 4 papers with code

Improving fine-grained understanding in image-text pre-training

no code implementations 18 Jan 2024 Ioana Bica, Anastasija Ilić, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrović

We introduce SPARse Fine-grained Contrastive Alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs.

Object Detection
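As a rough illustration of the SPARC idea, the sketch below computes a sparse token-to-patch alignment: each text token's similarities to the image patches are min-max normalised, thresholded, and used to form a language-grouped patch embedding per token. This is a simplified reading, not the paper's implementation; the function name, the 1/P threshold, and the array shapes are illustrative assumptions.

```python
import numpy as np

def sparse_alignment_weights(token_emb, patch_emb):
    """Hedged sketch of SPARC-style sparse token-to-patch alignment.
    token_emb: (T, D) text-token embeddings; patch_emb: (P, D) image
    patch embeddings. Returns per-token alignment weights and the
    language-grouped patch embedding for each token."""
    # similarity between every token and every patch
    sim = token_emb @ patch_emb.T                          # (T, P)
    # min-max normalise per token so each row spans [0, 1]
    lo = sim.min(axis=1, keepdims=True)
    hi = sim.max(axis=1, keepdims=True)
    norm = (sim - lo) / (hi - lo + 1e-8)
    # sparsify: zero out patches below a uniform 1/P threshold
    num_patches = patch_emb.shape[0]
    weights = np.where(norm >= 1.0 / num_patches, norm, 0.0)
    # renormalise so each token's surviving weights sum to 1
    weights /= weights.sum(axis=1, keepdims=True) + 1e-8
    # sparse weighted sum of patches -> one grouped embedding per token
    grouped = weights @ patch_emb                          # (T, D)
    return weights, grouped
```

The grouped embeddings could then be contrasted against the token embeddings for a fine-grained loss, alongside the usual global image-text contrastive objective.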

Ensembles and Encoders for Task-Free Continual Learning

no code implementations 29 Sep 2021 Murray Shanahan, Christos Kaplanis, Jovana Mitrović

We present an architecture that is effective for continual learning in an especially demanding setting, where task boundaries do not exist or are unknown, and where classes have to be learned online (with each example presented only once).

Continual Learning Self-Supervised Learning

Encoders and Ensembles for Task-Free Continual Learning

no code implementations 27 May 2021 Murray Shanahan, Christos Kaplanis, Jovana Mitrović

We present an architecture that is effective for continual learning in an especially demanding setting, where task boundaries do not exist or are unknown, and where classes have to be learned online (with each example presented only once).

Continual Learning Image Classification +1

Continual Reinforcement Learning with Multi-Timescale Replay

1 code implementation 16 Apr 2020 Christos Kaplanis, Claudia Clopath, Murray Shanahan

In this paper, we propose a multi-timescale replay (MTR) buffer for improving continual learning in RL agents faced with environments that change continuously over time at timescales unknown to the agent.

Continual Learning Continuous Control +2
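A minimal sketch of a multi-timescale buffer, read loosely from the abstract: a cascade of FIFO buffers in which items evicted from a fast buffer survive into the next, slower buffer with some probability, so deeper buffers keep a sparse trace of older experience. The class name, `pass_prob`, and the uniform sampling rule are assumptions for illustration, not the paper's design.

```python
import random
from collections import deque

class MultiTimescaleReplay:
    """Cascade of FIFO buffers storing experience at progressively
    longer timescales (a hedged sketch of the MTR idea)."""

    def __init__(self, n_buffers=3, capacity=100, pass_prob=0.5, seed=0):
        self.buffers = [deque() for _ in range(n_buffers)]
        self.capacity = capacity
        self.pass_prob = pass_prob
        self.rng = random.Random(seed)

    def add(self, item, level=0):
        if level >= len(self.buffers):
            return  # fell off the end of the cascade: forgotten
        buf = self.buffers[level]
        buf.append(item)
        if len(buf) > self.capacity:
            evicted = buf.popleft()
            # an evicted item survives into the next (slower) buffer
            # with some probability, so deeper buffers retain a sparse,
            # long-timescale record of old experience
            if self.rng.random() < self.pass_prob:
                self.add(evicted, level + 1)

    def sample(self, k):
        # draw uniformly over everything currently stored, mixing
        # recent and long-past experience in one minibatch
        pool = [x for buf in self.buffers for x in buf]
        return self.rng.sample(pool, min(k, len(pool)))
```

Sampling across all buffers at once is what mixes fresh and historical transitions without requiring the agent to know when the environment changed.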

An Explicitly Relational Neural Network Architecture

2 code implementations ICML 2020 Murray Shanahan, Kyriacos Nikiforou, Antonia Creswell, Christos Kaplanis, David Barrett, Marta Garnelo

With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data.

Relational Reasoning

Policy Consolidation for Continual Reinforcement Learning

1 code implementation 1 Feb 2019 Christos Kaplanis, Murray Shanahan, Claudia Clopath

We propose a method for tackling catastrophic forgetting in deep reinforcement learning that is agnostic to the timescale of changes in the distribution of experiences, does not require knowledge of task boundaries, and can adapt in continuously changing environments.

Continual Learning Continuous Control +2
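One way to picture a consolidation cascade with no single privileged timescale is a chain of policy distributions, each regularised toward its slower neighbour by a KL term whose strength grows along the chain. The sketch below is only an illustration of that shape; the function name, the geometric weighting `omega * beta**k`, and the one-directional KL are assumptions, not the paper's loss.

```python
import numpy as np

def consolidation_loss(policies, omega=1.0, beta=2.0):
    """Hedged sketch of a consolidation-style cascade penalty.
    policies: list of probability vectors over actions, ordered from
    slowest (index 0) to fastest (last). Each link penalises the
    faster policy for drifting from its slower neighbour."""
    def kl(p, q):
        # KL divergence KL(p || q) for discrete distributions
        return float(np.sum(p * np.log(p / q)))

    loss = 0.0
    for k in range(len(policies) - 1):
        # deeper links get larger weights, so older behaviour
        # is increasingly expensive to overwrite
        w = omega * beta ** k
        loss += w * kl(policies[k + 1], policies[k])
    return loss
```

Because every link has its own effective timescale, no single choice of regularisation window is baked in, which is the property the abstract emphasises.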

Continual Reinforcement Learning with Complex Synapses

no code implementations ICML 2018 Christos Kaplanis, Murray Shanahan, Claudia Clopath

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge.

Continual Learning reinforcement-learning +1

Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning

1 code implementation 18 May 2017 Nat Dilokthanakul, Christos Kaplanis, Nick Pawlowski, Murray Shanahan

We highlight the advantage of our approach in one of the hardest games, Montezuma's Revenge, for which the ability to handle sparse rewards is key.

Hierarchical Reinforcement Learning Montezuma's Revenge +2
