Search Results for author: Olivier Tieleman

Found 7 papers, 2 papers with code

A Factorial Mixture Prior for Compositional Deep Generative Models

no code implementations • 18 Dec 2018 • Ulrich Paquet, Sumedh K. Ghaisas, Olivier Tieleman

We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties.

Variational Inference

Shaping representations through communication: community size effect in artificial learning systems

no code implementations • 12 Dec 2019 • Olivier Tieleman, Angeliki Lazaridou, Shibl Mourad, Charles Blundell, Doina Precup

Motivated by theories of language and communication that explain why communities with large numbers of speakers have, on average, simpler languages with more regularity, we cast the representation learning problem in terms of learning to communicate.

Representation Learning

Never Give Up: Learning Directed Exploration Strategies

6 code implementations • ICLR 2020 • Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Bilal Piot, Steven Kapturowski, Olivier Tieleman, Martín Arjovsky, Alexander Pritzel, Andrew Bolt, Charles Blundell

Our method doubles the performance of the base agent in all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human normalised score of 1344.0%.
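
For context, the human normalised score quoted above follows the standard Atari-57 convention (this definition is given here for reference and is not quoted from the paper entry itself): for each game,

\[
\mathrm{HNS} = 100 \cdot \frac{s_{\mathrm{agent}} - s_{\mathrm{random}}}{s_{\mathrm{human}} - s_{\mathrm{random}}},
\]

where $s_{\mathrm{random}}$ and $s_{\mathrm{human}}$ are the scores of a random policy and a human reference player; the 1344.0% figure is the median of this quantity over the 57 games.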

Atari Games

Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning

no code implementations • ACL 2020 • Angeliki Lazaridou, Anna Potapenko, Olivier Tieleman

We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language.

Language Modelling

Grounded Language Learning Fast and Slow

2 code implementations • ICLR 2021 • Felix Hill, Olivier Tieleman, Tamara von Glehn, Nathaniel Wong, Hamza Merzic, Stephen Clark

Recent work has shown that large text-based neural language models, trained with conventional supervised learning objectives, acquire a surprising propensity for few- and one-shot learning.

Grounded language learning • Meta-Learning • +1

Formalising Concepts as Grounded Abstractions

no code implementations • 13 Jan 2021 • Stephen Clark, Alexander Lerchner, Tamara von Glehn, Olivier Tieleman, Richard Tanburn, Misha Dashevskiy, Matko Bosnjak

The mathematics of partial orders and lattices is a standard tool for modelling conceptual spaces (Ch. 2, Mitchell (1997), Ganter and Obiedkov (2016)); however, there is no formal work that we are aware of which defines a conceptual lattice on top of a representation that is induced using unsupervised deep learning (Goodfellow et al., 2016).
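
As background for the "conceptual lattice" terminology (a standard definition from formal concept analysis, not a claim about this paper's particular construction): given a set of objects $G$, a set of attributes $M$, and an incidence relation $I \subseteq G \times M$, a formal concept is a pair $(A, B)$ with $A \subseteq G$ and $B \subseteq M$ such that

\[
B = A' = \{ m \in M \mid \forall g \in A : (g, m) \in I \}
\quad\text{and}\quad
A = B' = \{ g \in G \mid \forall m \in B : (g, m) \in I \},
\]

and the set of all formal concepts, ordered by $(A_1, B_1) \le (A_2, B_2) \iff A_1 \subseteq A_2$, forms a complete lattice (Ganter and Obiedkov, 2016).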

Representation Learning

Large-Scale Retrieval for Reinforcement Learning

no code implementations • 10 Jun 2022 • Peter C. Humphreys, Arthur Guez, Olivier Tieleman, Laurent Sifre, Théophane Weber, Timothy Lillicrap

Effective decision making involves flexibly relating past experiences and relevant contextual information to a novel situation.

Decision Making • Offline RL • +3
