Search Results for author: Clare Lyle

Found 20 papers, 5 with code

Learning Dynamics and Generalization in Reinforcement Learning

no code implementations • 5 Jun 2022 • Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal

Solving a reinforcement learning (RL) problem poses two competing challenges: fitting a potentially discontinuous value function, and generalizing well to new observations.

Policy Gradient Methods • reinforcement-learning

Understanding and Preventing Capacity Loss in Reinforcement Learning

no code implementations • ICLR 2022 • Clare Lyle, Mark Rowland, Will Dabney

The reinforcement learning (RL) problem is rife with sources of non-stationarity, making it a notoriously difficult problem domain for the application of neural networks.

Montezuma's Revenge • reinforcement-learning

DARTS without a Validation Set: Optimizing the Marginal Likelihood

no code implementations • 24 Dec 2021 • Miroslav Fil, Binxin Ru, Clare Lyle, Yarin Gal

The success of neural architecture search (NAS) has historically been limited by excessive compute requirements.

Neural Architecture Search

Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning

2 code implementations • NeurIPS 2021 • Jannik Kossen, Neil Band, Clare Lyle, Aidan N. Gomez, Tom Rainforth, Yarin Gal

We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input.

3D Part Segmentation
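
The mechanism the abstract points at is easy to illustrate. Below is a minimal, hypothetical single-head sketch of attention applied across the datapoint axis of a batch, so that each output depends on the other datapoints in the batch; it is not the paper's Non-Parametric Transformer architecture, just the underlying idea.

```python
import numpy as np

def attention_between_datapoints(X, Wq, Wk, Wv):
    """Single-head attention across the *datapoint* axis.

    X is an (n, d) batch of n datapoints. Every row attends to every
    other row, so each output depends on the whole batch rather than
    on a single input's features alone.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # (n, h) each
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (n, n) datapoint-to-datapoint
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over datapoints
    return weights @ V                               # each row mixes other rows

rng = np.random.default_rng(0)
n, d, h = 8, 4, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, h)) for _ in range(3))
print(attention_between_datapoints(X, Wq, Wk, Wv).shape)  # (8, 4)
```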

Provable Guarantees on the Robustness of Decision Rules to Causal Interventions

no code implementations • 19 May 2021 • Benjie Wang, Clare Lyle, Marta Kwiatkowska

Robustness of decision rules to shifts in the data-generating process is crucial to the successful deployment of decision-making systems.

Decision Making

Robustness to Pruning Predicts Generalization in Deep Neural Networks

no code implementations • 10 Mar 2021 • Lorenz Kuhn, Clare Lyle, Aidan N. Gomez, Jonas Rothfuss, Yarin Gal

Existing generalization measures that aim to capture a model's simplicity based on parameter counts or norms fail to explain generalization in overparameterized deep neural networks.
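
One plausible way to operationalize "robustness to pruning" as a generalization measure (a sketch under assumed choices of magnitude pruning and an accuracy tolerance, not necessarily the paper's exact protocol):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of weight entries."""
    pruned = weights.copy()
    k = int(fraction * pruned.size)
    if k > 0:
        threshold = np.partition(np.abs(pruned).ravel(), k - 1)[k - 1]
        pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def prunability(weights, evaluate, tolerance=0.01):
    """Largest prune fraction whose accuracy drop stays within `tolerance`.

    `evaluate` is a user-supplied function mapping a weight array to an
    accuracy in [0, 1]; a higher prunability score is read as a proxy
    for better generalization.
    """
    baseline = evaluate(weights)
    best = 0.0
    for fraction in np.linspace(0.0, 0.95, 20):
        if baseline - evaluate(magnitude_prune(weights, fraction)) <= tolerance:
            best = fraction
    return best
```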

Resolving Causal Confusion in Reinforcement Learning via Robust Exploration

no code implementations • ICLR Workshop SSL-RL 2021 • Clare Lyle, Amy Zhang, Minqi Jiang, Joelle Pineau, Yarin Gal

To address causal confusion, we present a robust exploration strategy that enables causal hypothesis-testing through interaction with the environment.

reinforcement-learning

On The Effect of Auxiliary Tasks on Representation Dynamics

no code implementations • 25 Feb 2021 • Clare Lyle, Mark Rowland, Georg Ostrovski, Will Dabney

While auxiliary tasks play a key role in shaping the representations learnt by reinforcement learning agents, much is still unknown about the mechanisms through which this is achieved.

reinforcement-learning

Unpacking Information Bottlenecks: Surrogate Objectives for Deep Learning

no code implementations • 1 Jan 2021 • Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize and a regularized objective with which to train models.

Density Estimation
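
For reference, the Information Bottleneck objective the abstract refers to trades compression of the input against prediction of the target. In its standard Lagrangian form, with input X, label Y, learned representation Z, and trade-off parameter beta:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```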

A Bayesian Perspective on Training Speed and Model Selection

no code implementations • NeurIPS 2020 • Clare Lyle, Lisa Schut, Binxin Ru, Yarin Gal, Mark van der Wilk

This provides two major insights, the first being that a measure of a model's training speed can be used to estimate its marginal likelihood.

Model Selection
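
The connection between training speed and marginal likelihood follows from the chain rule of probability: the log marginal likelihood of a dataset decomposes into a sum of log posterior predictive probabilities, each of which is the "loss" incurred on a datapoint given the data seen so far. A minimal sketch of that decomposition (the input name is illustrative; the paper develops estimators and lower bounds for models trained with gradient descent):

```python
import numpy as np

def log_marginal_likelihood(log_predictive_probs):
    """log p(D) = sum_i log p(d_i | d_1, ..., d_{i-1}).

    `log_predictive_probs[i]` is the log posterior predictive
    probability of datapoint i given the preceding datapoints. A model
    that assigns high probability to new points early (i.e. trains
    fast) therefore has a high marginal likelihood.
    """
    return float(np.sum(log_predictive_probs))

fast = np.log([0.5, 0.7, 0.85, 0.9, 0.95])   # fits new points quickly
slow = np.log([0.5, 0.55, 0.6, 0.65, 0.7])   # fits them slowly
print(log_marginal_likelihood(fast) > log_marginal_likelihood(slow))  # True
```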

Revisiting the Train Loss: an Efficient Performance Estimator for Neural Architecture Search

no code implementations • 28 Sep 2020 • Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

Model Selection • Neural Architecture Search

Speedy Performance Estimation for Neural Architecture Search

2 code implementations • NeurIPS 2021 • Binxin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal

Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS).

Model Selection • Neural Architecture Search
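
Both of these papers rank candidate architectures by how quickly they fit the training data, avoiding full training. A rough sketch of a sum-of-early-training-losses proxy in that spirit (the `train_one_epoch` callback is a hypothetical stand-in for a real training loop, and the estimators in the papers differ in detail):

```python
def training_speed_score(train_one_epoch, num_epochs=10):
    """Sum of mean training losses over the first `num_epochs` epochs.

    `train_one_epoch()` runs one epoch and returns its mean training
    loss. A lower cumulative loss means the architecture fit the data
    faster, which serves as a cheap proxy for final generalisation
    when ranking NAS candidates.
    """
    return sum(train_one_epoch() for _ in range(num_epochs))

# Assumed usage: `candidates` maps architecture names to epoch callbacks.
# ranking = sorted(candidates, key=lambda a: training_speed_score(candidates[a]))
```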

On the Benefits of Invariance in Neural Networks

no code implementations • 1 May 2020 • Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, Benjamin Bloem-Reddy

Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning.

Data Augmentation
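
A standard construction behind the kind of invariance the abstract describes is to average a predictor over a group of input transformations; the averaged predictor is then exactly invariant. The sketch below uses a hypothetical stand-in model and the two-element flip group (the paper itself analyses when such feature averaging and data augmentation help):

```python
import numpy as np

def invariant_predict(predict, x, transforms):
    """Average predictions over a group orbit of the input.

    If `transforms` forms a group, transforming x merely permutes the
    orbit, so the averaged prediction is unchanged: exact invariance.
    """
    return np.mean([predict(t(x)) for t in transforms], axis=0)

predict = lambda x: (x * np.arange(x.shape[-1])).sum(axis=-1)  # not flip-invariant
flips = [lambda x: x, lambda x: x[..., ::-1]]                  # identity + flip
x = np.arange(6.0).reshape(2, 3)
print(np.allclose(invariant_predict(predict, x, flips),
                  invariant_predict(predict, x[..., ::-1], flips)))  # True
```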

Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning

no code implementations • 27 Mar 2020 • Andreas Kirsch, Clare Lyle, Yarin Gal

The Information Bottleneck principle offers both a mechanism to explain how deep neural networks train and generalize and a regularized objective with which to train models.

Density Estimation

Invariant Causal Prediction for Block MDPs

1 code implementation • ICML 2020 • Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, Doina Precup

Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.

Causal Inference • Variable Selection

A Comparative Analysis of Expected and Distributional Reinforcement Learning

no code implementations • 30 Jan 2019 • Clare Lyle, Pablo Samuel Castro, Marc G. Bellemare

Since their introduction a year ago, distributional approaches to reinforcement learning (distributional RL) have produced strong results relative to the standard approach which models expected values (expected RL).

Distributional Reinforcement Learning • reinforcement-learning
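
The contrast between the two approaches fits in a few lines of tabular TD learning: expected RL maintains one scalar value per state, while distributional RL maintains a representation of the whole return distribution. The distributional update below is a deliberately simplified sample-based variant (the paper analyses categorical and related algorithms, not this exact rule):

```python
import numpy as np

alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

V = np.zeros(2)                               # expected RL: scalar per state
Z = [rng.normal(size=11) for _ in range(2)]   # distributional RL: return "atoms" per state

def expected_td(s, r, s_next):
    """TD(0): move the expected value toward the Bellman target."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])

def distributional_td(s, r, s_next):
    """Move one return atom toward a sampled distributional Bellman target."""
    target = r + gamma * rng.choice(Z[s_next])
    i = rng.integers(Z[s].size)
    Z[s][i] += alpha * (target - Z[s][i])
```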

GAN Q-learning

1 code implementation • 13 May 2018 • Thang Doan, Bogdan Mazoure, Clare Lyle

Distributional reinforcement learning (distributional RL) has seen empirical success in complex Markov Decision Processes (MDPs) in the setting of nonlinear function approximation.

Distributional Reinforcement Learning • OpenAI Gym • +2
