Search Results for author: Tim G. J. Rudner

Found 18 papers, 10 papers with code

Inter-domain Deep Gaussian Processes with RKHS Fourier Features

no code implementations ICML 2020 Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes with RKHS Fourier Features, an extension of shallow inter-domain GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs), and demonstrate how to leverage existing approximate inference approaches to perform simple and scalable approximate inference on Inter-domain Deep Gaussian Processes.

Gaussian Processes

A Study of Bayesian Neural Network Surrogates for Bayesian Optimization

1 code implementation 31 May 2023 Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson

Bayesian optimization is a highly efficient approach to optimizing objective functions that are expensive to query.
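The Bayesian-optimization loop the abstract refers to can be sketched as follows. This is a minimal illustration with a Gaussian-process surrogate and expected improvement; the paper's point is to compare such GP surrogates against Bayesian neural networks, which this sketch does not implement. All function names and constants are illustrative.

```python
import numpy as np
from math import erf

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xo, yo, Xq, noise=1e-6):
    # Standard GP regression posterior mean/std at query points Xq.
    K = rbf(Xo, Xo) + noise * np.eye(len(Xo))
    Ks = rbf(Xo, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ yo
    var = np.clip(np.diag(rbf(Xq, Xq) - Ks.T @ sol), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # EI for minimization: E[max(best - f, 0)] under the GP posterior.
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sigma * pdf

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in "expensive" objective
Xq = np.linspace(-2, 2, 400)                   # candidate grid
Xo = np.array([-1.5, 0.0, 1.5]); yo = f(Xo)    # initial design
for _ in range(15):                            # query the point maximizing EI
    mu, sd = gp_posterior(Xo, yo, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu, sd, yo.min()))]
    Xo = np.append(Xo, x_next); yo = np.append(yo, f(x_next))
print(Xo[np.argmin(yo)], yo.min())
```

Swapping the surrogate means replacing `gp_posterior` with any model that returns a predictive mean and standard deviation, which is the axis the paper varies.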

Protein Design with Guided Discrete Diffusion

no code implementations 31 May 2023 Nate Gruver, Samuel Stanton, Nathan C. Frey, Tim G. J. Rudner, Isidro Hotzel, Julien Lafrance-Vanasse, Arvind Rajpal, Kyunghyun Cho, Andrew Gordon Wilson

A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
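The generative-plus-discriminative recipe the abstract names can be illustrated with a toy importance-resampling step: draw candidates from a generative prior and reweight them by a discriminator's predicted likelihood of the desired property, so that p(x | y) ∝ p(y | x) p(x). This is purely illustrative — the paper uses guided discrete diffusion over protein sequences, not resampling, and the "fitness" function below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Generative model": i.i.d. draws over length-8 binary sequences, a stand-in
# for a learned sequence prior (the paper uses a discrete diffusion model).
prior_samples = rng.integers(0, 2, size=(5000, 8))

def discriminator(x):
    # Toy discriminative model: probability that a sequence has the desired
    # property, here increasing with the number of ones (illustrative only).
    return 1.0 / (1.0 + np.exp(-(x.sum(axis=1) - 4.0)))

# Conditional sampling p(x | y=1) ∝ p(y=1 | x) p(x): resample the prior
# draws with the discriminator outputs as importance weights.
w = discriminator(prior_samples)
w = w / w.sum()
idx = rng.choice(len(prior_samples), size=1000, p=w)
posterior_samples = prior_samples[idx]

print(prior_samples.mean(), posterior_samples.mean())
```

The resampled set shifts toward sequences the discriminator favors, which is the basic effect that guidance achieves inside the diffusion sampler.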

An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization

no code implementations 1 Mar 2023 Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim G. J. Rudner, Yann Lecun

In this paper, we provide an information-theoretic perspective on Variance-Invariance-Covariance Regularization (VICReg) for self-supervised learning.

Self-Supervised Learning Transfer Learning
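The three terms that give VICReg its name — variance, invariance, covariance — can be written down directly. A numpy sketch of the loss over two batches of embeddings follows; the coefficient values mirror common defaults but are illustrative, and this is the loss itself, not the paper's information-theoretic analysis of it.

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg loss on two batches of embeddings z1, z2 (shape [batch, dim])
    coming from two augmented views of the same inputs."""
    n, d = z1.shape
    # Invariance: embeddings of the two views should match.
    inv = np.mean((z1 - z2) ** 2)
    # Variance: keep each dimension's std above 1 (prevents collapse).
    var = 0.0
    for z in (z1, z2):
        std = np.sqrt(z.var(axis=0) + eps)
        var += np.mean(np.maximum(0.0, 1.0 - std))
    # Covariance: push off-diagonal covariances toward zero (decorrelation).
    cov = 0.0
    for z in (z1, z2):
        zc = z - z.mean(axis=0)
        C = (zc.T @ zc) / (n - 1)
        cov += (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return sim_w * inv + var_w * var + cov_w * (cov / d)

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 16))
print(vicreg_loss(z, z + 0.01 * rng.normal(size=z.shape)))
```

Collapsed embeddings (every row identical) are penalized heavily by the variance term, which is the failure mode the regularizer exists to rule out.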

On Sequential Bayesian Inference for Continual Learning

no code implementations 4 Jan 2023 Samuel Kessler, Adam Cobb, Tim G. J. Rudner, Stefan Zohren, Stephen J. Roberts

Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and provide an informative prior when learning new tasks.

Bayesian Inference Continual Learning +1
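The recursion behind sequential Bayesian inference — the posterior after task t becomes the prior for task t+1 — is exact in conjugate models. A minimal sketch for a Gaussian mean with known noise is below; the paper's subject is why approximate versions of this recursion struggle in neural networks, which this conjugate toy does not capture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sequential Bayesian inference for a Gaussian mean with known noise:
# the posterior after each task serves as the prior for the next task.
mu, tau = 0.0, 1e-6          # prior mean and precision (nearly flat)
noise_prec = 1.0             # known observation precision

tasks = [rng.normal(2.0, 1.0, size=50) for _ in range(3)]
for data in tasks:           # one conjugate update per task
    post_prec = tau + noise_prec * len(data)
    mu = (tau * mu + noise_prec * data.sum()) / post_prec
    tau = post_prec

# Sanity check: the recursion recovers the batch posterior over all data.
all_data = np.concatenate(tasks)
batch_mu = all_data.sum() / (1e-6 + len(all_data))
print(mu, batch_mu)
```

Because the per-task posterior is exact here, no forgetting occurs; with approximate posteriors (e.g. variational ones for neural networks), errors compound across tasks, which is the failure mode the paper examines.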

On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations

1 code implementation NeurIPS 2021 Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh

KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks.

Reinforcement Learning (RL)
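The KL-regularized objective has a closed-form solution in the discrete-action case: the optimal policy tilts the demonstration prior by exponentiated values. A toy bandit sketch follows; the paper works with learned neural behavioral priors in continuous control, so everything below (the Q-values, the prior, the temperatures) is illustrative.

```python
import numpy as np

def kl_regularized_policy(q, prior, alpha):
    """Closed-form maximizer of E_pi[Q] - alpha * KL(pi || prior) over a
    discrete action set: pi(a) proportional to prior(a) * exp(Q(a)/alpha)."""
    logits = np.log(prior) + q / alpha
    logits -= logits.max()            # numerical stability
    p = np.exp(logits)
    return p / p.sum()

q = np.array([1.0, 2.0, 0.5])         # action values (illustrative)
prior = np.array([0.6, 0.2, 0.2])     # expert-demonstration prior

for alpha in (10.0, 1.0, 0.1):
    print(alpha, kl_regularized_policy(q, prior, alpha).round(3))
```

Large alpha keeps the policy close to the expert prior; small alpha concentrates on the highest-value action. The paper's pathologies arise from how this trade-off interacts with the variance of the learned prior, which a fixed tabular prior cannot exhibit.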

Outcome-Driven Reinforcement Learning via Variational Inference

no code implementations NeurIPS 2021 Tim G. J. Rudner, Vitchyr H. Pong, Rowan Mcallister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

Reinforcement Learning (RL) +1

Inter-domain Deep Gaussian Processes

no code implementations 1 Nov 2020 Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes, an extension of inter-domain shallow GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs), and demonstrate how to leverage existing approximate inference methods to perform simple and scalable approximate inference using inter-domain features in DGPs.

Gaussian Processes

On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes

1 code implementation 1 Nov 2020 Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth

We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues.

Gaussian Processes Variational Inference
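The signal-to-noise ratio diagnostic in question is |E[estimate]| / std(estimate) for the gradient of the importance-weighted bound. A one-dimensional Gaussian toy reproduces the qualitative effect — the SNR of the variational-parameter gradient shrinks as the number of importance samples K grows; the paper diagnoses the analogous collapse in deep GPs, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: p(z)=N(0,1), p(x|z)=N(z,1), variational q(z)=N(m,1), with x=1
# and m=0. We form reparameterized gradient estimates of the K-sample
# importance-weighted ELBO w.r.t. m and measure their signal-to-noise ratio.
x, m, reps = 1.0, 0.0, 3000

def grad_estimates(K):
    eps = rng.normal(size=(reps, K))
    z = m + eps
    # log w = log p(z) + log p(x|z) - log q(z); for m=0 the prior and
    # entropy terms cancel, leaving -0.5*(x - z)^2 up to constants.
    logw = -0.5 * (x - z) ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # self-normalized importance weights
    g = x - 2.0 * z                     # pathwise d log w / dm per sample
    return (w * g).sum(axis=1)          # one IW gradient estimate per row

for K in (1, 10, 100):
    ghat = grad_estimates(K)
    print(K, abs(ghat.mean()) / ghat.std())
```

The gradient's mean decays faster than its standard deviation as K grows, so more importance samples make this particular gradient *noisier* in relative terms — the counterintuitive behavior the paper studies.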

A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

1 code implementation 22 Dec 2019 Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon, Yarin Gal

From our comparison we conclude that some current techniques which solve benchmarks such as UCI "overfit" their uncertainty to the dataset: when evaluated on our benchmark, these underperform in comparison to simpler baselines.

Out-of-Distribution Detection

Multi³Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery

1 code implementation 5 Dec 2018 Tim G. J. Rudner, Marc Rußwurm, Jakub Fil, Ramona Pelich, Benjamin Bischke, Veronika Kopackova, Piotr Bilinski

We propose a novel approach for rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network.

Flooded Building Segmentation

VIREL: A Variational Inference Framework for Reinforcement Learning

1 code implementation NeurIPS 2019 Matthew Fellows, Anuj Mahajan, Tim G. J. Rudner, Shimon Whiteson

This gives VIREL a mode-seeking form of KL divergence, the ability to learn deterministic optimal policies naturally from inference, and the ability to optimise value functions and policies in separate, iterative steps.

Reinforcement Learning (RL) +1
