Search Results for author: Tim G. J. Rudner

Found 29 papers, 19 papers with code

Inter-domain Deep Gaussian Processes with RKHS Fourier Features

no code implementations • ICML 2020 • Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes with RKHS Fourier Features, an extension of shallow inter-domain GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs), and demonstrate how to leverage existing approximate inference approaches to perform simple and scalable approximate inference on Inter-domain Deep Gaussian Processes.

Gaussian Processes
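
The abstract builds on Fourier-feature representations of GPs. As a loose illustration of the underlying idea only (not the paper's inter-domain construction), here is a minimal NumPy sketch of random Fourier feature regression, which approximates RBF-kernel GP regression with a finite feature map; all constants and names are illustrative.

```python
import numpy as np

# Minimal sketch: random Fourier features give a finite-dimensional feature
# map phi(x) whose inner products approximate an RBF kernel, so Bayesian
# linear regression on phi(x) approximates GP regression. The paper's
# inter-domain/RKHS Fourier features are a more structured variant; this
# only illustrates the general Fourier-feature idea.

rng = np.random.default_rng(0)
lengthscale, noise_var, num_features = 0.5, 0.1, 200

X = rng.uniform(-3, 3, size=(50, 1))                 # training inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

# Sample spectral frequencies for an RBF kernel with the given lengthscale.
W = rng.standard_normal((num_features, 1)) / lengthscale
b = rng.uniform(0, 2 * np.pi, num_features)

def phi(X):
    return np.sqrt(2.0 / num_features) * np.cos(X @ W.T + b)

# Bayesian linear regression in feature space (standard normal prior on weights).
Phi = phi(X)
A = Phi.T @ Phi / noise_var + np.eye(num_features)
mean_w = np.linalg.solve(A, Phi.T @ y / noise_var)

X_test = np.linspace(-3, 3, 100)[:, None]
f_mean = phi(X_test) @ mean_w                        # approximate GP posterior mean
```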

Non-Vacuous Generalization Bounds for Large Language Models

no code implementations • 28 Dec 2023 • Sanae Lotfi, Marc Finzi, Yilun Kuang, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson

Modern language models can contain billions of parameters, raising the question of whether they can generalize beyond the training data or simply regurgitate their training corpora.

Generalization Bounds

Continual Learning via Sequential Function-Space Variational Inference

no code implementations • 28 Dec 2023 • Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal

Sequential Bayesian inference over predictive functions is a natural framework for continual learning from streams of data.

Bayesian Inference Continual Learning +2

Tractable Function-Space Variational Inference in Bayesian Neural Networks

1 code implementation • 28 Dec 2023 • Tim G. J. Rudner, Zonghao Chen, Yee Whye Teh, Yarin Gal

Recognizing that the primary object of interest in most settings is the distribution over functions induced by the posterior distribution over neural network parameters, we frame Bayesian inference in neural networks explicitly as inferring a posterior distribution over functions and propose a scalable function-space variational inference method that allows incorporating prior information and results in reliable predictive uncertainty estimates.

Bayesian Inference Medical Diagnosis +1
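
A rough PyTorch sketch of the function-space idea described above, under strong assumptions: approximate both the variational and prior distributions over function values at a batch of context points as diagonal Gaussians fitted from Monte Carlo samples, and penalize the KL between them in place of a parameter-space KL. This is a simplification, not the paper's actual estimator; `gaussian_moments` and `function_space_kl` are hypothetical helpers.

```python
import torch

def gaussian_moments(f_samples):
    # f_samples: [num_samples, num_points] Monte Carlo function values at
    # context points; fit a diagonal Gaussian over function values.
    return f_samples.mean(0), f_samples.var(0) + 1e-6

def function_space_kl(q_samples, p_samples):
    # Crude diagonal-Gaussian approximation to KL(q(f) || p(f)) over the
    # context points; this term would replace the parameter-space KL in an
    # ELBO. The paper's actual estimator is more sophisticated.
    mq, vq = gaussian_moments(q_samples)
    mp, vp = gaussian_moments(p_samples)
    return 0.5 * (torch.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0).sum()
```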

Visual Explanations of Image-Text Representations via Multi-Modal Information Bottleneck Attribution

1 code implementation • NeurIPS 2023 • Ying Wang, Tim G. J. Rudner, Andrew Gordon Wilson

Vision-language pretrained models have seen remarkable success, but their application to safety-critical settings is limited by their lack of interpretability.

Can Active Sampling Reduce Causal Confusion in Offline Reinforcement Learning?

1 code implementation • 28 Dec 2023 • Gunshi Gupta, Tim G. J. Rudner, Rowan Thomas McAllister, Adrien Gaidon, Yarin Gal

To answer this question, we consider a set of tailored offline reinforcement learning datasets that exhibit causal ambiguity and assess the ability of active sampling techniques to reduce causal confusion at evaluation.

reinforcement-learning

Function-Space Regularization in Neural Networks: A Probabilistic Perspective

1 code implementation • 28 Dec 2023 • Tim G. J. Rudner, Sanyam Kapoor, Shikai Qiu, Andrew Gordon Wilson

In this work, we approach regularization in neural networks from a probabilistic perspective and show that by viewing parameter-space regularization as specifying an empirical prior distribution over the model parameters, we can derive a probabilistically well-motivated regularization technique that allows explicitly encoding information about desired predictive functions into neural network training.
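
A minimal PyTorch sketch of the correspondence the abstract describes, with illustrative names and not the paper's exact objective: weight decay is the negative log-density of an isotropic Gaussian prior over parameters, while a function-space analogue instead penalizes deviation from a reference function at context points.

```python
import torch

def parameter_space_penalty(model, scale=1e-4):
    # Weight decay == negative log of an isotropic Gaussian prior over
    # the model parameters (up to constants).
    return scale * sum((p ** 2).sum() for p in model.parameters())

def function_space_penalty(model, reference_fn, context_x, scale=1.0):
    # Gaussian prior over *function values* at context points: penalize the
    # network for drifting from a reference function where we hold beliefs.
    return scale * ((model(context_x) - reference_fn(context_x)) ** 2).mean()
```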

Should We Learn Most Likely Functions or Parameters?

1 code implementation • NeurIPS 2023 • Shikai Qiu, Tim G. J. Rudner, Sanyam Kapoor, Andrew Gordon Wilson

Moreover, the most likely parameters under the parameter posterior do not generally correspond to the most likely function induced by the parameter posterior.
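
The claim follows from the change-of-variables formula: densities are not invariant under reparameterization, so parameter-space and function-space modes generally differ. For an invertible map $f = g(\theta)$:

```latex
p_f(f) \;=\; p_\theta\big(g^{-1}(f)\big)\,
\left|\det \frac{\partial g^{-1}(f)}{\partial f}\right|
```

The Jacobian factor can move the maximizer, so $\arg\max_f p_f(f) \neq g(\arg\max_\theta p_\theta(\theta))$ in general.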

Informative Priors Improve the Reliability of Multimodal Clinical Data Classification

no code implementations • 17 Nov 2023 • L. Julian Lechuga Lopez, Tim G. J. Rudner, Farah E. Shamout

We use simple and scalable Gaussian mean-field variational inference to train a Bayesian neural network using the M2D2 prior.

Time Series Variational Inference
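
Gaussian mean-field variational inference itself is standard (Bayes-by-Backprop style). A minimal sketch of one variational layer follows, with a standard-normal prior standing in for the paper's M2D2 prior, which this sketch does not reproduce.

```python
import torch
import torch.nn as nn

class MeanFieldLinear(nn.Module):
    # One linear layer with a factorized Gaussian variational posterior over
    # weights; a standard-normal prior stands in for the paper's M2D2 prior.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # reparameterization trick
        return x @ w.t()

    def kl(self):
        # KL between N(mu, sigma^2) and N(0, 1), summed over weights; add
        # this (scaled by 1/num_batches) to the negative log-likelihood.
        sigma = torch.nn.functional.softplus(self.rho)
        return (0.5 * (sigma ** 2 + self.mu ** 2 - 1.0) - torch.log(sigma)).sum()
```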

Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions

1 code implementation • 14 Jul 2023 • Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler, Garrett M. Morris, Charlotte Deane, Yee Whye Teh

Accelerating the discovery of novel and more effective therapeutics is an important pharmaceutical problem in which deep learning is playing an increasingly significant role.

Domain Adaptation Drug Discovery

A Study of Bayesian Neural Network Surrogates for Bayesian Optimization

2 code implementations • 31 May 2023 • Yucen Lily Li, Tim G. J. Rudner, Andrew Gordon Wilson

Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query.

Bayesian Optimization
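
A minimal sketch of the Bayesian optimization loop the abstract refers to, using scikit-learn's GP regressor purely as a stand-in surrogate; a BNN surrogate as studied in the paper would replace the mean/std predictions. The toy objective is illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # toy stand-in for an expensive black box
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))     # initial design
y = objective(X).ravel()

candidates = np.linspace(-2, 2, 200)[:, None]
for _ in range(10):
    # Fit the surrogate; any model producing predictive mean/std works here.
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, std = surrogate.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (minimization form).
    z = (best - mu) / np.maximum(std, 1e-9)
    ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())
```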

An Information-Theoretic Perspective on Variance-Invariance-Covariance Regularization

no code implementations • 1 Mar 2023 • Ravid Shwartz-Ziv, Randall Balestriero, Kenji Kawaguchi, Tim G. J. Rudner, Yann LeCun

In this paper, we provide an information-theoretic perspective on Variance-Invariance-Covariance Regularization (VICReg) for self-supervised learning.

Self-Supervised Learning Transfer Learning
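
For reference, the published VICReg objective (Bardes et al., 2022) combines invariance, variance, and covariance terms. A compact PyTorch sketch with that paper's default weights:

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, inv_w=25.0, var_w=25.0, cov_w=1.0):
    # z_a, z_b: [batch, dim] embeddings of two augmented views of the same images.
    n, d = z_a.shape

    inv = F.mse_loss(z_a, z_b)                        # invariance term

    def variance_term(z):                             # keep each dim's std >= 1
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        return F.relu(1.0 - std).mean()

    def covariance_term(z):                           # decorrelate embedding dims
        z = z - z.mean(dim=0)
        cov = (z.t() @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d

    var = variance_term(z_a) + variance_term(z_b)
    cov = covariance_term(z_a) + covariance_term(z_b)
    return inv_w * inv + var_w * var + cov_w * cov
```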

On Sequential Bayesian Inference for Continual Learning

1 code implementation • 4 Jan 2023 • Samuel Kessler, Adam Cobb, Tim G. J. Rudner, Stefan Zohren, Stephen J. Roberts

Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and provide an informative prior when learning new tasks.

Bayesian Inference Continual Learning +1
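
The recursion behind sequential Bayesian continual learning: the posterior after task $t-1$ serves as the prior when task data $\mathcal{D}_t$ arrives,

```latex
p(\theta \mid \mathcal{D}_{1:t}) \;\propto\; p(\mathcal{D}_t \mid \theta)\, p(\theta \mid \mathcal{D}_{1:t-1}).
```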

On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations

1 code implementation • NeurIPS 2021 • Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh

KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks.

reinforcement-learning Reinforcement Learning (RL)
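
KL-regularized RL objectives of the kind studied here typically take the following standard form, trading return against divergence from a behavioral prior $\pi_0$ distilled from expert demonstrations (with temperature $\alpha$):

```latex
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\Big( r(s_t, a_t)
\;-\; \alpha\, D_{\mathrm{KL}}\big(\pi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t)\big) \Big)\right]
```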

Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations

2 code implementations • 9 Jun 2022 • Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh

Using this suite of benchmarking tasks, we show that simple modifications to two popular vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform existing offline RL methods and establish competitive baselines for continuous control in the visual domain.

Benchmarking Continuous Control +3

Outcome-Driven Reinforcement Learning via Variational Inference

no code implementations • NeurIPS 2021 • Tim G. J. Rudner, Vitchyr H. Pong, Rowan McAllister, Yarin Gal, Sergey Levine

While reinforcement learning algorithms provide automated acquisition of optimal policies, practical application of such methods requires a number of design decisions, such as manually designing reward functions that not only define the task, but also provide sufficient shaping to accomplish it.

reinforcement-learning Reinforcement Learning (RL) +1

Inter-domain Deep Gaussian Processes

no code implementations • 1 Nov 2020 • Tim G. J. Rudner, Dino Sejdinovic, Yarin Gal

We propose Inter-domain Deep Gaussian Processes, an extension of inter-domain shallow GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs), and demonstrate how to leverage existing approximate inference methods to perform simple and scalable approximate inference using inter-domain features in DGPs.

Gaussian Processes

On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes

1 code implementation • 1 Nov 2020 • Tim G. J. Rudner, Oscar Key, Yarin Gal, Tom Rainforth

We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues.

Gaussian Processes Variational Inference
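
Measuring the SNR of a stochastic gradient estimator is simple in principle; a generic NumPy sketch follows (the paper's analysis concerns importance-weighted DGP training objectives specifically, which this sketch does not reproduce).

```python
import numpy as np

def gradient_snr(grad_estimator, num_trials=1000):
    # grad_estimator: callable returning one stochastic estimate of the
    # gradient as a flat array. SNR per coordinate = |mean| / std; a
    # vanishing SNR means the update direction is dominated by estimator
    # noise rather than signal.
    grads = np.stack([grad_estimator() for _ in range(num_trials)])
    return np.abs(grads.mean(axis=0)) / (grads.std(axis=0) + 1e-12)
```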

A Systematic Comparison of Bayesian Deep Learning Robustness in Diabetic Retinopathy Tasks

1 code implementation • 22 Dec 2019 • Angelos Filos, Sebastian Farquhar, Aidan N. Gomez, Tim G. J. Rudner, Zachary Kenton, Lewis Smith, Milad Alizadeh, Arnoud de Kroon, Yarin Gal

From our comparison we conclude that some current techniques which solve benchmarks such as UCI 'overfit' their uncertainty to the dataset; when evaluated on our benchmark, these underperform in comparison to simpler baselines.

Out-of-Distribution Detection

Multi³Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery

1 code implementation • 5 Dec 2018 • Tim G. J. Rudner, Marc Rußwurm, Jakub Fil, Ramona Pelich, Benjamin Bischke, Veronika Kopackova, Piotr Bilinski

We propose a novel approach for rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network.

Flooded Building Segmentation Segmentation
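
A minimal PyTorch sketch of the fusion pattern the abstract describes: one encoder per sensor, features upsampled to a shared grid, concatenated, and decoded into a segmentation mask. Channel counts and layer choices here are illustrative, not Multi³Net's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSegNet(nn.Module):
    # Two-branch sketch: one encoder per sensor/resolution, features
    # upsampled to a common grid, concatenated, then decoded into a mask.
    def __init__(self, ch_sar=2, ch_optical=3, num_classes=2):
        super().__init__()
        self.enc_sar = nn.Sequential(
            nn.Conv2d(ch_sar, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc_opt = nn.Sequential(
            nn.Conv2d(ch_optical, 32, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, sar, optical):
        h, w = optical.shape[-2:]
        f_sar = F.interpolate(self.enc_sar(sar), size=(h, w),
                              mode="bilinear", align_corners=False)
        f_opt = F.interpolate(self.enc_opt(optical), size=(h, w),
                              mode="bilinear", align_corners=False)
        return self.head(torch.cat([f_sar, f_opt], dim=1))
```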

VIREL: A Variational Inference Framework for Reinforcement Learning

1 code implementation • NeurIPS 2019 • Matthew Fellows, Anuj Mahajan, Tim G. J. Rudner, Shimon Whiteson

This gives VIREL a mode-seeking form of KL divergence, the ability to learn deterministic optimal policies naturally from inference, and the ability to optimise value functions and policies in separate, iterative steps.

reinforcement-learning Reinforcement Learning (RL) +1
