Search Results for author: Russell Tsuchida

Found 12 papers, 6 papers with code

Exact, Fast and Expressive Poisson Point Processes via Squared Neural Families

1 code implementation · 14 Feb 2024 · Russell Tsuchida, Cheng Soon Ong, Dino Sejdinovic

Maximum likelihood and maximum a posteriori estimates in a reparameterisation of the final layer of the intensity function can be obtained by solving a (strongly) convex optimisation problem using projected gradient descent.

Gaussian Processes · Point Processes
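The snippet above mentions fitting the final layer by projected gradient descent on a (strongly) convex problem. A generic sketch of that optimisation template follows; the toy quadratic objective and non-negativity constraint are illustrative stand-ins, not the paper's intensity-function model.

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, lr=0.1, n_steps=500):
    """Minimise a convex objective: gradient step, then project back
    onto the feasible set."""
    x = x0
    for _ in range(n_steps):
        x = project(x - lr * grad(x))
    return x

# Toy problem: minimise ||x - c||^2 subject to x >= 0.
c = np.array([1.0, -2.0, 3.0])
grad = lambda x: 2 * (x - c)           # gradient of the quadratic
project = lambda x: np.maximum(x, 0)   # Euclidean projection onto the orthant
x_star = projected_gradient_descent(grad, project, np.zeros(3))
# The constrained optimum clips negative coordinates of c to zero.
```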

Gaussian Ensemble Belief Propagation for Efficient Inference in High-Dimensional Systems

no code implementations · 13 Feb 2024 · Dan MacKinlay, Russell Tsuchida, Dan Pagendam, Petra Kuhnert

The use of local messages in a graphical model structure ensures that the approach is suited to distributed computing and can efficiently handle complex dependence structures.

Distributed Computing

Scalable Optimal Transport Methods in Machine Learning: A Contemporary Survey

1 code implementation · 8 May 2023 · Abdelwahed Khamis, Russell Tsuchida, Mohamed Tarek, Vivien Rolland, Lars Petersson

This paper is about where and how optimal transport is used in machine learning with a focus on the question of scalable optimal transport.
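A workhorse of scalable optimal transport surveyed in this literature is entropic regularisation solved by Sinkhorn's matrix-scaling iterations. A minimal sketch on a toy histogram pair (a standard method, not code from the survey; the regularisation strength and iteration count are arbitrary choices):

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, n_iters=500):
    """Entropy-regularised OT between histograms a and b with cost C,
    via alternating row/column scaling of the Gibbs kernel."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # match column marginal b
        u = a / (K @ v)                  # match row marginal a
    return u[:, None] * K * v[None, :]   # transport plan

# Toy example: two 3-point histograms on a line with squared-distance cost.
x = np.array([0.0, 1.0, 2.0])
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
# Rows of P sum to a and columns to b (up to numerical tolerance).
```

Each iteration costs only matrix-vector products, which is what makes the scheme attractive at scale.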

Deep equilibrium models as estimators for continuous latent variables

1 code implementation · 11 Nov 2022 · Russell Tsuchida, Cheng Soon Ong

We consider a generalised setting where the canonical parameters of the exponential family are a nonlinear transformation of the latents.
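A deep equilibrium model defines its hidden state implicitly as a fixed point z* = f(z*, x) rather than by stacking layers. A minimal sketch of that idea, assuming a simple contractive tanh layer and naive fixed-point iteration (the paper's estimator and parameterisation are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A contractive layer z -> tanh(W z + U x): rescaling W to spectral
# norm 0.8 guarantees the map is a contraction, so iteration converges.
W = rng.standard_normal((8, 8))
W *= 0.8 / np.linalg.norm(W, 2)
U = rng.standard_normal((8, 4))
x = rng.standard_normal(4)

def f(z):
    return np.tanh(W @ z + U @ x)

# Solve z* = f(z*) by fixed-point iteration from zero.
z = np.zeros(8)
for _ in range(200):
    z = f(z)
# z now satisfies the equilibrium equation to numerical precision.
```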

Gaussian Process Bandits with Aggregated Feedback

no code implementations · 24 Dec 2021 · Mengyan Zhang, Russell Tsuchida, Cheng Soon Ong

We consider the continuum-armed bandits problem, under a novel setting of recommending the best arms within a fixed budget under aggregated feedback.

Declarative nets that are equilibrium models

no code implementations · ICLR 2022 · Russell Tsuchida, Suk Yee Yong, Mohammad Ali Armin, Lars Petersson, Cheng Soon Ong

We show that using a kernelised generalised linear model (kGLM) as an inner problem in a DDN yields a large class of commonly used DEQ architectures with a closed-form expression for the hidden layer parameters in terms of the kernel.

Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks

1 code implementation · 20 Feb 2020 · Russell Tsuchida, Tim Pearce, Chris van der Heide, Fred Roosta, Marcus Gallagher

More generally, we analyse the fixed-point dynamics of iterated kernels corresponding to a broad range of activation functions.

Gaussian Processes
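The fixed-point dynamics referred to above can be seen concretely for ReLU: in the infinite-width limit the layer-to-layer correlation map has a closed form (the normalised degree-1 arc-cosine kernel), and iterating it drives all correlations toward the degenerate fixed point ρ = 1, which is the behaviour ELU/GELU kernels are shown to avoid. A sketch of that iteration:

```python
import numpy as np

def relu_corr_map(rho):
    """One-layer update of the input correlation under an infinite-width
    ReLU network (normalised degree-1 arc-cosine kernel)."""
    rho = np.clip(rho, -1.0, 1.0)  # guard against rounding drift
    return (np.sqrt(1 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

rho = 0.5
for _ in range(1000):
    rho = relu_corr_map(rho)
# Depth drives the correlation toward the degenerate fixed point rho = 1,
# i.e. deep ReLU kernels forget the inputs' angular information.
```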

Richer priors for infinitely wide multi-layer perceptrons

1 code implementation · 29 Nov 2019 · Russell Tsuchida, Fred Roosta, Marcus Gallagher

The model resulting from partially exchangeable priors is a GP, with an additional level of inference in the sense that the prior and posterior predictive distributions require marginalisation over hyperparameters.

Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions

1 code implementation · 15 May 2019 · Tim Pearce, Russell Tsuchida, Mohamed Zaki, Alexandra Brintrup, Andy Neely

A simple, flexible approach to creating expressive priors in Gaussian process (GP) models is to build new kernels by combining basic ones, e.g., summing a periodic and a linear kernel can capture seasonal variation with a long-term trend.

Reinforcement Learning (RL)
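The periodic-plus-linear combination mentioned in the snippet rests on the closure property that a sum of valid kernels is itself a valid kernel. A minimal sketch with standard textbook kernel forms (hyperparameter values here are arbitrary):

```python
import numpy as np

def periodic_kernel(x1, x2, period=1.0, lengthscale=1.0):
    """Exp-sine-squared kernel: correlations repeat with the given period."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2 * np.sin(np.pi * d / period) ** 2 / lengthscale**2)

def linear_kernel(x1, x2):
    """Dot-product kernel: samples are straight lines through the origin."""
    return x1[:, None] * x2[None, :]

# Summing the two yields a kernel whose GP samples are periodic
# fluctuations around a linear trend.
x = np.linspace(0.0, 4.0, 50)
K = periodic_kernel(x, x) + linear_kernel(x, x)
# K is a symmetric positive semi-definite Gram matrix.
```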

Exchangeability and Kernel Invariance in Trained MLPs

no code implementations · 19 Oct 2018 · Russell Tsuchida, Fred Roosta, Marcus Gallagher

In the analysis of machine learning models, it is often convenient to assume that the parameters are IID.

BIG-bench Machine Learning

Invariance of Weight Distributions in Rectified MLPs

no code implementations · ICML 2018 · Russell Tsuchida, Farbod Roosta-Khorasani, Marcus Gallagher

An interesting approach to analyzing neural networks that has received renewed attention is to examine the equivalent kernel of the neural network.
