Search Results for author: Takuo Matsubara

Found 6 papers, 5 papers with code

Wasserstein Gradient Boosting: A General Framework with Applications to Posterior Regression

1 code implementation • 15 May 2024 • Takuo Matsubara

In probabilistic prediction, a parametric probability distribution is often specified on the space of output variables, and a point estimate of the output-distribution parameter is produced for each input by a model.
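The setup described above can be sketched in a toy model: each input is mapped to a point estimate of the parameters of a Gaussian over the output variable. This is a minimal illustration of the posterior-regression setting, not the paper's method; the weights `w` are hypothetical.

```python
import numpy as np

def predict_params(x, w):
    """Toy model: map each input x to a point estimate of the
    parameters (mu, sigma) of a Gaussian over the output variable."""
    mu = w[0] + w[1] * x
    sigma = np.exp(w[2] + w[3] * x)   # exp keeps the scale positive
    return mu, sigma

w = np.array([0.0, 1.0, -1.0, 0.1])   # hypothetical fitted weights
for x in [0.0, 1.0, 2.0]:
    mu, sigma = predict_params(x, w)
    print(f"x={x}: N(mu={mu:.2f}, sigma={sigma:.3f})")
```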


TCE: A Test-Based Approach to Measuring Calibration Error

1 code implementation • 25 Jun 2023 • Takuo Matsubara, Niek Tax, Richard Mudd, Ido Guy

This paper proposes a new metric to measure the calibration error of probabilistic binary classifiers, called test-based calibration error (TCE).

Generalised Bayesian Inference for Discrete Intractable Likelihood

1 code implementation • 16 Jun 2022 • Takuo Matsubara, Jeremias Knoblauch, François-Xavier Briol, Chris J. Oates

Discrete state spaces represent a major computational challenge to statistical inference, since the computation of normalisation constants requires summation over large or possibly infinite sets, which can be impractical.
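The computational challenge can be made concrete with a toy pairwise model on binary vectors (a hypothetical example, not one drawn from the paper): the normalising constant requires a sum over all 2^d states, which is feasible only for small d.

```python
import itertools
import numpy as np

def log_unnorm(x, theta):
    """Unnormalised log-probability of a binary vector under a toy
    pairwise (Ising-like) model with interaction strength theta."""
    return theta * np.sum(x[:-1] * x[1:]) + 0.1 * np.sum(x)

def log_normaliser(d, theta):
    """Brute-force log normalising constant: a sum over all 2**d
    states, which becomes impractical as d grows."""
    states = itertools.product([0, 1], repeat=d)
    vals = [log_unnorm(np.array(s), theta) for s in states]
    m = max(vals)  # log-sum-exp for numerical stability
    return m + np.log(sum(np.exp(v - m) for v in vals))

print(log_normaliser(10, 0.5))   # 1,024 terms: feasible
# log_normaliser(50, 0.5) would need 2**50 (about 1e15) terms: intractable
```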

Bayesian Inference

Robust Generalised Bayesian Inference for Intractable Likelihoods

1 code implementation • 15 Apr 2021 • Takuo Matsubara, Jeremias Knoblauch, François-Xavier Briol, Chris J. Oates

Generalised Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible mis-specification of the likelihood.
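The loss-based update can be sketched on a parameter grid: the generalised posterior is proportional to prior × exp(−β·loss), and a robust loss (here absolute error, as a stand-in for the paper's Stein-discrepancy-based loss) is less affected by an outlier than the squared error implied by a Gaussian likelihood. All numbers below are illustrative.

```python
import numpy as np

def generalised_posterior(theta_grid, log_prior, loss, beta=1.0):
    """Sketch of a generalised Bayesian update on a grid: posterior
    weights proportional to prior * exp(-beta * loss(theta)), with a
    loss function in place of a negative log-likelihood."""
    log_post = log_prior(theta_grid) - beta * loss(theta_grid)
    log_post -= log_post.max()       # stabilise before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

data = np.array([1.8, 2.1, 2.4, 1.9, 8.0])   # one gross outlier
grid = np.linspace(-5, 10, 1501)

log_prior = lambda t: -0.5 * (t / 5.0) ** 2   # N(0, 5^2) prior on a grid

# Robust absolute-error loss vs. the squared error a Gaussian implies.
abs_loss = lambda t: np.abs(data[:, None] - t[None, :]).sum(axis=0)
sq_loss = lambda t: 0.5 * ((data[:, None] - t[None, :]) ** 2).sum(axis=0)

mean_abs = (grid * generalised_posterior(grid, log_prior, abs_loss)).sum()
mean_sq = (grid * generalised_posterior(grid, log_prior, sq_loss)).sum()
print(mean_abs, mean_sq)  # robust update is pulled less toward the outlier
```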

Bayesian Inference

The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks

1 code implementation • 16 Oct 2020 • Takuo Matsubara, Chris J. Oates, François-Xavier Briol

Our approach constructs a prior distribution for the parameters of the network, called a ridgelet prior, that approximates the posited Gaussian process in the output space of the network.

Gaussian Processes

The global optimum of shallow neural network is attained by ridgelet transform

no code implementations • 19 May 2018 • Sho Sonoda, Isao Ishikawa, Masahiro Ikeda, Kei Hagihara, Yoshihiro Sawano, Takuo Matsubara, Noboru Murata

We prove that the global minimum of the backpropagation (BP) training problem of neural networks with an arbitrary nonlinear activation is given by the ridgelet transform.
