no code implementations • 13 Nov 2023 • Liam Hodgkinson, Chris van der Heide, Robert Salomone, Fred Roosta, Michael W. Mahoney
Deep learning is renowned for its theory-practice gap, whereby principled theory typically offers little beneficial guidance for practical implementation.
no code implementations • 28 Jul 2023 • Conor Hassan, Robert Salomone, Kerrie Mengersen
This article provides a comprehensive synthesis of the recent developments in synthetic data generation via deep generative models, focusing on tabular datasets.
no code implementations • 15 Jul 2023 • Liam Hodgkinson, Chris van der Heide, Robert Salomone, Fred Roosta, Michael W. Mahoney
The problem of model selection is considered for the setting of interpolating estimators, where the number of model parameters exceeds the size of the dataset.
1 code implementation • 19 Apr 2023 • Katie Buchhorn, Edgar Santos-Fernandez, Kerrie Mengersen, Robert Salomone
We further examine the strengths and weaknesses of this baseline approach, the Graph Deviation Network (GDN), in comparison with other benchmark methods on complex real-world river network data.
no code implementations • 7 Feb 2023 • Conor Hassan, Robert Salomone, Kerrie Mengersen
Federated learning methods enable model training across distributed data sources without data leaving their original locations and have gained increasing interest in various fields.
1 code implementation • 22 Oct 2022 • Laurence Davies, Robert Salomone, Matthew Sutton, Christopher Drovandi
Reversible jump Markov chain Monte Carlo (RJMCMC) proposals that achieve reasonable acceptance rates and mixing are notoriously difficult to design in most applications.
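As a hedged illustration of the mechanics the paper addresses (not its proposed proposal scheme), here is a minimal vanilla RJMCMC sketch on an assumed toy problem: model 1 has a single standard-normal parameter, model 2 has two, each model with prior probability 1/2. The birth move draws an auxiliary variable `u` from a wide Gaussian (the `sigma_u` width is a hypothetical choice), and the acceptance ratio is the target density of the new coordinate over the proposal density of `u`; the Jacobian of the identity mapping is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trans-dimensional target: model k=1 has one parameter x ~ N(0,1);
# model k=2 has two parameters (x1, x2) ~ N(0, I); both models have
# prior probability 1/2, so the chain should visit each ~50% of the time.

def log_norm(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def rjmcmc(n_iters=20000, sigma_u=2.0):
    k, x = 1, np.array([0.0])
    visits_k1 = 0
    for _ in range(n_iters):
        if rng.random() < 0.5:  # between-model (reversible jump) move
            if k == 1:
                # birth: draw auxiliary u ~ N(0, sigma_u^2), propose (x, u)
                u = sigma_u * rng.standard_normal()
                # acceptance: target density of u / proposal density of u
                log_alpha = log_norm(u) - (log_norm(u / sigma_u) - np.log(sigma_u))
                if np.log(rng.random()) < log_alpha:
                    k, x = 2, np.append(x, u)
            else:
                # death: drop the second coordinate (reverse of the birth move)
                u = x[1]
                log_alpha = (log_norm(u / sigma_u) - np.log(sigma_u)) - log_norm(u)
                if np.log(rng.random()) < log_alpha:
                    k, x = 1, x[:1]
        else:  # within-model random-walk Metropolis move
            prop = x + 0.5 * rng.standard_normal(x.shape)
            log_alpha = log_norm(prop).sum() - log_norm(x).sum()
            if np.log(rng.random()) < log_alpha:
                x = prop
        visits_k1 += (k == 1)
    return visits_k1 / n_iters

frac = rjmcmc()  # fraction of time spent in model 1; should be near 0.5
```

Even on this easy example, a poorly scaled `sigma_u` degrades the between-model acceptance rate, which is exactly the design difficulty the paper refers to.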
no code implementations • 19 May 2022 • Matthew Sutton, Robert Salomone, Augustin Chevallier, Paul Fearnhead
We show how PDMPs, and particularly the Zig-Zag sampler, can be implemented to sample from such an extended distribution.
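For readers unfamiliar with PDMPs, a minimal one-dimensional Zig-Zag sampler for a standard normal target (an assumed toy target, not the extended distribution constructed in the paper) looks as follows. The event rate is λ(x, v) = max(0, v·U′(x)) with U(x) = x²/2, and for this target the event times can be sampled exactly by inverting the integrated rate, so no thinning step is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal 1-D Zig-Zag sampler for a standard normal target U(x) = x^2 / 2.
# The velocity flips between +1 and -1 at events of rate max(0, v * x).

def zigzag(T=50000.0):
    x, v, t = 0.0, 1.0, 0.0
    m1 = m2 = 0.0  # running integrals of x and x^2 along the trajectory
    while t < T:
        e = rng.exponential()
        s = v * x  # signed position in the direction of travel
        if s >= 0:
            # invert the integrated rate: s*tau + tau^2/2 = e
            tau = -s + np.sqrt(s * s + 2.0 * e)
        else:
            # rate is zero until the sign change at time -s, then accumulate
            tau = -s + np.sqrt(2.0 * e)
        tau = min(tau, T - t)
        x_new = x + v * tau
        # exact integrals of x and x^2 over the linear segment
        m1 += 0.5 * (x + x_new) * tau
        m2 += (x_new**3 - x**3) / (3.0 * v)
        x, v, t = x_new, -v, t + tau
    return m1 / T, m2 / T  # time-averaged first and second moments

mean, second = zigzag()
```

The returned time averages should be close to the target's moments (0 and 1); estimates are computed by integrating along the piecewise-linear trajectory rather than from discrete samples.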
no code implementations • 25 Jan 2020 • Liam Hodgkinson, Robert Salomone, Fred Roosta
Stein importance sampling is a widely applicable technique, based on kernelized Stein discrepancy, that corrects the output of approximate sampling algorithms by reweighting the empirical distribution of the samples.
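A minimal sketch of the reweighting idea, under assumed choices not taken from the paper (a one-dimensional N(0, 1) target, an RBF base kernel with bandwidth `h = 1`, and SLSQP for the simplex-constrained quadratic program): weights minimize the kernelized Stein discrepancy wᵀK₀w over the probability simplex, where K₀ is the Langevin–Stein kernel matrix built from the target's score function.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Samples from a deliberately biased proposal N(0.8, 1) are reweighted so
# their weighted empirical distribution better matches the target N(0, 1).

def stein_kernel_matrix(x, h=1.0):
    # Langevin-Stein kernel for target N(0,1): score s(x) = -x, RBF base kernel
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    s = -x
    dkx = -d / h**2 * k                   # d/dx k(x, y)
    dky = d / h**2 * k                    # d/dy k(x, y)
    dkxy = (1 / h**2 - d**2 / h**4) * k   # d^2/(dx dy) k(x, y)
    return (s[:, None] * s[None, :] * k
            + s[:, None] * dky + s[None, :] * dkx + dkxy)

def stein_weights(x):
    n = len(x)
    K0 = stein_kernel_matrix(x)
    res = minimize(
        lambda w: w @ K0 @ w,             # squared KSD of the weighted sample
        np.full(n, 1.0 / n),              # start from uniform weights
        jac=lambda w: 2 * K0 @ w,
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x, K0

x = 0.8 + rng.standard_normal(100)  # biased sample
w, K0 = stein_weights(x)
# The optimized weights achieve a KSD no larger than the uniform weights',
# and should pull the weighted sample mean w @ x toward the target mean 0.
```

The quadratic program is convex (K₀ is positive semi-definite), so the uniform starting point guarantees the optimized weights do at least as well as the unweighted sample.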
no code implementations • 29 Mar 2019 • Liam Hodgkinson, Robert Salomone, Fred Roosta
Theoretical and algorithmic properties of the resulting sampling methods for $\theta \in [0, 1]$ and a range of step sizes are established.
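A hedged sketch of one natural reading of this family, assuming a θ-method discretization of Langevin dynamics that interpolates between the explicit (θ = 0) and fully implicit (θ = 1) schemes; the Gaussian target below is an illustrative assumption, chosen because the implicit step then has a closed form.

```python
import numpy as np

rng = np.random.default_rng(3)

# theta-method Langevin sketch for a standard normal target, U(x) = x^2 / 2,
# so grad U(x) = x. The update solves
#     x' = x - h * ((1 - theta) * x + theta * x') + sqrt(2h) * xi,
# which for this quadratic U is linear in x' and can be solved exactly.
# theta = 0 recovers the explicit (unadjusted) Langevin algorithm and
# theta = 1 the fully implicit scheme.

def theta_langevin(theta, h=0.1, n=200000):
    a = (1.0 - h * (1.0 - theta)) / (1.0 + h * theta)  # AR(1) coefficient
    b = np.sqrt(2.0 * h) / (1.0 + h * theta)           # noise scale
    x = 0.0
    xs = np.empty(n)
    for i in range(n):
        x = a * x + b * rng.standard_normal()
        xs[i] = x
    return xs

samples = theta_langevin(theta=0.5)
```

For this target the stationary variance of the chain works out to 1 / (1 + h(θ − 1/2)), so θ = 1/2 reproduces the target variance exactly at any step size, while θ = 0 overestimates and θ = 1 underestimates it, which illustrates why properties over the whole range of θ and step sizes are worth establishing.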