no code implementations • 11 May 2023 • Benedikt Lütke Schwienhorst, Lucas Kock, David J. Nott, Nadja Klein
A theoretical analysis shows that dropout regularization prefers rare but important features in both the mean and dispersion, generalizing an earlier result for conventional generalized linear models.
no code implementations • 31 Jan 2023 • Ryan P. Kelly, David J. Nott, David T. Frazier, David J. Warne, Chris Drovandi
Simulation-based inference techniques are indispensable for parameter estimation in mechanistic models that can be simulated but whose likelihoods are intractable.
no code implementations • 12 Nov 2020 • Sanjay Chaudhuri, Subhroshekhar Ghosh, David J. Nott, Kim Cuc Pham
The expected log-likelihood is then estimated by an empirical likelihood, where the only inputs required are a choice of summary statistic, its observed value, and the ability to simulate the chosen summary statistic for any parameter value under the model.
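The procedure can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the toy Gaussian simulator, the sample-mean summary, and the simulation budget `m` are all assumptions made for the example. For a candidate parameter it simulates summary statistics, then computes the empirical log-likelihood by reweighting the simulated summaries so their weighted mean matches the observed summary, via the standard convex dual of the empirical likelihood problem.

```python
# A minimal sketch of simulation-based empirical likelihood; the toy
# simulator, summary statistic, and tuning constants are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_summary(theta, n=100):
    """Hypothetical simulator: the summary is the mean of n N(theta, 1) draws."""
    return rng.normal(theta, 1.0, size=n).mean(keepdims=True)

def empirical_loglik(theta, s_obs, m=500):
    """Empirical log-likelihood at theta via the standard convex dual."""
    g = np.stack([simulate_summary(theta) - s_obs for _ in range(m)])  # (m, d)

    def neg_dual(lam):
        inner = 1.0 + g @ lam
        if np.any(inner <= 1e-10):        # weights must stay positive
            return 1e10
        return -np.sum(np.log(inner))     # convex in lam; minimiser gives weights

    lam = minimize(neg_dual, np.zeros(g.shape[1]), method="Nelder-Mead").x
    inner = 1.0 + g @ lam
    if np.any(inner <= 0):                # observed summary outside convex hull
        return -np.inf
    return -np.sum(np.log(m * inner))     # sum of log w_i, with w_i = 1/(m*inner_i)

s_obs = simulate_summary(1.5)             # stand-in for the observed summary
for theta in (0.5, 1.5, 2.5):
    print(theta, empirical_loglik(theta, s_obs))
```

A return value of `-inf` signals that the observed summary lies outside the convex hull of the simulated ones, i.e. that candidate parameter cannot reproduce the data.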
no code implementations • 5 Oct 2020 • Nadja Klein, Michael Stanley Smith, David J. Nott
Using data from the Australian National Electricity Market, we show that our deep time series models provide accurate short term probabilistic price forecasts, with the copula model dominating.
no code implementations • 26 Aug 2019 • Nadja Klein, David J. Nott, Michael Stanley Smith
The end result is a scalable distributional DNN regression method with marginally calibrated predictions, and our work complements existing methods for probability calibration.
no code implementations • 3 Oct 2018 • Sanjay Chaudhuri, Subhro Ghosh, David J. Nott, Kim Cuc Pham
Many scientifically well-motivated statistical models in natural, engineering and environmental sciences are specified through a generative process, but in some cases it may not be possible to write down a likelihood for these models analytically.
no code implementations • 24 Jan 2018 • Matias Quiroz, David J. Nott, Robert Kohn
The variational parameters to be optimized are the mean vector and the covariance matrix of the approximation.
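As a concrete illustration of optimizing a mean vector and covariance matrix, here is a minimal reparameterization-trick sketch, not the paper's method: the covariance is kept positive definite by optimizing a Cholesky factor L with Sigma = L L', and the toy Gaussian target, step size, and iteration count are assumptions chosen so the target's score is available in closed form.

```python
# A minimal sketch of a Gaussian variational approximation whose free
# parameters are the mean vector mu and a Cholesky factor L of the covariance.
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 0.9], [0.9, 1.0]])       # toy precision of the target

def score(theta):
    """Gradient of the toy log-posterior log p(theta) = -0.5 theta' A theta."""
    return -A @ theta

d, lr = 2, 0.005
mu = np.zeros(d)
L = np.eye(d)                                 # lower-triangular Cholesky factor

for _ in range(20000):
    eps = rng.standard_normal(d)
    theta = mu + L @ eps                      # reparameterisation trick
    g = score(theta)
    mu += lr * g                              # unbiased ELBO gradient wrt mu
    grad_L = np.tril(np.outer(g, eps))        # ELBO gradient wrt L ...
    grad_L += np.diag(1.0 / np.diag(L))       # ... plus the entropy term
    L = np.tril(L + lr * grad_L)

print("mu  ->", mu)                # should approach the target mean (zeros)
print("Sigma ->", L @ L.T)         # should approach inv(A)
```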
no code implementations • 18 May 2016 • Linda S. L. Tan, David J. Nott
We consider the problem of learning a Gaussian variational approximation to the posterior distribution for a high-dimensional parameter, where we impose sparsity in the precision matrix to reflect appropriate conditional independence structure in the model.
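The central device can be illustrated briefly. The sketch below is a toy built on assumptions, not the paper's code: the variational precision is parameterized as Omega = T T' with a sparse lower-triangular factor T, the sparsity pattern is the bidiagonal one implied by a Markov chain over five parameters, and sampling from q never forms Omega inverse explicitly.

```python
# A minimal sketch of the sparse-precision parameterisation of a Gaussian
# variational approximation; pattern and parameter values are illustrative.
import numpy as np

d = 5
# Nonzeros only on the diagonal and first subdiagonal, so Omega = T T' is
# tridiagonal: theta_i and theta_j are conditionally independent given the
# rest whenever |i - j| > 1, as in a Markov chain.
mask = np.tril(np.ones((d, d))) * (np.abs(np.subtract.outer(range(d), range(d))) <= 1)

rng = np.random.default_rng(2)
T = mask * (np.eye(d) + 0.3 * rng.standard_normal((d, d)))   # free params on the mask
T[np.diag_indices(d)] = np.abs(T[np.diag_indices(d)]) + 0.5  # keep diagonal positive

mu = np.zeros(d)

def sample_q(n):
    """Draw theta ~ N(mu, Omega^{-1}) without inverting Omega:
    solving T' x = eps gives Cov(x) = (T T')^{-1}."""
    eps = rng.standard_normal((d, n))
    return mu[:, None] + np.linalg.solve(T.T, eps)

print(np.round(T @ T.T, 2))        # tridiagonal precision Omega
print(sample_q(3).T)               # three draws from q
```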
no code implementations • 30 Jul 2013 • David J. Nott, Minh-Ngoc Tran, Anthony Y. C. Kuk, Robert Kohn
We propose a divide-and-recombine strategy for the analysis of large datasets: the data are partitioned into smaller pieces, variational distributions are learnt in parallel on each piece, and these distributions are then combined using the hybrid Variational Bayes algorithm (a sketch of one standard recombination rule follows below).
Methodology
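For Gaussian variational approximations, one standard recombination rule (used here as an illustrative assumption, not necessarily the paper's exact hybrid update) multiplies the shard posteriors and divides out the prior that each shard counted once too often; with Gaussians this is just arithmetic on natural parameters.

```python
# A minimal sketch of recombining Gaussian variational posteriors fitted to
# disjoint data shards, each fitted with the full prior N(mu0, Sigma0).
import numpy as np

def recombine(mus, Sigmas, mu0, Sigma0):
    K = len(mus)
    P0, b0 = np.linalg.inv(Sigma0), np.linalg.solve(Sigma0, mu0)
    P = -(K - 1) * P0                        # remove the K-1 surplus prior precisions
    b = -(K - 1) * b0
    for mu_k, Sigma_k in zip(mus, Sigmas):
        P += np.linalg.inv(Sigma_k)          # add each shard's natural parameters
        b += np.linalg.solve(Sigma_k, mu_k)
    Sigma = np.linalg.inv(P)
    return Sigma @ b, Sigma                  # combined mean and covariance

# Toy check: two shard posteriors around different means, vague prior;
# the combined mean should land between the shard means.
mu0, Sigma0 = np.zeros(2), 100.0 * np.eye(2)
mus = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Sigmas = [0.5 * np.eye(2), 0.5 * np.eye(2)]
print(recombine(mus, Sigmas, mu0, Sigma0))
```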
no code implementations • 9 Jun 2013 • Linda S. L. Tan, Victor M. H. Ong, David J. Nott, Ajay Jasra
We develop a fast variational approximation scheme for Gaussian process (GP) regression, where the spectrum of the covariance function is subjected to a sparse approximation.
Computation
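The sparse spectrum construction underlying the entry above can be shown compactly. The sketch below is an illustration rather than the paper's variational scheme: the stationary kernel's spectrum is sampled at M random frequencies, reducing GP regression to Bayesian linear regression on trigonometric features; the frequencies, scales, and noise level are assumed values, and the conjugate exact posterior stands in where the paper would use a variational approximation.

```python
# A minimal sketch of sparse spectrum GP regression via random Fourier
# features; all hyperparameters below are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
M, sigma_f, sigma_n, ell = 20, 1.0, 0.1, 0.5

# M spectral frequencies drawn from the RBF kernel's spectral density.
omega = rng.standard_normal((M, 1)) / ell

def features(x):
    """Trigonometric basis Phi(x) in R^{2M}, so k(x, x') ~= Phi(x)' Phi(x')."""
    proj = x @ omega.T
    return sigma_f / np.sqrt(M) * np.hstack([np.cos(proj), np.sin(proj)])

# Toy data.
X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(2 * X[:, 0]) + sigma_n * rng.standard_normal(40)

# Exact posterior mean of the feature weights (conjugate Gaussian linear
# model); a variational scheme would approximate this when it is intractable.
Phi = features(X)
A = Phi.T @ Phi / sigma_n**2 + np.eye(2 * M)
mean_w = np.linalg.solve(A, Phi.T @ y) / sigma_n**2

Xs = np.linspace(-3, 3, 9)[:, None]
print(np.round(features(Xs) @ mean_w, 2))    # approximate GP posterior mean
```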