One such approach uses generative models, trained on ground-truth images, as priors for inverse problems, penalizing reconstructions far from images the generator can produce.
As in distillation approaches, a single network is trained to maximise the probability of samples from pre-trained probabilistic models; in this work, we use a fixed ensemble of networks.
Based on this formulation, we are able to attribute the potential leakage of the training data in a deep network to its architecture.
Building on the previous work by Kazlauskaite et al., we include a separate monotonic warp of the input data to model temporal misalignment.
The success of generative regularisers depends on the quality of the generative model and so we propose a set of desired criteria to assess generative models and guide future research.
We present a novel approach to Bayesian inference and general Bayesian computation that is defined through a sequential decision loop.
Similarly, deep Gaussian processes (DGPs) should allow us to compute a posterior distribution of compositions of multiple functions giving rise to the observations.
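The compositional idea behind DGPs can be illustrated by drawing a sample from a two-layer composition, where the outputs of one GP draw serve as the inputs to the next. This is a minimal sketch with an assumed RBF kernel and grid of inputs, not the inference procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.5, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

x = np.linspace(-2.0, 2.0, 50)

# Layer 1: draw f1 ~ GP(0, k) at the inputs x.
K1 = rbf(x, x) + 1e-6 * np.eye(len(x))
f1 = np.linalg.cholesky(K1) @ rng.standard_normal(len(x))

# Layer 2: draw f2 ~ GP(0, k), evaluated at the *outputs* of layer 1,
# giving a sample from the composed process f2(f1(x)).
K2 = rbf(f1, f1) + 1e-6 * np.eye(len(x))
f2 = np.linalg.cholesky(K2) @ rng.standard_normal(len(x))
```

The composition f2(f1(x)) is generally non-stationary even though each layer uses a stationary kernel, which is what makes posterior inference over such compositions attractive and hard.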
Bayesian optimization (BO) methods often rely on the assumption that the objective function is well-behaved, but in practice, this is seldom true for real-world objectives even if noise-free observations can be collected.
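For context, a standard BO iteration fits a GP surrogate to the observations and maximises an acquisition function such as expected improvement. The sketch below assumes a noise-free 1-D objective, an RBF kernel, and a fixed candidate grid; all names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ell=0.3, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

def objective(x):
    # Hypothetical noise-free objective to minimise.
    return np.sin(3 * x) + x ** 2

X = np.array([-1.0, 0.2, 0.9])        # inputs evaluated so far
y = objective(X)
Xs = np.linspace(-1.5, 1.5, 200)      # candidate grid

# GP posterior mean and standard deviation on the grid.
K = rbf(X, X) + 1e-8 * np.eye(len(X))
Ks = rbf(X, Xs)
mu = Ks.T @ np.linalg.solve(K, y)
v = np.linalg.solve(K, Ks)
var = np.clip(rbf(Xs, Xs).diagonal() - np.sum(Ks * v, axis=0), 1e-12, None)
sd = np.sqrt(var)

# Expected improvement (for minimisation): EI(x) = E[max(y_min - f(x), 0)].
imp = y.min() - mu
z = imp / sd
ei = imp * norm.cdf(z) + sd * norm.pdf(z)
x_next = Xs[np.argmax(ei)]            # next point to evaluate
```

Acquisition functions like EI implicitly assume the surrogate is well-specified; when the objective violates the smoothness the kernel encodes, the posterior, and hence the chosen x_next, can be badly miscalibrated, which is the failure mode the sentence above alludes to.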
We propose a new framework for imposing monotonicity constraints in a Bayesian nonparametric setting based on numerical solutions of stochastic differential equations.
The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking.
We present a probabilistic model for unsupervised alignment of high-dimensional time-warped sequences based on the Dirichlet Process Mixture Model (DPMM).
We present a non-parametric Bayesian latent variable model capable of learning dependency structures across dimensions in a multivariate setting.
This paper demonstrates a novel scheme to incorporate a structured Gaussian likelihood prediction network within the VAE that allows the residual correlations to be modeled.
This paper is the first work to propose a network to predict a structured uncertainty distribution for a synthesized image.
We introduce Latent Gaussian Process Regression, a latent variable extension that allows non-stationary, multi-modal processes to be modelled with GPs.
In contrast, our method makes use of a single RGB video as input; it can capture the deformations of generic shapes; and the depth estimation is dense, per-pixel and direct.
In this work we remove the image-space alignment limitations of existing subspace models by conditioning the models on a shape-dependent context, which allows the complex, non-linear structure of the visual object's appearance to be captured and shared.
Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning.