no code implementations • 5 Sep 2022 • Tiangang Cui, Sergey Dolgov, Robert Scheichl
We approximate the optimal importance distribution in a general importance sampling problem as the pushforward of a reference distribution under a composition of order-preserving transformations, in which each transformation is formed by a squared tensor-train decomposition.
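The basic pattern (though not the paper's squared tensor-train construction) can be illustrated in one dimension: draw samples from a reference, push them through a monotone, order-preserving map, and correct the mismatch to the target with importance weights. The target, map, and proposal below are hypothetical stand-ins chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D illustration: estimate the mean of an unnormalized target
# by pushing reference samples through a monotone (order-preserving) map and
# correcting with importance weights.
def target_pdf(x):                    # unnormalized target density: N(2, 1)
    return np.exp(-0.5 * (x - 2.0) ** 2)

def T(z):                             # monotone pushforward map: N(0,1) -> N(2, 4)
    return 2.0 + 2.0 * z

def proposal_pdf(x):                  # density of the pushforward T_# N(0, 1)
    return np.exp(-0.125 * (x - 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

z = rng.standard_normal(100_000)      # reference samples
x = T(z)                              # proposal samples via the pushforward
w = target_pdf(x) / proposal_pdf(x)   # importance weights target/proposal
estimate = np.sum(w * x) / np.sum(w)  # self-normalized estimate of E[X], near 2
```

The proposal is deliberately wider than the target so the weights stay bounded; a map that produced a narrower pushforward would give unbounded weight variance.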
no code implementations • 10 Dec 2020 • Mikkel B. Lykkegaard, Grigorios Mingas, Robert Scheichl, Colin Fox, Tim J. Dodwell
Uncertainty Quantification through Markov Chain Monte Carlo (MCMC) can be prohibitively expensive for target probability densities with expensive likelihood functions, for instance when each likelihood evaluation involves solving a Partial Differential Equation (PDE), as is the case in a wide range of engineering applications.
Probabilistic Programming Computation
1 code implementation • 7 Aug 2020 • Karl Jansen, Eike Hermann Müller, Robert Scheichl
This paper discusses hierarchical sampling methods to tame this growth in autocorrelations.
High Energy Physics - Lattice Numerical Analysis Computational Physics 81-08, 81T25, 65Y20, 60J22 F.2; J.2
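The growth the abstract refers to can be quantified with the integrated autocorrelation time of a chain. As a hedged stand-in for a lattice time series, the sketch below uses an AR(1) process, whose theoretical value (1 + rho) / (1 - rho) is known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AR(1) chain as a stand-in for a correlated sampler output.
rho, n = 0.9, 200_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.standard_normal()

def integrated_autocorr_time(x, max_lag=200):
    """Estimate tau_int = 1 + 2 * sum_k rho_k from sample autocorrelations."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    rhos = [np.dot(x[:-k], x[k:]) / (len(x) * var) for k in range(1, max_lag)]
    return 1.0 + 2.0 * sum(rhos)

tau = integrated_autocorr_time(x)     # theory for AR(1): (1+rho)/(1-rho) = 19
```

Hierarchical or multilevel sampling aims to drive this tau down toward 1 as the correlation length of the theory grows.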
1 code implementation • 25 May 2019 • Jakob Kruse, Gianluca Detommaso, Ullrich Köthe, Robert Scheichl
Many recent invertible neural architectures are based on coupling block designs, in which the variables are divided into two subsets that serve as inputs to an easily invertible (usually affine) triangular transformation.
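A minimal sketch of such an affine coupling block (RealNVP-style, not this paper's specific architecture): one subset passes through unchanged and parameterizes an affine map of the other, so the block inverts in closed form. The scale and translation networks are placeholder functions.

```python
import numpy as np

def s(u):  # placeholder for the learned "scale" network
    return np.tanh(u)

def t(u):  # placeholder for the learned "translation" network
    return 0.5 * u

def coupling_forward(x):
    x1, x2 = np.split(x, 2)
    y1 = x1                              # identity on the first subset
    y2 = x2 * np.exp(s(x1)) + t(x1)      # affine (triangular) map of the second
    return np.concatenate([y1, y2])

def coupling_inverse(y):
    y1, y2 = np.split(y, 2)
    x1 = y1
    x2 = (y2 - t(y1)) * np.exp(-s(y1))   # exact inverse, no iteration needed
    return np.concatenate([x1, x2])

x = np.array([0.3, -1.2, 0.7, 2.0])
assert np.allclose(coupling_inverse(coupling_forward(x)), x)  # exact round-trip
```

Because the Jacobian is triangular, its log-determinant is just the sum of s(x1), which is what makes these blocks cheap to train in normalizing flows.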
1 code implementation • 2 Oct 2018 • Sergey Dolgov, Karim Anaya-Izquierdo, Colin Fox, Robert Scheichl
We find that the importance-weight corrected quasi-Monte Carlo quadrature performs best in all computed examples, and is orders-of-magnitude more efficient than DRAM across a wide range of approximation accuracies and sample sizes.
Numerical Analysis Probability Statistics Theory 65D15, 65D32, 65C05, 65C40, 65C60, 62F15, 15A69, 15A23
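The importance-weight correction for quasi-Monte Carlo can be sketched as follows: map low-discrepancy points through the inverse CDF of a tractable proposal, then re-weight by target/proposal so the mismatch cancels. The densities below are hypothetical stand-ins, and a simple van der Corput sequence replaces the paper's construction.

```python
import numpy as np
from statistics import NormalDist

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

nd = NormalDist(mu=0.0, sigma=2.0)           # tractable proposal N(0, 4)
u = van_der_corput(4096)                     # QMC points in (0, 1)
x = np.array([nd.inv_cdf(ui) for ui in u])   # mapped under the proposal

def target_pdf(x):                           # unnormalized target: N(1, 1)
    return np.exp(-0.5 * (x - 1.0) ** 2)

q = np.array([nd.pdf(xi) for xi in x])
w = target_pdf(x) / q                        # importance-weight correction
mean_est = np.sum(w * x) / np.sum(w)         # self-normalized estimate of E[X], near 1
```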
1 code implementation • NeurIPS 2018 • Gianluca Detommaso, Tiangang Cui, Alessio Spantini, Youssef Marzouk, Robert Scheichl
Stein variational gradient descent (SVGD) was recently proposed as a general-purpose nonparametric variational inference algorithm [Liu & Wang, NIPS 2016]: it minimizes the Kullback-Leibler divergence between the target distribution and its approximation by implementing a form of functional gradient descent on a reproducing kernel Hilbert space.
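The SVGD update described above can be sketched in one dimension: each particle moves along a kernel-weighted average of the particles' score function plus a repulsive kernel-gradient term. The target, kernel bandwidth, and step size below are illustrative choices, not the paper's Stein variational Newton method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal 1-D SVGD sketch (after Liu & Wang, 2016): particles follow the
# kernelized functional gradient of the KL divergence toward the target.
def grad_log_target(x):                  # score of a N(3, 1) target
    return -(x - 3.0)

def rbf_kernel(x, h=1.0):
    d = x[:, None] - x[None, :]          # d[j, i] = x_j - x_i
    k = np.exp(-d ** 2 / (2 * h ** 2))   # k(x_j, x_i)
    grad_k = -d / h ** 2 * k             # derivative of k(x_j, x_i) w.r.t. x_j
    return k, grad_k

x = rng.standard_normal(100)             # particles initialized from N(0, 1)
for _ in range(500):
    k, grad_k = rbf_kernel(x)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k @ grad_log_target(x) + grad_k.sum(axis=0)) / len(x)
    x += 0.1 * phi                       # SVGD update with fixed step size
# particles concentrate around the target mean 3 while the kernel-gradient
# term keeps them spread out rather than collapsing to the mode
```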