no code implementations • 13 Mar 2024 • Lianghao Cao, Thomas O'Leary-Roseberry, Omar Ghattas
Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10--25 effective posterior samples compared to geometric MCMC.
1 code implementation • 31 May 2023 • Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas
We propose a novel machine learning framework for solving optimization problems governed by large-scale partial differential equations (PDEs) with high-dimensional random parameters.
no code implementations • 6 Oct 2022 • Lianghao Cao, Thomas O'Leary-Roseberry, Prashant K. Jha, J. Tinsley Oden, Omar Ghattas
We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, all while retaining substantial computational speedups of posterior sampling when models are governed by highly nonlinear PDEs.
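The mechanism behind the quadratic error reduction can be illustrated on a scalar model problem: if the forward model is a nonlinear residual equation r(u; m) = 0, one Newton correction applied to a surrogate's prediction drives the error from O(e) to O(e²). The residual below is a hypothetical stand-in, not one of the PDE models from the paper.

```python
# Hypothetical stand-in for a nonlinear PDE: scalar residual r(u) = u^3 + u - m.
m = 2.0
r = lambda u: u ** 3 + u - m
dr = lambda u: 3 * u ** 2 + 1              # residual Jacobian

u_true = 1.0                               # exact solution of u^3 + u = 2
u_surr = u_true + 0.05                     # surrogate prediction with O(5e-2) error
u_corr = u_surr - r(u_surr) / dr(u_surr)   # one Newton error-correction step

e0 = abs(u_surr - u_true)                  # surrogate error
e1 = abs(u_corr - u_true)                  # corrected error, roughly e0**2
```

The single correction costs one residual and one Jacobian evaluation, which is why most of the surrogate's sampling speedup is retained.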
no code implementations • 22 Jun 2022 • Ricardo Baptista, Lianghao Cao, Joshua Chen, Omar Ghattas, Fengyi Li, Youssef M. Marzouk, J. Tinsley Oden
We tackle this challenging Bayesian inference problem using a likelihood-free approach based on measure transport together with the construction of summary statistics for the image data.
1 code implementation • 21 Jun 2022 • Thomas O'Leary-Roseberry, Peng Chen, Umberto Villa, Omar Ghattas
We propose derivative-informed neural operators (DINOs), a general family of neural networks to approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest.
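A minimal sketch of the derivative-informed idea (not the paper's neural-operator architecture): fit a surrogate by minimizing a loss that penalizes both output misfit and Jacobian misfit against a hypothetical target map. A linear surrogate f(m) = Wm keeps the Jacobian trivial, so the two loss terms are easy to see.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target operator and its Jacobian (stand-ins for a PDE map).
A = rng.standard_normal((3, 5))
target = lambda m: np.tanh(A @ m)
target_jac = lambda m: (1 - np.tanh(A @ m) ** 2)[:, None] * A

# Training data: parameter samples, outputs, and Jacobians.
M = rng.standard_normal((5, 40))              # 40 parameter samples (columns)
U = np.stack([target(m) for m in M.T])        # outputs, shape (40, 3)
J = np.stack([target_jac(m) for m in M.T])    # Jacobians, shape (40, 3, 5)

# Linear surrogate f(m) = W m, whose Jacobian is W itself.
def dino_loss(W):
    out_err = sum(np.sum((u - W @ m) ** 2) for m, u in zip(M.T, U))
    jac_err = np.sum((J - W) ** 2)            # derivative-informed misfit term
    return out_err + jac_err

def dino_grad(W):
    g = sum(2 * np.outer(W @ m - u, m) for m, u in zip(M.T, U))
    return g + 2 * (len(U) * W - J.sum(axis=0))

W = np.zeros((3, 5))
for _ in range(200):
    W -= 1e-3 * dino_grad(W)                  # plain gradient descent
```

The Jacobian term is what makes the surrogate useful for derivative-based downstream tasks such as MAP estimation and geometric MCMC.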
1 code implementation • 19 Apr 2022 • Alex Leviyev, Joshua Chen, Yifei Wang, Omar Ghattas, Aaron Zimmerman
Meanwhile, Stein variational Newton (SVN), a Newton-like extension of SVGD, dramatically accelerates the convergence of SVGD by incorporating Hessian information into the dynamics, but also produces biased samples.
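For reference, the first-order SVGD baseline that SVN accelerates can be sketched in a few lines; the target density here is a hypothetical standard Gaussian, and the Hessian-based SVN modification is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target density: standard 2-D Gaussian, so grad log p(x) = -x.
grad_log_p = lambda X: -X

def svgd_step(X, eps=0.1):
    diff = X[:, None, :] - X[None, :, :]           # diff[j, i] = x_j - x_i
    d2 = np.sum(diff ** 2, axis=-1)
    h = np.median(d2) / (2 * np.log(len(X) + 1))   # median-heuristic bandwidth
    K = np.exp(-d2 / (2 * h))
    # Attraction (kernel-weighted gradients) plus repulsion (kernel gradients).
    repulsion = -np.sum(diff * K[..., None], axis=0) / h
    phi = (K @ grad_log_p(X) + repulsion) / len(X)
    return X + eps * phi

X = rng.standard_normal((50, 2)) * 3 + 5           # particles start far from the mode
for _ in range(300):
    X = svgd_step(X)
```

SVN replaces the plain gradient in `phi` with a Newton-like direction built from (approximate) Hessian information, which is the source of both its faster convergence and, as the paper notes, its bias.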
2 code implementations • 14 Dec 2021 • Thomas O'Leary-Roseberry, Xiaosong Du, Anirban Chaudhuri, Joaquim R. R. A. Martins, Karen Willcox, Omar Ghattas
We propose a scalable framework for the learning of high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs.
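The projection structure can be sketched with snapshot SVDs and, in place of the paper's adaptively constructed ResNet, a single residual-style linear map y = x + Cx fitted by least squares between the reduced coordinates; the parametric map below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parametric map m in R^100 -> u in R^200 with 15-dim intrinsic inputs.
Z = rng.standard_normal((15, 500))                  # latent input coefficients
P = np.linalg.qr(rng.standard_normal((100, 15)))[0]
M = P @ Z                                           # 500 input snapshots (columns)
B = rng.standard_normal((200, 100)) / 10.0
U = np.tanh(B @ M)                                  # matching output snapshots

# Reduced bases from truncated SVDs of the snapshot matrices.
r = 20
Vin = np.linalg.svd(M, full_matrices=False)[0][:, :r]
Vout = np.linalg.svd(U, full_matrices=False)[0][:, :r]

# Residual-style reduced map y = x + C x, fitted by least squares.
Xin, Xout = Vin.T @ M, Vout.T @ U
C = (Xout - Xin) @ np.linalg.pinv(Xin)
predict = lambda m: Vout @ ((np.eye(r) + C) @ (Vin.T @ m))

rel_err = np.linalg.norm(predict(M) - U) / np.linalg.norm(U)
```

All learning happens in the r-dimensional reduced spaces, so the cost of training is independent of the full input and output dimensions.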
1 code implementation • 12 Feb 2021 • Keyi Wu, Peng Chen, Omar Ghattas
Optimal experimental design (OED) plays an important role in quantifying and reducing uncertainty when experimental data are limited.
Optimization and Control • Numerical Analysis

1 code implementation • 30 Nov 2020 • Thomas O'Leary-Roseberry, Umberto Villa, Peng Chen, Omar Ghattas
We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively.
1 code implementation • 14 Feb 2020 • Nick Alger, Peng Chen, Omar Ghattas
We present a method for converting tensors into tensor train format based on actions of the tensor as a vector-valued multilinear function.
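For orientation, the target tensor-train format can be illustrated with the classical dense TT-SVD (sequential SVDs of unfoldings); this sketch requires the full tensor in memory, unlike the paper's action-based, matrix-free construction.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense tensor into tensor-train cores by sequential SVDs."""
    cores, rank = [], 1
    A = T.reshape(1, -1)
    for n in T.shape[:-1]:
        A = A.reshape(rank * n, -1)
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))  # truncate small singular values
        cores.append(U[:, :keep].reshape(rank, n, keep))
        A = s[:keep, None] * Vt[:keep]
        rank = keep
    cores.append(A.reshape(rank, T.shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a dense tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.reshape(out.shape[1:-1])

T = np.random.default_rng(3).standard_normal((4, 5, 6))
cores = tt_svd(T)
```

Each core is a small 3-way array, so a tensor with low TT ranks is stored and applied at a cost linear in the number of modes.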
Numerical Analysis
1 code implementation • NeurIPS 2020 • Peng Chen, Omar Ghattas
The curse of dimensionality is a longstanding challenge in Bayesian inference in high dimensions.
no code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Omar Ghattas
We show that the nonlinear activation functions used in the network construction play a critical role in classifying stationary points of the loss landscape.
2 code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas
In this work we motivate the extension of Newton methods to the stochastic approximation (SA) regime, and argue for the use of the scalable low rank saddle free Newton (LRSFN) method, which avoids forming the Hessian in favor of making a low rank approximation.
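A hedged sketch of the saddle-free idea on a tiny quadratic: take a low-rank eigendecomposition of the Hessian, replace the eigenvalues by their absolute values, add damping, and apply the resulting inverse to the gradient. The dense eigendecomposition below stands in for the randomized, matrix-free approximation used at scale.

```python
import numpy as np

# Saddle objective f(x) = 0.5 x^T H x with one negative-curvature direction.
H = np.diag([4.0, 1.0, -2.0])
f = lambda x: 0.5 * x @ H @ x
grad = lambda x: H @ x

def lrsfn_step(x, r=2, gamma=0.1):
    # Rank-r eigendecomposition of the Hessian (dense here for illustration).
    lam, V = np.linalg.eigh(H)
    idx = np.argsort(np.abs(lam))[::-1][:r]
    lam, V = np.abs(lam[idx]), V[:, idx]       # saddle-free: |eigenvalues|
    g = grad(x)
    coeff = V.T @ g
    # Apply (V |Lam| V^T + gamma I)^{-1} g: Newton in the captured subspace,
    # damped gradient descent in its complement.
    return V @ (coeff / (lam + gamma)) + (g - V @ coeff) / gamma

x = np.array([1.0, 0.0, 0.1])                  # near the saddle plane
x_new = x - lrsfn_step(x)
```

Flipping the sign of the negative eigenvalue turns the Newton step along that direction into a descent step, so the iterate moves away from the saddle rather than toward it.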
1 code implementation • NeurIPS 2019 • Peng Chen, Keyi Wu, Joshua Chen, Tom O'Leary-Roseberry, Omar Ghattas
We propose a projected Stein variational Newton (pSVN) method for high-dimensional Bayesian inference.
1 code implementation • NeurIPS 2019 • Amir Dezfouli, Hassan Ashtiani, Omar Ghattas, Richard Nock, Peter Dayan, Cheng Soon Ong
Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space.
1 code implementation • 16 May 2019 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas
We survey sub-sampled inexact Newton methods and consider their application in non-convex settings.
Optimization and Control • Numerical Analysis