Search Results for author: Thomas O'Leary-Roseberry

Found 9 papers, 6 papers with code

Efficient geometric Markov chain Monte Carlo for nonlinear Bayesian inversion enabled by derivative-informed neural operators

no code implementations • 13 Mar 2024 • Lianghao Cao, Thomas O'Leary-Roseberry, Omar Ghattas

Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10–25 effective posterior samples compared to geometric MCMC.

Operator learning

Efficient PDE-Constrained optimization under high-dimensional uncertainty using derivative-informed neural operators

1 code implementation • 31 May 2023 • Dingcheng Luo, Thomas O'Leary-Roseberry, Peng Chen, Omar Ghattas

We propose a novel machine learning framework for solving optimization problems governed by large-scale partial differential equations (PDEs) with high-dimensional random parameters.
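
A generic way to read this problem class, in notation of our own choosing rather than the paper's, is optimization of an expected quantity of interest subject to a PDE constraint with random parameters:

```latex
% Sketch of the problem class (our notation, not necessarily the paper's):
% z = design/control variables, m = high-dimensional random parameters,
% u = PDE state, Q = quantity of interest, R = discretized PDE residual.
\min_{z}\; \mathbb{E}_{m}\big[\, Q(u(m,z),\, z) \,\big]
\quad \text{subject to} \quad R(u(m,z);\, m, z) = 0,
% with the expectation approximated by sample averaging and the expensive
% parameter-to-objective map replaced by a neural operator surrogate.
```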

Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems

no code implementations • 6 Oct 2022 • Lianghao Cao, Thomas O'Leary-Roseberry, Prashant K. Jha, J. Tinsley Oden, Omar Ghattas

We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error, all while retaining substantial computational speedups of posterior sampling when models are governed by highly nonlinear PDEs.
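
The "quadratic reduction" can be read as a one-step Newton-type argument; the following is our sketch under smoothness assumptions, not a verbatim statement from the paper:

```latex
% Sketch of a Newton-type argument for the quadratic error reduction
% (our paraphrase): r(u; m) = 0 is the PDE residual, \tilde{u} is the
% neural operator prediction, and \delta u is one linearized correction.
\begin{align*}
  r_u(\tilde{u}; m)\, \delta u &= -\, r(\tilde{u}; m)
    && \text{(residual-based correction step)} \\
  \|\, u - (\tilde{u} + \delta u) \,\| &= \mathcal{O}\big( \| u - \tilde{u} \|^{2} \big)
    && \text{(Taylor expansion, smooth } r,\ \text{invertible } r_u\text{)}
\end{align*}
```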

Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning

1 code implementation • 21 Jun 2022 • Thomas O'Leary-Roseberry, Peng Chen, Umberto Villa, Omar Ghattas

We propose derivative-informed neural operators (DINOs), a general family of neural networks to approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest.
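
As a rough illustration of what "derivative-informed" training can look like in practice, here is a minimal PyTorch sketch that fits both the network outputs and its Jacobians to precomputed reduced Jacobians of the true map; the function and variable names are ours, and the released DINO code may differ substantially.

```python
import torch

def derivative_informed_loss(net, m, u_true, J_true, jac_weight=1.0):
    # m: (batch, r_in) reduced inputs; u_true: (batch, r_out) reduced outputs;
    # J_true: (batch, r_out, r_in) reduced Jacobians of the true parametric map.
    # net is assumed to map a single (r_in,) vector to (r_out,), so it can be vmapped.
    u_pred = net(m)
    # Batched Jacobian of the network itself (requires torch >= 2.0 torch.func).
    J_pred = torch.vmap(torch.func.jacrev(net))(m)
    output_misfit = torch.mean((u_pred - u_true) ** 2)    # L2-type output term
    jacobian_misfit = torch.mean((J_pred - J_true) ** 2)  # H1-type derivative term
    return output_misfit + jac_weight * jacobian_misfit
```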

Dimensionality Reduction • Experimental Design

Learning High-Dimensional Parametric Maps via Reduced Basis Adaptive Residual Networks

2 code implementations • 14 Dec 2021 • Thomas O'Leary-Roseberry, Xiaosong Du, Anirban Chaudhuri, Joaquim R. R. A. Martins, Karen Willcox, Omar Ghattas

We propose a scalable framework for the learning of high-dimensional parametric maps via adaptively constructed residual network (ResNet) maps between reduced bases of the inputs and outputs.
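
A minimal sketch of this structure, assuming the reduced bases have already been computed (e.g., from derivative information or POD); the class name, layer widths, and activation are illustrative choices of ours, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ReducedBasisResNet(nn.Module):
    """Maps inputs to outputs through reduced bases with a ResNet in between."""
    def __init__(self, V_in, V_out, width=64, depth=4):
        # V_in: (d_in, r_in) input reduced basis; V_out: (d_out, r_out) output basis.
        super().__init__()
        self.register_buffer("V_in", V_in)
        self.register_buffer("V_out", V_out)
        r_in, r_out = V_in.shape[1], V_out.shape[1]
        self.encode = nn.Linear(r_in, width)
        # In the adaptive setting, blocks like these would be added one at a time.
        self.blocks = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
        self.decode = nn.Linear(width, r_out)

    def forward(self, m):
        z = self.encode(m @ self.V_in)        # reduced input coordinates
        for layer in self.blocks:
            z = z + torch.tanh(layer(z))      # residual (ResNet) update
        q_r = self.decode(z)                  # reduced output coordinates
        return q_r @ self.V_out.T             # lift back to the output space
```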

Experimental Design

Derivative-Informed Projected Neural Networks for High-Dimensional Parametric Maps Governed by PDEs

1 code implementation • 30 Nov 2020 • Thomas O'Leary-Roseberry, Umberto Villa, Peng Chen, Omar Ghattas

We use the projection basis vectors in the active subspace as well as the principal output subspace to construct the weights for the first and last layers of the neural network, respectively.
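
For concreteness, here is a minimal sketch of that construction, with names and layer sizes of our own choosing (the released code wires this differently): the first layer encodes with the active-subspace basis, the last layer decodes with the principal output subspace basis, and both can stay fixed or be fine-tuned with the inner layers.

```python
import torch
import torch.nn as nn

def build_projected_net(Phi_in, Phi_out, hidden=128):
    # Phi_in: (d_in, r_in) derivative-informed active subspace basis.
    # Phi_out: (d_out, r_out) principal output subspace (e.g., POD) basis.
    r_in, r_out = Phi_in.shape[1], Phi_out.shape[1]
    first = nn.Linear(Phi_in.shape[0], r_in, bias=False)
    last = nn.Linear(r_out, Phi_out.shape[0], bias=False)
    with torch.no_grad():
        first.weight.copy_(Phi_in.T)   # encode: m -> Phi_in^T m
        last.weight.copy_(Phi_out)     # decode: q_r -> Phi_out q_r
    inner = nn.Sequential(nn.Linear(r_in, hidden), nn.Tanh(),
                          nn.Linear(hidden, r_out))
    return nn.Sequential(first, inner, last)
```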

Experimental Design • Uncertainty Quantification

Ill-Posedness and Optimization Geometry for Nonlinear Neural Network Training

no code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Omar Ghattas

We show that the nonlinear activation functions used in the network construction play a critical role in classifying stationary points of the loss landscape.

Low Rank Saddle Free Newton: A Scalable Method for Stochastic Nonconvex Optimization

2 code implementations • 7 Feb 2020 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas

In this work we motivate the extension of Newton methods to the stochastic approximation (SA) regime, and argue for the use of the scalable low rank saddle free Newton (LRSFN) method, which avoids forming the Hessian in favor of a low rank approximation.
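
A minimal matrix-free sketch of one such step, with the rank and damping chosen arbitrarily here; the authors' implementation uses its own Hessian approximation and step control:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def lrsfn_step(grad, hvp, dim, rank=20, damping=1e-3):
    # grad: gradient vector (dim,); hvp(v): matrix-free Hessian-vector product.
    H = LinearOperator((dim, dim), matvec=hvp)
    lam, U = eigsh(H, k=rank, which="LM")   # dominant eigenpairs only (low rank)
    lam_abs = np.abs(lam)                   # saddle-free modification: |lambda|
    # Invert (U |Lambda| U^T + damping * I) using its eigen-structure:
    coeffs = (U.T @ grad) / (lam_abs + damping)
    step_in_subspace = U @ coeffs
    step_complement = (grad - U @ (U.T @ grad)) / damping
    return -(step_in_subspace + step_complement)
```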

Second-order methods

Inexact Newton Methods for Stochastic Non-Convex Optimization with Applications to Neural Network Training

1 code implementation • 16 May 2019 • Thomas O'Leary-Roseberry, Nick Alger, Omar Ghattas

We survey sub-sampled inexact Newton methods and consider their application in non-convex settings.
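
As a toy illustration of the common ingredients (not code from the paper), one inexact Newton step with a sub-sampled Hessian might look like the following; a practical non-convex variant would also need safeguards such as negative-curvature detection and a line search or trust region.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def subsampled_inexact_newton_step(grad, hvp_minibatch, dim, cg_iters=20):
    # grad: full-batch gradient; hvp_minibatch(v): Hessian-vector product
    # evaluated on a small sub-sample of the data (the "sub-sampled" Hessian).
    H = LinearOperator((dim, dim), matvec=hvp_minibatch)
    # "Inexact": the Newton system H p = -grad is only solved approximately.
    step, _ = cg(H, -grad, maxiter=cg_iters)
    return step
```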

Optimization and Control • Numerical Analysis
