no code implementations • 19 Apr 2023 • Jonah Botvinick-Greenhouse, Yunan Yang, Romit Maulik
Motivated by the computational difficulties incurred by popular deep learning algorithms for the generative modeling of temporal densities, we propose a cheap alternative that requires minimal hyperparameter tuning and scales favorably to high-dimensional problems.
no code implementations • 8 Feb 2023 • Björn Engquist, Kui Ren, Yunan Yang
This paper develops and analyzes a stochastic derivative-free optimization strategy.
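The abstract does not spell out the scheme, so the following is only a generic sketch of a stochastic derivative-free step, using a two-point Gaussian directional estimate of the gradient; the function names and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def two_point_grad(f, x, sigma, rng):
    """Gradient estimate from two evaluations of f along a random Gaussian direction."""
    u = rng.standard_normal(x.shape)
    return (f(x + sigma * u) - f(x - sigma * u)) / (2.0 * sigma) * u

def dfo_minimize(f, x0, steps=400, lr=0.05, sigma=1e-3, seed=0):
    """Derivative-free descent: follow stochastic two-point gradient estimates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * two_point_grad(f, x, sigma, rng)
    return x

# Smooth convex test problem with known minimizer at the all-ones vector.
f = lambda x: np.sum((x - 1.0) ** 2)
x_star = dfo_minimize(f, np.zeros(5))
```

Each step only evaluates the objective twice, which is why such schemes remain usable when gradients are unavailable or expensive.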
no code implementations • 26 Jan 2023 • Roberto Molinaro, Yunan Yang, Björn Engquist, Siddhartha Mishra
A large class of inverse problems for PDEs is only well-defined as mappings from operators to functions.
no code implementations • 28 May 2022 • Annan Yu, Yunan Yang, Alex Townsend
Small generalization errors of over-parameterized neural networks (NNs) can be partially explained by the frequency biasing phenomenon, where gradient-based algorithms minimize the low-frequency misfit before reducing the high-frequency residuals.
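Frequency biasing can be illustrated with a toy surrogate (not the paper's analysis): gradient descent on a linear model whose feature for frequency k has amplitude 1/k, mimicking a kernel spectrum that decays with frequency. The low-frequency component of the residual then decays much faster than the high-frequency one:

```python
import numpy as np

# Illustrative toy: equal-amplitude low- and high-frequency targets, but the
# feature for frequency k is scaled by 1/k, so gradient descent fits the
# low-frequency mode first.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ks = np.arange(1, 11)
Phi = np.stack([np.sin(k * x) / k for k in ks], axis=1)
y = np.sin(x) + np.sin(10 * x)

def coef(c, k):
    """Amplitude of the sin(k x) component left in the residual y - Phi @ c."""
    r = y - Phi @ c
    basis = np.sin(k * x)
    return (r @ basis) / (basis @ basis)

c = np.zeros(len(ks))
for _ in range(100):
    c -= 0.005 * Phi.T @ (Phi @ c - y)   # plain gradient descent on the misfit
```

After 100 steps the residual at frequency 1 is essentially gone while a large fraction of the frequency-10 residual remains, mirroring the low-before-high ordering described in the abstract.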
no code implementations • 12 Apr 2022 • Björn Engquist, Kui Ren, Yunan Yang
We propose a new gradient descent algorithm with added stochastic terms for finding the global optimizers of nonconvex optimization problems.
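The paper's exact algorithm is not reproduced here; the snippet below is a generic overdamped-Langevin sketch of the idea (gradient descent plus scaled Gaussian noise), showing how the stochastic term lets an iterate escape a shallow local minimum of a tilted double well and locate the global one:

```python
import numpy as np

# Tilted double well: local minimum near x = 1, global minimum near x = -1.
f = lambda x: (x**2 - 1.0)**2 + 0.25 * x
df = lambda x: 4.0 * x * (x**2 - 1.0) + 0.25

rng = np.random.default_rng(0)
lr, T = 0.01, 0.3                      # step size and "temperature" (illustrative values)
x = 1.0                                # start inside the local, non-global basin
best_x = x
for _ in range(50_000):
    # Gradient step plus Langevin-type noise of size sqrt(2 * lr * T).
    x = x - lr * df(x) + np.sqrt(2.0 * lr * T) * rng.standard_normal()
    if f(x) < f(best_x):
        best_x = x
```

Plain gradient descent from the same starting point would remain trapped near x = 1; the added stochastic term drives the iterate over the barrier into the lower well, where f is negative.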
no code implementations • 13 Feb 2022 • Levon Nurbekyan, Wanzhou Lei, Yunan Yang
We propose efficient numerical schemes for implementing the natural gradient descent (NGD) for a broad range of metric spaces with applications to PDE-based optimization problems.
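In its simplest form, NGD preconditions the Euclidean gradient by the inverse of a metric tensor evaluated at the current iterate. The toy below (an assumed example, not one of the paper's PDE applications) uses the Fisher metric of a Gaussian in the coordinates (mu, s = log sigma) to minimize the KL divergence to a standard normal:

```python
import numpy as np

def ngd_step(theta, grad, metric, lr=0.1):
    """One natural-gradient step: precondition the gradient by the inverse metric."""
    return theta - lr * np.linalg.solve(metric, grad)

# Toy problem: minimize KL(N(mu, sigma^2) || N(0, 1)) over theta = (mu, s = log sigma).
def kl_grad(theta):
    mu, s = theta
    return np.array([mu, np.exp(2 * s) - 1.0])

def fisher_metric(theta):
    _, s = theta
    sigma2 = np.exp(2 * s)
    return np.diag([1.0 / sigma2, 2.0])   # Fisher information of N(mu, e^s) in (mu, s)

theta = np.array([2.0, 1.0])
for _ in range(200):
    theta = ngd_step(theta, kl_grad(theta), fisher_metric(theta))
```

The solve against the metric is what distinguishes NGD from vanilla gradient descent; for richer metric spaces the same template applies with a different (often dense or operator-valued) metric.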
no code implementations • ICLR 2022 • Björn Engquist, Kui Ren, Yunan Yang
The generalization capacity of various machine learning models exhibits different phenomena in the under- and over-parameterized regimes.
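The contrast between the two regimes can be made concrete with a small random-features experiment (an illustrative setup, not the paper's): with fewer features than samples the least-squares fit underfits, while past the interpolation threshold the min-norm solution fits the training data exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)
y = X @ w + 0.1 * rng.standard_normal(n)          # noisy linear target
X_test = rng.standard_normal((1000, d))
y_test = X_test @ w

def fit_random_features(p, seed=1):
    """Min-norm least squares on p random ReLU features; returns (train, test) MSE."""
    W = np.random.default_rng(seed).standard_normal((d, p)) / np.sqrt(d)
    F, F_test = np.maximum(X @ W, 0), np.maximum(X_test @ W, 0)
    c = np.linalg.lstsq(F, y, rcond=None)[0]       # min-norm solution when p > n
    return np.mean((F @ c - y) ** 2), np.mean((F_test @ c - y_test) ** 2)

errs = {p: fit_random_features(p) for p in [5, 25, 50, 100, 200]}
```

Sweeping p across the threshold p = n is the standard way to visualize how training and test behavior differ between the under- and over-parameterized regimes.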
no code implementations • 15 Nov 2019 • Björn Engquist, Kui Ren, Yunan Yang
This work characterizes, analytically and numerically, two major effects of the quadratic Wasserstein ($W_2$) distance as the measure of data discrepancy in computational solutions of inverse problems.
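In one dimension the quadratic Wasserstein distance has a closed form through quantile functions, W_2^2(f, g) = ∫_0^1 |F^{-1}(t) − G^{-1}(t)|^2 dt, which the following sketch discretizes (grid sizes and quantile range are illustrative choices, not from the paper):

```python
import numpy as np

def w2_1d(f, g, x):
    """W_2 between two densities sampled on a uniform 1-D grid x,
    via the quantile-function formula."""
    dx = x[1] - x[0]
    f, g = f / (f.sum() * dx), g / (g.sum() * dx)        # normalize to unit mass
    F, G = np.cumsum(f) * dx, np.cumsum(g) * dx          # CDFs (strictly increasing)
    t = np.linspace(0.005, 0.995, 2000)                  # interior quantile levels
    Finv, Ginv = np.interp(t, F, x), np.interp(t, G, x)  # quantile functions
    return np.sqrt(np.mean((Finv - Ginv) ** 2) * (t[-1] - t[0]))

x = np.linspace(-8.0, 9.0, 4000)
gauss = lambda m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)
# For two unit-variance Gaussians shifted by s, W_2 equals s exactly.
```

Unlike the L2 misfit, which saturates once two pulses no longer overlap, this distance keeps growing linearly with the shift, which is the kind of behavior the paper analyzes for data discrepancy in inverse problems.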