no code implementations • 16 Mar 2023 • Jorio Cocola, John Tencer, Francesco Rizzi, Eric Parish, Patrick Blonigan
In this work, we propose and analyze a novel method that overcomes this disadvantage by training a neural network only on subsampled versions of the high-fidelity solution snapshots.
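A minimal sketch of the stated idea, not the paper's exact method or architecture: an autoencoder trained only against a fixed random subsample of each high-fidelity snapshot. All names and sizes (`snapshots`, `sample_idx`, layer widths) are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_dof, n_sample, latent_dim, num_snaps = 4096, 128, 8, 200
snapshots = torch.randn(num_snaps, n_dof)      # stand-in for high-fidelity solution states
sample_idx = torch.randperm(n_dof)[:n_sample]  # fixed random subsample of the degrees of freedom
sub = snapshots[:, sample_idx]                 # the only data the network ever sees

autoencoder = nn.Sequential(
    nn.Linear(n_sample, 64), nn.ReLU(), nn.Linear(64, latent_dim),  # encoder
    nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_sample),  # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for step in range(500):
    # reconstruction loss on subsampled entries only, never the full state
    loss = ((autoencoder(sub) - sub) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```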
no code implementations • 24 Apr 2022 • Jorio Cocola
In this work we answer this question, proving that a signal in the range of a Gaussian generative network can be recovered from a few linear measurements, provided that the widths of the layers are proportional to the size of the input layer (up to log factors).
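The recovery problem the result concerns can be illustrated with a small experiment: given measurements y = A·G(z_true) with m much smaller than n, the latent code is fit by minimizing the measurement misfit. The two-layer Gaussian ReLU network, the Gaussian measurement matrix, and the gradient-based solver below are illustrative assumptions, not the paper's construction or proof technique.

```python
import torch

torch.manual_seed(0)
k, h, n, m = 10, 250, 500, 100                 # latent, hidden, signal, measurement dims
W1, W2 = torch.randn(h, k), torch.randn(n, h)  # generative network with i.i.d. Gaussian weights
G = lambda z: torch.relu(z @ W1.T) @ W2.T / (h ** 0.5)
A = torch.randn(m, n) / (m ** 0.5)             # Gaussian linear measurements

z_true = torch.randn(k)
y = A @ G(z_true)                              # few (m << n) linear measurements

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(2000):
    loss = ((A @ G(z) - y) ** 2).sum()         # measurement misfit over the latent code
    opt.zero_grad(); loss.backward(); opt.step()
print("signal error:", torch.norm(G(z) - G(z_true)).item())
```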
1 code implementation • 8 Mar 2022 • Sean Gunn, Jorio Cocola, Paul Hand
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model yields lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.
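For orientation, a hedged sketch of generic latent-space inversion for compressed sensing with a simple L2 penalty on the latent code; this stands in for, and is not, the paper's regularized training procedure or its specific inversion algorithms. `G`, `invert`, and the penalty weight `lam` are hypothetical names and settings.

```python
import torch

def invert(G, A, y, k, lam=1e-3, steps=2000, lr=1e-2):
    """Fit a latent code so that A @ G(z) matches the measurements y."""
    z = torch.randn(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        # measurement misfit plus an L2 prior keeping z near the Gaussian latent
        loss = ((A @ G(z) - y) ** 2).sum() + lam * (z ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return G(z).detach()

# toy usage with a random linear "generator" standing in for a trained GAN
torch.manual_seed(0)
k, n, m = 10, 500, 100
W = torch.randn(n, k)
G = lambda z: W @ z
A = torch.randn(m, n) / m ** 0.5
y = A @ G(torch.randn(k))
x_hat = invert(G, A, y, k)
```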
no code implementations • 14 Jun 2020 • Jorio Cocola, Paul Hand
A Sobolev loss is used when training a network to approximate both the values and the derivatives of a target function at a prescribed set of input points.
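As a concrete illustration, a minimal 1D Sobolev-training loop: the network is fit to the values and first derivatives of an assumed target at prescribed points. The target function, the points, and the equal weighting of the two terms are illustrative choices, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
g  = lambda x: torch.sin(3 * x)      # target function
dg = lambda x: 3 * torch.cos(3 * x)  # its known derivative

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
x = torch.linspace(-1, 1, 32).unsqueeze(1).requires_grad_(True)  # prescribed points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    f = net(x)
    # derivative of the network output with respect to its input
    df = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    # Sobolev loss: value mismatch plus derivative mismatch
    loss = ((f - g(x)) ** 2).mean() + ((df - dg(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```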
no code implementations • NeurIPS 2020 • Jorio Cocola, Paul Hand, Vladislav Voroninski
Many problems in statistics and machine learning require the reconstruction of a rank-one signal matrix from noisy data.
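A worked example of the generic problem using the classical spectral baseline: observe a symmetric noisy perturbation of a rank-one matrix and estimate the spike by the top eigenvector. The paper's specific setting and guarantees go beyond this sketch; the signal strength and noise level below are arbitrary.

```python
import torch

torch.manual_seed(0)
n = 300
v = torch.randn(n); v /= v.norm()               # unit rank-one signal direction
noise = torch.randn(n, n) / n ** 0.5
Y = 3.0 * torch.outer(v, v) + 0.5 * (noise + noise.T)  # noisy rank-one observation

evals, evecs = torch.linalg.eigh(Y)             # eigenvalues in ascending order
v_hat = evecs[:, -1]                            # top eigenvector estimates v up to sign
err = torch.minimum((v_hat - v).norm(), (v_hat + v).norm())
print("estimation error up to sign:", err.item())
```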