Search Results for author: Alex Damian

Found 4 papers, 2 papers with code

Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability

1 code implementation • 30 Sep 2022 • Alex Damian, Eshaan Nichani, Jason D. Lee

Our analysis provides precise predictions for the loss, sharpness, and deviation from the PGD trajectory throughout training, which we verify both empirically in a number of standard settings and theoretically under mild conditions.
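The "sharpness" the abstract refers to is the top eigenvalue of the loss Hessian. As an illustrative sketch only (a toy quadratic, not the paper's setting), sharpness can be estimated from Hessian-vector products via power iteration:

```python
import numpy as np

# Toy sketch (not the paper's setup): "sharpness" = top eigenvalue of the
# loss Hessian. For the quadratic loss L(w) = 0.5 * w^T A w the Hessian is
# A; we estimate its top eigenvalue by power iteration using only
# Hessian-vector products.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
A = B @ B.T  # symmetric PSD matrix standing in for the Hessian

def sharpness(hvp, dim, iters=500):
    """Estimate the top Hessian eigenvalue via power iteration."""
    v = rng.normal(size=dim)
    for _ in range(iters):
        hv = hvp(v)          # one Hessian-vector product per iteration
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(v)) # Rayleigh quotient at the converged direction

est = sharpness(lambda v: A @ v, 5)
```

The same power-iteration recipe is what makes sharpness measurable for neural networks, where the Hessian is never formed explicitly but Hessian-vector products are cheap.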

Neural Networks can Learn Representations with Gradient Descent

no code implementations • 30 Jun 2022 • Alex Damian, Jason D. Lee, Mahdi Soltanolkotabi

Furthermore, in a transfer learning setup where the data distributions in the source and target domain share the same representation $U$ but have different polynomial heads we show that a popular heuristic for transfer learning has a target sample complexity independent of $d$.

Transfer Learning

Label Noise SGD Provably Prefers Flat Global Minimizers

no code implementations • NeurIPS 2021 • Alex Damian, Tengyu Ma, Jason D. Lee

In overparametrized models, the noise in stochastic gradient descent (SGD) implicitly regularizes the optimization trajectory and determines which local minimum SGD converges to.
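A minimal sketch of the mechanism in the title — injecting noise into the labels at each SGD step — on a hypothetical 1-D regression problem (not the paper's overparametrized setting):

```python
import numpy as np

# Illustrative sketch only: SGD on 1-D linear regression where Gaussian
# noise is added to the label at every step. The noise perturbs the
# trajectory but the iterates still hover near the minimizer; the paper
# analyzes which minimizer this kind of noise selects among the many
# global minima of an overparametrized model.
rng = np.random.default_rng(0)
w_true = 2.0
w, lr, noise_std = 0.0, 0.05, 0.5
for _ in range(5000):
    x = rng.normal()
    y = w_true * x + noise_std * rng.normal()  # label noise injected here
    w -= lr * (w * x - y) * x                  # plain SGD step
```

In this toy problem the minimizer is unique, so the noise only adds jitter; the paper's point is that with many global minimizers, the same jitter biases SGD toward flat ones.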

New Techniques for Preserving Global Structure and Denoising with Low Information Loss in Single-Image Super-Resolution

1 code implementation • 9 May 2018 • Yijie Bei, Alex Damian, Shijia Hu, Sachit Menon, Nikhil Ravi, Cynthia Rudin

This work identifies and addresses two important technical challenges in single-image super-resolution: (1) how to upsample an image without magnifying noise and (2) how to preserve large scale structure when upsampling.
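Challenge (1) can be illustrated with a deliberately crude sketch (nearest-neighbor upsampling plus a box-filter denoiser — hypothetical stand-ins, not the paper's method): denoising before upsampling keeps the upsampler from magnifying the noise along with the image.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): compare upsampling a
# noisy image directly vs. denoising it first. The components are crude
# stand-ins: nearest-neighbor 2x upsampling and a 3x3 box-filter denoiser.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                      # large-scale structure
noisy = clean + 0.2 * rng.normal(size=clean.shape)

def upsample2x(img):
    """Nearest-neighbor 2x upsample (each pixel becomes a 2x2 block)."""
    return np.kron(img, np.ones((2, 2)))

def box_denoise(img):
    """3x3 box-filter denoiser with edge padding."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

naive = upsample2x(noisy)                    # magnifies the noise too
careful = upsample2x(box_denoise(noisy))     # suppress noise, then upsample
```

The denoise-then-upsample path trades a little edge blur for much less amplified noise, which is the tension between challenges (1) and (2) that the paper addresses.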

Denoising • Image Super-Resolution
