Search Results for author: Phillip E. Pope

Found 4 papers, 4 papers with code

Stochastic Training is Not Necessary for Generalization

1 code implementation ICLR 2022 Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein

It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks.


Sliced Wasserstein Auto-Encoders

1 code implementation ICLR 2019 Soheil Kolouri, Phillip E. Pope, Charles E. Martin, Gustavo K. Rohde

In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder.

Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model

5 code implementations 5 Apr 2018 Soheil Kolouri, Phillip E. Pope, Charles E. Martin, Gustavo K. Rohde

In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution.
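The regularizer described above can be sketched with a Monte-Carlo estimate of the sliced Wasserstein distance: project both sample sets onto random one-dimensional directions, where the Wasserstein-1 distance between equal-size empirical measures reduces to comparing sorted samples. This is an illustrative sketch, not the authors' released implementation; the function name and parameters are chosen here for clarity.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=50, seed=0):
    """Monte-Carlo estimate of the sliced Wasserstein-1 distance.

    x, y: (n, d) arrays with the same number of samples, so sorted
    one-dimensional projections pair up directly.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random unit vectors ("slices") on the (d-1)-sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    xp = np.sort(x @ theta.T, axis=0)
    yp = np.sort(y @ theta.T, axis=0)
    # In 1-D, Wasserstein-1 between equal-size empirical measures is the
    # mean absolute difference of the sorted projected samples.
    return np.mean(np.abs(xp - yp))
```

In the autoencoder setting, `x` would be a batch of encoded training samples and `y` a batch drawn from the predefined samplable prior; the estimate is added to the reconstruction loss as a regularization term.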
