1 code implementation • ICLR 2022 • Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein
It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks.
1 code implementation • CVPR 2019 • Phillip E. Pope, Soheil Kolouri, Mohammad Rostami, Charles E. Martin, Heiko Hoffmann
With the growing use of graph convolutional neural networks (GCNNs) comes the need for explainability.
1 code implementation • ICLR 2019 • Soheil Kolouri, Phillip E. Pope, Charles E. Martin, Gustavo K. Rohde
In this paper, we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder.
5 code implementations • 5 Apr 2018 • Soheil Kolouri, Phillip E. Pope, Charles E. Martin, Gustavo K. Rohde
In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution.
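The regularizer described above admits a compact implementation. The sketch below is an illustrative PyTorch version, not the authors' released code; the function name, the number of projections, and the choice of a standard normal as the predefined samplable prior are assumptions made for this example.

```python
import torch

def sliced_wasserstein(encoded, prior_samples, num_projections=50, p=2):
    """Monte Carlo estimate of the (p-th power) sliced-Wasserstein distance
    between two equally sized batches of d-dimensional samples."""
    d = encoded.shape[1]
    # Draw random projection directions uniformly on the unit sphere in R^d.
    theta = torch.randn(num_projections, d, device=encoded.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # Project both sample sets onto every direction: shape (batch, num_projections).
    proj_x = encoded @ theta.t()
    proj_y = prior_samples @ theta.t()
    # In 1-D, optimal transport matches sorted samples, so sort each projection.
    proj_x, _ = torch.sort(proj_x, dim=0)
    proj_y, _ = torch.sort(proj_y, dim=0)
    return (proj_x - proj_y).abs().pow(p).mean()

# Illustrative use in a training step (encoder, decoder, reconstruction_loss,
# and the weight lam are hypothetical names for this sketch):
# z = encoder(x)
# loss = reconstruction_loss(decoder(z), x) + lam * sliced_wasserstein(z, torch.randn_like(z))
```

Because one-dimensional optimal transport reduces to sorting, this regularizer only needs random projections and sorts rather than a full OT solve in the latent dimension, which is what makes the sliced variant cheap to add to the autoencoder loss.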