no code implementations • ICML 2018 • Jerry Li, Aleksander Madry, John Peebles, Ludwig Schmidt
While Generative Adversarial Networks (GANs) have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, either in theory or in practice.
no code implementations • 10 Apr 2018 • Ilias Diakonikolas, Daniel M. Kane, John Peebles
We give the first identity tester for this problem with sub-learning sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound.
no code implementations • 9 Aug 2017 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\varepsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n$, $\varepsilon$, and $\delta$.
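To make the rate concrete, here is a tiny helper that evaluates this bound; the constant hidden by the $\Theta$ is unspecified in the statement, so the factor `c` below is illustrative only.

```python
import math

def identity_testing_sample_complexity(n: int, eps: float, delta: float, c: float = 1.0) -> float:
    """Evaluate c * (1/eps^2) * (sqrt(n * log(1/delta)) + log(1/delta)),
    the optimal identity-testing sample complexity up to the constant c."""
    log_term = math.log(1.0 / delta)
    return c * (math.sqrt(n * log_term) + log_term) / eps ** 2

# For moderate delta the sqrt(n * log(1/delta)) term dominates:
print(identity_testing_sample_complexity(n=10**6, eps=0.1, delta=0.01))
```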
no code implementations • 11 Nov 2016 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.
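As a sketch of the style of statistic used in this line of work (not necessarily the exact test from this paper): with Poissonized sample counts $X_i \sim \mathrm{Poi}(m p_i)$ and $Y_i \sim \mathrm{Poi}(m q_i)$, the quantity $Z = \sum_i \left((X_i - Y_i)^2 - X_i - Y_i\right)$ is an unbiased estimator of $m^2 \|p - q\|_2^2$, and thresholding it yields an $\ell_2$ closeness tester. Names and parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_closeness_stat(x_counts: np.ndarray, y_counts: np.ndarray) -> float:
    """Unbiased estimator of m^2 * ||p - q||_2^2 from Poissonized counts
    x_counts ~ Poi(m * p_i), y_counts ~ Poi(m * q_i)."""
    return float(np.sum((x_counts - y_counts) ** 2 - x_counts - y_counts))

# Toy usage on a domain of n = 1000 elements with m = 5000 expected samples.
n, m = 1000, 5000
p = np.full(n, 1.0 / n)                                 # uniform
q = p * np.where(np.arange(n) % 2 == 0, 1.5, 0.5)       # alternating +-50% perturbation

x1 = rng.poisson(m * p)
x2 = rng.poisson(m * p)
y = rng.poisson(m * q)
print(l2_closeness_stat(x1, x2))  # near 0 in expectation (same distribution)
print(l2_closeness_stat(x1, y))   # ~ m^2 * ||p - q||_2^2 = 6250 in expectation
```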
no code implementations • ICLR 2018 • Jerry Li, Aleksander Madry, John Peebles, Ludwig Schmidt
This suggests that relying on a first-order approximation of the discriminator, which is the de facto standard in all existing GAN dynamics, might be one of the factors that make GAN training so challenging in practice.
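A toy illustration of how first-order dynamics can fail (a bilinear min-max game, not the GMM-GAN setting the paper actually analyzes): when each player takes a gradient step against the other's *current* parameters, as in standard GAN training, the iterates spiral away from the equilibrium instead of converging to it.

```python
import math

# Toy min-max game: min_theta max_psi  f(theta, psi) = theta * psi,
# with unique equilibrium at (0, 0). Simultaneous first-order updates
# increase the distance from the equilibrium at every step.
theta, psi, lr = 1.0, 1.0, 0.1
traj = []
for _ in range(200):
    g_theta = psi            # df/dtheta at current parameters
    g_psi = theta            # df/dpsi at current parameters
    theta -= lr * g_theta    # "generator" descent step
    psi += lr * g_psi        # "discriminator" ascent step
    traj.append(math.hypot(theta, psi))

print(traj[0], traj[-1])  # the distance from (0, 0) grows over time
```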
no code implementations • 6 Jul 2019 • Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee
We then build on these lower bounds to give $\Omega(n/\log{n})$ lower bounds for testing monotonicity over a matching poset of size $n$ and significantly improved lower bounds over the hypercube poset.
no code implementations • 14 Sep 2020 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price
To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal-sized samples.
1 code implementation • ECCV 2020 • William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, Antonio Torralba
In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal.
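Concretely, the paper estimates this penalty stochastically: for Rademacher vectors $v$, the variance of $v^\top H v$ vanishes exactly when the Hessian's off-diagonal entries are zero, and $v^\top H v$ is approximated with a second-order central finite difference. Below is a minimal NumPy sketch of that estimator (toy generator functions; the probe count and step size are illustrative, and the paper additionally applies the penalty to the generator's intermediate activations, which this sketch omits).

```python
import numpy as np

rng = np.random.default_rng(0)

def hessian_penalty(G, z, num_probes=16, eps=0.1):
    """Stochastic Hessian Penalty estimate: the variance over Rademacher
    probes v of the second-order central finite difference
    (G(z + eps*v) - 2*G(z) + G(z - eps*v)) / eps**2, which approximates
    v^T H v; the variance is ~0 exactly when the Hessian is diagonal."""
    probes = []
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=z.shape)
        second_diff = (G(z + eps * v) - 2.0 * G(z) + G(z - eps * v)) / eps ** 2
        probes.append(second_diff)
    # Variance across probes, averaged over output dimensions.
    return float(np.mean(np.var(np.stack(probes), axis=0)))

# Toy check: a function with no cross terms has a diagonal Hessian and
# incurs ~0 penalty; one with an interaction term z[0]*z[1] does not.
z = rng.normal(size=2)
print(hessian_penalty(lambda z: z[0] ** 2 + z[1] ** 2, z))  # ~ 0
print(hessian_penalty(lambda z: z[0] * z[1], z))            # > 0
```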