no code implementations • 14 Feb 2022 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
A common explanation for the failure of deep networks to generalize out-of-distribution is that they fail to recover the "correct" features.
no code implementations • ICLR 2022 • Bingbin Liu, Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
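For readers unfamiliar with NCE, here is a minimal sketch (not this paper's code): an unnormalized 1-D Gaussian model is fit by training a logistic classifier to distinguish data samples from samples of a known noise distribution, with the log normalizer learned as a free parameter. The toy data and all names below are illustrative assumptions.

```python
# Minimal NCE sketch: fit an unnormalized Gaussian by classifying data vs. noise samples.
import math
import torch

torch.manual_seed(0)
data = torch.randn(1000) * 0.5 + 2.0   # samples from the (unknown) data distribution
noise = torch.randn(1000 * 5)          # k = 5 noise samples per data point, p_n = N(0, 1)
k = 5.0

mu = torch.zeros(1, requires_grad=True)        # model mean
log_prec = torch.zeros(1, requires_grad=True)  # model log-precision
log_c = torch.zeros(1, requires_grad=True)     # learned log normalizing constant

def log_unnorm_model(x):
    # log of the unnormalized model density, plus the learned normalizer
    return -0.5 * torch.exp(log_prec) * (x - mu) ** 2 + log_c

def log_noise(x):
    # log density of the standard normal noise distribution
    return -0.5 * x ** 2 - 0.5 * math.log(2 * math.pi)

opt = torch.optim.Adam([mu, log_prec, log_c], lr=0.05)
for _ in range(500):
    # logit of "this sample came from the data" vs. "this sample came from the noise"
    g_data = log_unnorm_model(data) - log_noise(data) - math.log(k)
    g_noise = log_unnorm_model(noise) - log_noise(noise) - math.log(k)
    # NCE objective: -E_data[log sigma(g)] - k * E_noise[log(1 - sigma(g))]
    loss = torch.nn.functional.softplus(-g_data).mean() + k * torch.nn.functional.softplus(g_noise).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu.item(), torch.exp(-0.5 * log_prec).item())  # should roughly approach 2.0 and 0.5
```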
no code implementations • ICLR 2022 • Ifigeneia Apostolopoulou, Ian Char, Elan Rosenfeld, Artur Dubrawski
Moreover, the architectures used for this class of models favor local interactions: the conditioning factors of the involved distributions depend only on latent variables in neighboring layers.
no code implementations • 18 Jun 2021 • Yining Chen, Elan Rosenfeld, Mark Sellke, Tengyu Ma, Andrej Risteski
Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments.
no code implementations • 25 Feb 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them -- this objective is broadly known as domain generalization.
no code implementations • ICLR 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
We furthermore present the first results in the non-linear regime: we demonstrate that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution; this is precisely the issue it was intended to solve.
no code implementations • 10 Jul 2020 • Ifigeneia Apostolopoulou, Elan Rosenfeld, Artur Dubrawski
The Variational Autoencoder (VAE) is a powerful framework for learning probabilistic latent variable generative models.
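As background on the framework only (this is not the model proposed in the paper), a minimal VAE sketch with a Gaussian encoder, Bernoulli decoder, and reparameterized ELBO might look as follows; the architecture and dimensions are illustrative assumptions.

```python
# Minimal VAE sketch: amortized Gaussian posterior, Bernoulli likelihood, negative ELBO loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)              # q(z|x) parameters
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization trick
        logits = self.dec(z)                                    # Bernoulli likelihood p(x|z)
        recon = F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
        kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1).sum(-1)  # KL(q(z|x) || N(0, I))
        return (recon + kl).mean()                              # negative ELBO over the batch

loss = TinyVAE()(torch.rand(32, 784))  # inputs in [0, 1] for the Bernoulli likelihood
loss.backward()
```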
no code implementations • ICML 2020 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier.
no code implementations • 25 Sep 2019 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.
7 code implementations • 8 Feb 2019 • Jeremy M Cohen, Elan Rosenfeld, J. Zico Kolter
We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm.
Ranked #2 on Robust classification on ImageNet
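The procedure this abstract describes is randomized smoothing. Below is a simplified sketch of prediction plus certification under Gaussian noise, assuming a generic `base_classifier` and ImageNet-style 1000 classes; the released code implements the full procedure (including a separate sample for selecting the top class), so treat this only as an illustration of the idea.

```python
# Simplified randomized-smoothing certificate sketch (not the official implementation).
import torch
from scipy.stats import norm, binomtest

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, batch=100, num_classes=1000):
    """Return the smoothed prediction for x and a certified l2 radius, or abstain."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(0, n, batch):
            # classify Gaussian-perturbed copies of x with the base classifier
            noisy = x.unsqueeze(0) + sigma * torch.randn(batch, *x.shape)
            preds = base_classifier(noisy).argmax(dim=1)
            counts += torch.bincount(preds, minlength=num_classes)
    top_class = int(counts.argmax())
    # one-sided lower confidence bound (Clopper-Pearson) on the top class probability under noise
    ci = binomtest(int(counts[top_class]), n).proportion_ci(confidence_level=1 - 2 * alpha, method="exact")
    if ci.low <= 0.5:
        return None, 0.0                      # abstain: no certificate at this confidence level
    radius = sigma * norm.ppf(ci.low)         # certified l2 radius
    return top_class, float(radius)
```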