no code implementations • 7 Nov 2023 • Elan Rosenfeld, Andrej Risteski
We identify a new phenomenon in neural network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data.
no code implementations • 5 Nov 2023 • Elan Rosenfeld, Nir Rosenfeld
The goal of strategic classification is to learn decision rules which are robust to strategic input manipulation.
no code implementations • 6 Oct 2023 • Sorawit Saengkyongam, Elan Rosenfeld, Pradeep Ravikumar, Niklas Pfister, Jonas Peters
In this paper, we consider the task of intervention extrapolation: predicting how interventions affect an outcome, even when those interventions are not observed at training time. We show that identifiable representations can provide an effective solution to this task even when the interventions affect the outcome non-linearly.
no code implementations • NeurIPS 2023 • Simon Buchholz, Goutham Rajendran, Elan Rosenfeld, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
We study the problem of learning causal representations from unknown, latent interventions in a general setting, where the latent distribution is Gaussian but the mixing function is completely general.
no code implementations • 8 Oct 2022 • Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri
Recent advances in learning aligned multimodal representations have been primarily driven by training large neural networks on massive, noisy paired-modality datasets.
2 code implementations • 14 Feb 2022 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Towards this end, we introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
no code implementations • ICLR 2022 • Bingbin Liu, Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models.
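To make the NCE objective concrete, here is a minimal sketch of the binary NCE loss: the model is trained to discriminate data samples from samples drawn from a known noise distribution, using the log-density difference as the classifier logit. The function names `log_model` and `log_noise` are hypothetical interfaces, not from the paper; equal numbers of data and noise samples are assumed.

```python
import numpy as np

def nce_loss(log_model, log_noise, data, noise_samples):
    """Binary NCE objective for an unnormalized model (sketch).

    log_model(x): unnormalized log-density of the learned model at x
                  (the log-partition function is folded in as a free parameter)
    log_noise(x): log-density of the known noise distribution at x
    """
    def log_sigmoid(z):
        # numerically plain logistic; fine for a sketch
        return np.log(1.0 / (1.0 + np.exp(-z)))

    # Data samples should be classified as "real": logit = log p_theta - log q
    loss_data = -log_sigmoid(log_model(data) - log_noise(data)).mean()
    # Noise samples should be classified as "noise": logit = log q - log p_theta
    loss_noise = -log_sigmoid(log_noise(noise_samples) - log_model(noise_samples)).mean()
    return loss_data + loss_noise
```

At the optimum (with infinite noise samples) the model's log-density matches the data log-density, which is what makes NCE statistically consistent.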
no code implementations • ICLR 2022 • Ifigeneia Apostolopoulou, Ian Char, Elan Rosenfeld, Artur Dubrawski
Moreover, when designing the conditioning factors of the involved distributions, the architecture of this class of models favors local interactions among latent variables in neighboring layers.
no code implementations • 18 Jun 2021 • Yining Chen, Elan Rosenfeld, Mark Sellke, Tengyu Ma, Andrej Risteski
Domain generalization aims to perform well on unseen test environments given data from a limited number of training environments.
no code implementations • 25 Feb 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
A popular assumption for out-of-distribution generalization is that the training data comprises sub-datasets, each drawn from a distinct distribution; the goal is then to "interpolate" these distributions and "extrapolate" beyond them. This objective is broadly known as domain generalization.
no code implementations • ICLR 2021 • Elan Rosenfeld, Pradeep Ravikumar, Andrej Risteski
We furthermore present the first results in the non-linear regime: we demonstrate that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution; this is precisely the issue it was intended to solve.
no code implementations • 10 Jul 2020 • Ifigeneia Apostolopoulou, Elan Rosenfeld, Artur Dubrawski
The Variational Autoencoder (VAE) is a powerful framework for learning probabilistic latent variable generative models.
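A central quantity in the VAE framework is the KL term of the evidence lower bound, which for a diagonal-Gaussian posterior and a standard-normal prior has a closed form. The sketch below computes that term; the parameterization (mean and log-variance per latent dimension) is the standard one, not anything specific to this paper.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.

    This is the regularization term of the VAE's evidence lower bound:
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)).
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

When the posterior equals the prior (mu = 0, log_var = 0) the KL term vanishes, as expected.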
no code implementations • ICML 2020 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier.
no code implementations • 25 Sep 2019 • Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
This paper considers label-flipping attacks, a type of data poisoning attack where an adversary relabels a small number of examples in a training set in order to degrade the performance of the resulting classifier.
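One way to defend against such attacks is to randomize over labels at training time and aggregate: train many copies of the classifier on independently flipped labels and predict by majority vote, so that no small set of adversarial label flips can change the vote with high probability. The sketch below illustrates the idea for binary labels; the `train_and_predict` interface and the specific parameters are hypothetical, not the paper's exact construction.

```python
import numpy as np

def smoothed_label_vote(train_and_predict, X, y, x_test,
                        flip_prob=0.2, n=100, seed=0):
    """Majority vote over classifiers trained on randomly flipped binary labels.

    train_and_predict(X, y, x): trains a classifier on (X, y) and returns a
    label in {0, 1} for input x (hypothetical interface for this sketch).
    """
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n):
        # Flip each training label independently with probability flip_prob.
        flips = rng.random(len(y)) < flip_prob
        y_noisy = np.where(flips, 1 - np.asarray(y), y)
        votes.append(train_and_predict(X, y_noisy, x_test))
    # Majority vote over the n randomized training runs.
    return int(np.mean(votes) > 0.5)
```

Because each individual training label only influences the final vote through randomized copies, the aggregate prediction is provably stable under a bounded number of label flips.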
10 code implementations • 8 Feb 2019 • Jeremy M Cohen, Elan Rosenfeld, J. Zico Kolter
We show how to turn any classifier that classifies well under Gaussian noise into a new classifier that is certifiably robust to adversarial perturbations under the $\ell_2$ norm.
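The smoothed classifier described here can be estimated by Monte Carlo: add i.i.d. Gaussian noise to the input many times, query the base classifier on each noisy copy, and return the most frequent class. This sketch shows the prediction step only (the paper's certification procedure with confidence bounds is omitted); the `base_classify` interface is a hypothetical stand-in for a trained network.

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma=0.25, n=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c  P( base_classify(x + N(0, sigma^2 I)) = c ).

    base_classify: function mapping an input array to an integer class label.
    """
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=np.shape(x))
        label = base_classify(noisy)
        counts[label] = counts.get(label, 0) + 1
    # Majority vote over the n noisy queries.
    return max(counts, key=counts.get)
```

A larger sigma yields certified robustness at larger l2 radii, at the cost of accuracy of the base classifier under noise.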
Ranked #3 on Robust classification on CIFAR-10