no code implementations • 12 Feb 2024 • Md Musfiqur Rahman, Matt Jordan, Murat Kocaoglu
As an application of our algorithm, we evaluate two large conditional generative models that are pre-trained on the CelebA dataset by analyzing the strength of spurious correlations and the level of disentanglement they achieve.
no code implementations • 14 Oct 2022 • Matt Jordan, Jonathan Hayase, Alexandros G. Dimakis, Sewoong Oh
Neural network verification aims to provide provable bounds for the output of a neural network for a given input range.
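The entry above describes verification in general terms. A minimal sketch of one standard approach, interval bound propagation (a common baseline, not necessarily this paper's method), shows what "provable bounds for a given input range" means concretely; the toy network and box sizes below are illustrative assumptions:

```python
import numpy as np

def interval_bound_propagation(weights, biases, l, u):
    """Propagate an input box [l, u] through a ReLU network.

    Splitting each weight matrix into positive and negative parts yields
    sound (if loose) element-wise bounds on every layer's output.
    """
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_l = W_pos @ l + W_neg @ u + b
        new_u = W_pos @ u + W_neg @ l + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_l, new_u = np.maximum(new_l, 0.0), np.maximum(new_u, 0.0)
        l, u = new_l, new_u
    return l, u

# Toy 2-layer network: every input in the box must land inside the bounds.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
lo, hi = interval_bound_propagation(Ws, bs, -0.1 * np.ones(3), 0.1 * np.ones(3))

# Spot check on a random point from the input box.
x = rng.uniform(-0.1, 0.1, size=3)
y = Ws[1] @ np.maximum(Ws[0] @ x + bs[0], 0.0) + bs[1]
assert np.all(lo <= y + 1e-9) and np.all(y <= hi + 1e-9)
```

Interval bounds are cheap but loose; tighter abstract domains (e.g. zonotopes) trade computation for precision.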
1 code implementation • NeurIPS 2021 • Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis
The problem is to recover the representation R(x) of an image x when we are given only a corrupted version A(x), for some known forward operator A.
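To make the setup concrete, here is a hedged sketch of the naive baseline the problem statement implies: invert the known linear forward operator A by least squares, then apply the representation map. All names and dimensions below are illustrative assumptions; the paper's contribution is recovering R(x) directly rather than reconstructing x first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: x is a flattened "image", A a known linear forward
# operator (undersampling, so A(x) loses information), and R a stand-in
# for a fixed pre-trained representation network.
n, m = 64, 48                       # signal dim, number of measurements
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))     # known forward operator
W_R = rng.standard_normal((16, n))  # stand-in weights for R
R = lambda z: np.tanh(W_R @ z)

y = A @ x                           # observed corruption A(x)

# Naive baseline: minimum-norm least-squares estimate of x, then apply R.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
r_hat = R(x_hat)
```

Because m < n, the least-squares estimate matches the measurements exactly but need not match x, so r_hat can be far from R(x); this gap is what motivates estimating the representation directly.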
1 code implementation • 6 Jul 2021 • Matt Jordan, Alexandros G. Dimakis
We present a scalable technique for upper bounding the Lipschitz constant of generative models.
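For context on what such an upper bound improves over, the classic naive bound multiplies the spectral norms of the layers. The sketch below implements that baseline (not the paper's technique) for a toy ReLU network; the check confirms no observed slope exceeds it:

```python
import numpy as np

def naive_lipschitz_upper_bound(weights):
    """Classic global upper bound: product of layer spectral norms.

    Valid for networks with 1-Lipschitz activations (ReLU, tanh, ...).
    Usually very loose, which motivates tighter scalable techniques.
    """
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # largest singular value
    return bound

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
ub = naive_lipschitz_upper_bound(Ws)

# Empirical check: ||f(x) - f(y)|| / ||x - y|| never exceeds the bound.
f = lambda z: Ws[1] @ np.maximum(Ws[0] @ z, 0.0)
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert np.linalg.norm(f(x) - f(y)) <= ub * np.linalg.norm(x - y) + 1e-9
```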
1 code implementation • NeurIPS 2020 • Matt Jordan, Alexandros G. Dimakis
The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation.
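A quick way to see what the local Lipschitz constant measures is to sample slopes over a small ball. The sketch below is a heuristic lower estimate (certified methods, like the paper's, bound the quantity from above); the toy function and sampling scheme are assumptions for illustration:

```python
import numpy as np

def sampled_local_lipschitz(f, x0, radius, n_samples=200, seed=0):
    """Heuristic lower estimate of the local Lipschitz constant of a scalar
    function f on an l2 ball around x0: the largest slope observed between
    sampled point pairs inside the ball.
    """
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    best = 0.0
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u *= radius * rng.random() / np.linalg.norm(u)
        v = rng.standard_normal(d)
        v *= radius * rng.random() / np.linalg.norm(v)
        x, y = x0 + u, x0 + v
        denom = np.linalg.norm(x - y)
        if denom > 1e-12:
            best = max(best, abs(f(x) - f(y)) / denom)
    return best

# Toy piecewise-linear function: f(z) = sum(relu(z)). Its gradient is a
# 0/1 indicator vector, so its l2 Lipschitz constant is at most sqrt(3).
f = lambda z: np.maximum(z, 0.0).sum()
est = sampled_local_lipschitz(f, np.zeros(3), radius=1.0)
assert 0.0 < est <= np.sqrt(3) + 1e-9
```

The gap between such sampled estimates and certified upper bounds is exactly what motivates exact or provable local Lipschitz computation.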
1 code implementation • NeurIPS 2019 • Matt Jordan, Justin Lewis, Alexandros G. Dimakis
We relate the problem of computing the pointwise robustness of these networks to that of computing the largest norm ball with a fixed center that can be contained in a non-convex polytope.
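The convex building block of that reduction is simple to state: the largest l2 ball centered at x0 inside a polytope {z : Az <= b} has radius equal to x0's minimum distance to any bounding hyperplane. The sketch below implements just this convex case (the paper handles unions of such regions); the unit-box example is an illustrative assumption:

```python
import numpy as np

def max_ball_radius(A, b, x0):
    """Radius of the largest l2 ball centered at x0 inside the convex
    polytope {z : Az <= b}: the minimum hyperplane distance
    min_i (b_i - a_i . x0) / ||a_i||.
    """
    slack = b - A @ x0
    assert np.all(slack >= 0), "x0 must lie inside the polytope"
    return float(np.min(slack / np.linalg.norm(A, axis=1)))

# Unit box [-1, 1]^2 written as Az <= b: the largest ball at the origin
# has radius 1; centered at (0.5, 0) it shrinks to 0.5.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
assert np.isclose(max_ball_radius(A, b, np.zeros(2)), 1.0)
assert np.isclose(max_ball_radius(A, b, np.array([0.5, 0.0])), 0.5)
```

For a ReLU network, each linear region is such a polytope, and pointwise robustness asks for the largest ball fitting inside the (non-convex) union of regions sharing a label.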
no code implementations • 21 Feb 2019 • Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis
To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework fusing different attack styles.