Search Results for author: Matt Jordan

Found 7 papers, 4 papers with code

Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand

no code implementations • 12 Feb 2024 • Md Musfiqur Rahman, Matt Jordan, Murat Kocaoglu

As an application of our algorithm, we evaluate two large conditional generative models that are pre-trained on the CelebA dataset by analyzing the strength of spurious correlations and the level of disentanglement they achieve.

Causal Inference · Disentanglement

Zonotope Domains for Lagrangian Neural Network Verification

no code implementations • 14 Oct 2022 • Matt Jordan, Jonathan Hayase, Alexandros G. Dimakis, Sewoong Oh

Neural network verification aims to provide provable bounds for the output of a neural network for a given input range.
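As a hedged illustration of the verification setting (not the paper's Lagrangian method): a zonotope is a set {c + G·eps : eps in [-1, 1]^k} that propagates exactly through affine layers and concretizes to coordinate-wise bounds. All shapes and values below are hypothetical, and the ReLU relaxations the paper builds on are omitted.

```python
import numpy as np

def affine_push(c, G, W, b):
    # Pushing a zonotope (center c, generators G) through an affine
    # layer x -> W @ x + b is exact: the image is another zonotope.
    return W @ c + b, W @ G

def interval_bounds(c, G):
    # Concretize to coordinate-wise bounds: since eps ranges over
    # [-1, 1]^k, the radius per coordinate is the row-wise sum of |G|.
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

rng = np.random.default_rng(0)
c = rng.standard_normal(4)       # input zonotope center
G = rng.standard_normal((4, 3))  # input zonotope generators
W = rng.standard_normal((2, 4))  # illustrative layer weights
b = rng.standard_normal(2)

c2, G2 = affine_push(c, G, W, b)
lo, hi = interval_bounds(c2, G2)  # provable output bounds for this layer
```

Every input in the original zonotope maps inside [lo, hi]; the verification challenge the paper addresses is keeping such bounds tight once nonlinear ReLU layers enter the picture.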

Inverse Problems Leveraging Pre-trained Contrastive Representations

1 code implementation • NeurIPS 2021 • Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis

The problem is to recover the representation R(x) of an image x, given only a corrupted version A(x) for some known forward operator A.

Representation Learning
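To make the setup concrete, here is a toy linear stand-in for the problem, not the paper's approach: R and A below are hypothetical linear maps (rather than a contrastive encoder and an image corruption), and a simple least-squares decoder is fit from measurements A(x) to targets R(x).

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.standard_normal((5, 10))   # toy "representation" map R(x)
A = rng.standard_normal((7, 10))   # toy known forward operator A(x)

# Baseline: fit a linear decoder D from corrupted measurements A(x)
# to representations R(x) on training data.
X_train = rng.standard_normal((10, 200))
M_train = A @ X_train              # measurements A(x)
T_train = R @ X_train              # targets R(x)
D, *_ = np.linalg.lstsq(M_train.T, T_train.T, rcond=None)

# Apply the decoder to a new corrupted input.
x_new = rng.standard_normal(10)
r_hat = D.T @ (A @ x_new)          # estimated representation
r_true = R @ x_new                 # what we would like to recover
```

When A has a nontrivial nullspace, exact recovery of R(x) is impossible in general; the decoder can only approximate it, which is what makes the inverse problem interesting.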

Provable Lipschitz Certification for Generative Models

1 code implementation • 6 Jul 2021 • Matt Jordan, Alexandros G. Dimakis

We present a scalable technique for upper bounding the Lipschitz constant of generative models.
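For context, the classic (and much looser) upper bound that scalable techniques like this improve on is the product of per-layer spectral norms; for a feedforward network with 1-Lipschitz activations, this product bounds the L2 Lipschitz constant. The weights below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two illustrative layer weight matrices (8 -> 16 -> 8).
layers = [rng.standard_normal((16, 8)), rng.standard_normal((8, 16))]

def naive_upper_bound(weight_mats):
    # Product of per-layer spectral norms upper-bounds the L2
    # Lipschitz constant of a feedforward net with 1-Lipschitz
    # activations (e.g. ReLU). Typically very loose in practice.
    bound = 1.0
    for W in weight_mats:
        bound *= np.linalg.norm(W, 2)  # largest singular value
    return bound
```

The gap between this naive product and the true constant is exactly what tighter certification methods aim to close.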

Exactly Computing the Local Lipschitz Constant of ReLU Networks

1 code implementation • NeurIPS 2020 • Matt Jordan, Alexandros G. Dimakis

The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation.

Fairness
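A simple baseline for this quantity, sketched here with a hypothetical one-hidden-layer ReLU network (not the paper's exact algorithm): sample points in the input region and take the maximum Jacobian norm, which yields only a lower bound on the local Lipschitz constant.

```python
import numpy as np

# Hypothetical ReLU network f(x) = W2 @ relu(W1 @ x); the weights are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8))

def jacobian_norm(x):
    # On a ReLU net the Jacobian is piecewise constant:
    # J = W2 @ diag(1[W1 @ x > 0]) @ W1.
    mask = (W1 @ x > 0).astype(float)
    J = W2 @ (W1 * mask[:, None])
    return np.linalg.norm(J, 2)  # spectral norm -> L2 Lipschitz

def sampled_lipschitz_lower_bound(center, radius, n_samples=1000):
    # Max gradient norm over random samples in an L-inf ball: a LOWER
    # bound only, whereas the paper computes the exact local constant.
    best = 0.0
    for _ in range(n_samples):
        x = center + rng.uniform(-radius, radius, size=center.shape)
        best = max(best, jacobian_norm(x))
    return best

lb = sampled_lipschitz_lower_bound(np.zeros(4), 0.1)
```

Sampling can miss the worst-case linear region entirely, which is why an exact method matters for the robustness and fairness certificates mentioned above.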

Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes

1 code implementation • NeurIPS 2019 • Matt Jordan, Justin Lewis, Alexandros G. Dimakis

We relate the problem of computing pointwise robustness of these networks to that of computing the maximum norm ball with a fixed center that can be contained in a non-convex polytope.
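The single-polytope case of this geometric picture is easy to sketch (the hard part, which the paper tackles, is the union). Within the one linear region of a ReLU layer containing x, the largest L2 ball centered at x has radius equal to the distance from x to the nearest activation hyperplane. The weights below are hypothetical.

```python
import numpy as np

# Illustrative ReLU layer; weights are not from the paper.
rng = np.random.default_rng(1)
W = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

def region_ball_radius(x):
    # Each neuron's hyperplane w_i . x + b_i = 0 bounds the linear
    # region (a polytope) containing x. The largest L2 ball centered
    # at x inside that single polytope has radius equal to the
    # distance to the nearest such hyperplane.
    pre = W @ x + b
    dists = np.abs(pre) / np.linalg.norm(W, axis=1)
    return dists.min()

r = region_ball_radius(np.ones(3))
```

Any perturbation of norm less than r leaves the activation pattern unchanged; certifying robustness across the union of many such polytopes is the non-convex problem the paper addresses.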

Quantifying Perceptual Distortion of Adversarial Examples

no code implementations • 21 Feb 2019 • Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis

To demonstrate the value of quantifying the perceptual distortion of adversarial examples, we present and employ a unifying framework fusing different attack styles.

SSIM
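One standard perceptual-distortion measure tagged here is SSIM. A simplified, single-window version (real SSIM averages the same formula over local sliding windows) can be sketched as follows, using the standard constants for images in [0, 1]:

```python
import numpy as np

def ssim(x, y, L=1.0, k1=0.01, k2=0.03):
    # Global (single-window) SSIM between two grayscale images in
    # [0, L]. c1, c2 are the usual stabilizing constants.
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(4)
img = rng.uniform(size=(32, 32))
noisy = np.clip(img + 0.1 * rng.standard_normal((32, 32)), 0, 1)
```

SSIM equals 1 only for identical images and drops as structural distortion grows, which makes it a natural lens for comparing adversarial attack styles.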
