Search Results for author: Soheil Feizi

Found 72 papers, 28 papers with code

Lethal Dose Conjecture on Data Poisoning

no code implementations 5 Aug 2022 Wenxiao Wang, Alexander Levine, Soheil Feizi

Deep Partition Aggregation (DPA) and its extension, Finite Aggregation (FA), are recent approaches for provable defenses against data poisoning: they predict through the majority vote of many base models trained with a given learner on different subsets of the training set.
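The partition-and-vote scheme the excerpt describes can be sketched in a few lines. Everything here (the index-striding partition, `toy_learner`, the data) is an illustrative stand-in, not the paper's actual training setup.

```python
from collections import Counter

def dpa_predict(train_set, x, k, train_fn):
    """Toy sketch of Deep Partition Aggregation: split the training set
    into k disjoint partitions, train one base model per partition with
    the given learner, and predict by majority vote."""
    partitions = [train_set[i::k] for i in range(k)]      # disjoint subsets
    base_models = [train_fn(part) for part in partitions]
    votes = Counter(model(x) for model in base_models)
    return votes.most_common(1)[0][0]

# Hypothetical learner: always predicts the majority label of its partition.
def toy_learner(part):
    majority = Counter(label for _, label in part).most_common(1)[0][0]
    return lambda x: majority

data = [(i, 0) for i in range(30)] + [(i, 1) for i in range(10)]
print(dpa_predict(data, x=None, k=5, train_fn=toy_learner))   # majority class 0
```

Because each base model only sees one partition, a poisoned sample can flip at most one vote, which is what makes the majority vote certifiable.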

Data Poisoning

Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems

no code implementations 21 Jun 2022 Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang

Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.

Multi-agent Reinforcement Learning

Interpretable Mixture of Experts for Structured Data

no code implementations 5 Jun 2022 Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, Tomas Pfister

We introduce a novel framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy.

Core Risk Minimization using Salient ImageNet

no code implementations 28 Mar 2022 Sahil Singla, Mazda Moayeri, Soheil Feizi

Deep neural networks can be unreliable in the real world especially when they heavily use spurious features for their predictions.

Provable Adversarial Robustness for Fractional Lp Threat Models

1 code implementation 16 Mar 2022 Alexander Levine, Soheil Feizi

Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks.

Adversarial Robustness

Towards Better Understanding of Self-Supervised Representations

no code implementations 3 Mar 2022 Neha Kalibhat, Kanika Narang, Liang Tan, Hamed Firooz, Maziar Sanjabi, Soheil Feizi

Next, we propose a sample-wise Self-Supervised Representation Quality Score (or, Q-Score) that can be computed without access to any label information.

Self-Supervised Learning

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation

no code implementations 5 Feb 2022 Wenxiao Wang, Alexander Levine, Soheil Feizi

DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions.

Data Poisoning

Certifying Model Accuracy under Distribution Shifts

no code implementations 28 Jan 2022 Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi

Certified robustness in machine learning has primarily focused on adversarial perturbations of the input with a fixed attack budget for each point in the data distribution.

A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes

no code implementations CVPR 2022 Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi

While datasets with single-label supervision have propelled rapid advances in image classification, additional annotations are necessary in order to quantitatively assess how models make predictions.

Image Classification

Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection

no code implementations CVPR 2022 Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi

In addition, we design a robust shape completion algorithm, which is guaranteed to remove the entire patch from the images if the outputs of the patch segmenter are within a certain Hamming distance of the ground-truth patch masks.

Object Detection

Improving Deep Learning Interpretability by Saliency Guided Training

1 code implementation NeurIPS 2021 Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi

In this paper, we tackle this issue and introduce a saliency guided training procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model.

Natural Language Processing Time Series

On Hard Episodes in Meta-Learning

no code implementations 21 Oct 2021 Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi

To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning.

Meta-Learning

Salient ImageNet: How to discover spurious features in Deep Learning?

1 code implementation 8 Oct 2021 Sahil Singla, Soheil Feizi

Our methodology is based on this key idea: to identify spurious or core visual features used in model predictions, we identify spurious or core neural features (penultimate layer neurons of a robust model) via limited human supervision (e.g., using the top 5 activating images per feature).
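The "top activating images per feature" step can be illustrated with a toy activation matrix; `top_activating`, the neuron index, and the numbers below are hypothetical, not the paper's code or data.

```python
def top_activating(acts, neuron, k=5):
    """acts: one activation vector (penultimate layer) per image.
    Return indices of the k images that most strongly activate `neuron`:
    the candidates a human would inspect to label the feature as core
    or spurious."""
    ranked = sorted(range(len(acts)), key=lambda i: acts[i][neuron], reverse=True)
    return ranked[:k]

# Toy activations for 6 images over 2 neural features.
acts = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.7], [0.3, 0.95], [0.9, 0.1], [0.2, 0.6]]
print(top_activating(acts, neuron=1, k=3))   # images ranked by feature-1 activation
```

Inspecting those few images per neuron is what keeps the required human supervision limited.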

FOCUS: Familiar Objects in Common and Uncommon Settings

1 code implementation 7 Oct 2021 Priyatham Kattakinda, Soheil Feizi

Standard training datasets for deep learning often contain objects in common settings (e.g., "a horse on grass" or "a ship in water") since they are usually collected by randomly scraping the web.

Multi-Domain Self-Supervised Learning

no code implementations 29 Sep 2021 Neha Mukund Kalibhat, Yogesh Balaji, C. Bayan Bruss, Soheil Feizi

In fact, training these methods on a combination of several domains often degrades the quality of learned representations compared to the models trained on a single domain.

Contrastive Learning Representation Learning +1

Causal ImageNet: How to discover spurious features in Deep Learning?

no code implementations ICLR 2022 Sahil Singla, Soheil Feizi

Focusing on image classification, we define causal attributes as the set of visual features that are always a part of the object, while spurious attributes are the ones that are likely to co-occur with the object but are not a part of it (e.g., the attribute "fingers" for the class "band aid").

Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings

no code implementations ICCV 2021 Mazda Moayeri, Soheil Feizi

In this paper, we propose a self-supervised method to detect adversarial attacks and classify them to their respective threat models, based on a linear model operating on the embeddings from a pre-trained self-supervised encoder.

Adversarial Robustness

Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100

1 code implementation ICLR 2022 Sahil Singla, Surbhi Singla, Soheil Feizi

While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation.

Adversarial Robustness

Policy Smoothing for Provably Robust Reinforcement Learning

no code implementations ICLR 2022 Aounon Kumar, Alexander Levine, Soheil Feizi

Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting.

Adversarial Robustness Image Classification +1

Skew Orthogonal Convolutions

1 code implementation 24 May 2021 Sahil Singla, Soheil Feizi

Then, we use the Taylor series expansion of the Jacobian exponential to construct the SOC layer that is orthogonal.
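The fact behind SOC (the matrix exponential of a skew-symmetric matrix is orthogonal) can be checked numerically with a truncated Taylor series on a tiny matrix. `expm_taylor` is a minimal sketch of the idea, not the paper's convolution layer.

```python
def matmul(A, B):
    """Plain-list matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def expm_taylor(A, terms=20):
    """Truncated Taylor series exp(A) = sum_k A^k / k!.
    If A is skew-symmetric (A^T = -A), exp(A) is orthogonal."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, A)]    # A^k / k!
        result = [[r + t for r, t in zip(rr, tr)] for rr, tr in zip(result, term)]
    return result

A = [[0.0, 0.5], [-0.5, 0.0]]                     # skew-symmetric
Q = expm_taylor(A)
QtQ = matmul([list(col) for col in zip(*Q)], Q)   # Q^T Q should be ~ identity
print(all(abs(QtQ[i][j] - (i == j)) < 1e-9 for i in range(2) for j in range(2)))
```

An orthogonal Jacobian is exactly what prevents gradient magnitudes from shrinking or exploding through the layer.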

Adversarial Robustness

Understanding Overparameterization in Generative Adversarial Networks

no code implementations 12 Apr 2021 Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.

Improved, Deterministic Smoothing for L_1 Certified Robustness

1 code implementation 17 Mar 2021 Alexander Levine, Soheil Feizi

To the best of our knowledge, this is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model while allowing for an arbitrary classifier (i.e., a deep model) to be used as a base classifier and without requiring an exponential number of smoothing samples.

Low Curvature Activations Reduce Overfitting in Adversarial Training

1 code implementation ICCV 2021 Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi

In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.

Understanding Over-parameterization in Generative Adversarial Networks

no code implementations ICLR 2021 Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.

Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers

no code implementations ICLR 2021 Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.

Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks

no code implementations ICLR 2021 Alexander Levine, Soheil Feizi

Against general poisoning attacks, where no prior certified defenses exist, DPA can certify $\geq$ 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10.

Perceptual Adversarial Robustness: Generalizable Defenses Against Unforeseen Threat Models

no code implementations ICLR 2021 Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

Adversarial Defense Adversarial Robustness +1

Tight Second-Order Certificates for Randomized Smoothing

1 code implementation 20 Oct 2020 Alexander Levine, Aounon Kumar, Thomas Goldstein, Soheil Feizi

In this work, we show that there also exists a universal curvature-like bound for Gaussian random smoothing: given the exact value and gradient of a smoothed function, we compute a lower bound on the distance of a point to its closest adversarial example, called the Second-order Smoothing (SoS) robustness certificate.

Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation

2 code implementations NeurIPS 2020 Yogesh Balaji, Rama Chellappa, Soheil Feizi

To remedy this issue, robust formulations of OT with unbalanced marginal constraints have previously been proposed.

Domain Adaptation

Winning Lottery Tickets in Deep Generative Models

1 code implementation 5 Oct 2020 Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi

In this paper, we confirm the existence of winning tickets in deep generative models such as GANs and VAEs.

GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue

no code implementations 24 Sep 2020 Pirazh Khorramshahi, Hossein Souri, Rama Chellappa, Soheil Feizi

To tackle this issue, we take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.

Certifying Confidence via Randomized Smoothing

no code implementations NeurIPS 2020 Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein

It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction.
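A Gaussian-smoothing certificate of this top-two flavor (in the style of Cohen et al.'s randomized smoothing radius) can be computed in one line from the two probabilities; this is a generic sketch of such a radius, not the paper's exact confidence certificate.

```python
from statistics import NormalDist

def certified_radius(p_top, p_runner_up, sigma):
    """l2 certified radius from the top-two class probabilities under
    Gaussian smoothing with std sigma:
        R = sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up)),
    where Phi^-1 is the standard normal quantile function."""
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_top) - phi_inv(p_runner_up))

r = certified_radius(0.9, 0.05, sigma=0.5)
print(round(r, 3))
```

The radius grows as the gap between the top two probabilities widens, and shrinks to zero when they are equal.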

Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks

no code implementations NeurIPS 2020 Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi

Using OM-ImageNet, we first show that adversarial training in the latent space of images improves both standard accuracy and robustness to on-manifold attacks.

Adversarial Robustness

Deep Partition Aggregation: Provable Defense against General Poisoning Attacks

no code implementations 26 Jun 2020 Alexander Levine, Soheil Feizi

Our defense against label-flipping attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition.

Influence Functions in Deep Learning Are Fragile

no code implementations ICLR 2021 Samyadeep Basu, Philip Pope, Soheil Feizi

Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation.

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models

1 code implementation 22 Jun 2020 Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

Adversarial Defense Adversarial Robustness +1

Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning

1 code implementation 17 Jun 2020 Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson

In this paper, we argue that traditional notions of fairness that are only based on models' outputs are not sufficient when the model is vulnerable to adversarial attacks.

Decision Making Face Recognition +1

Second-Order Provable Defenses against Adversarial Attacks

no code implementations ICML 2020 Sahil Singla, Soheil Feizi

Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network.

GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences

no code implementations 2 Mar 2020 Mucong Ding, Constantinos Daskalakis, Soheil Feizi

GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.

Image-to-Image Translation Time Series

(De)Randomized Smoothing for Certifiable Defense against Patch Attacks

1 code implementation NeurIPS 2020 Alexander Levine, Soheil Feizi

In this paper, we introduce a certifiable defense against patch attacks that guarantees for a given image and patch attack size, no patch adversarial examples exist.

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness

1 code implementation ICML 2020 Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi

Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius.

Playing it Safe: Adversarial Robustness with an Abstain Option

no code implementations 25 Nov 2019 Cassidy Laidlaw, Soheil Feizi

We explore adversarial robustness in the setting in which it is acceptable for a classifier to abstain (that is, output no class) on adversarial examples.

Adversarial Robustness

Fantastic Four: Differentiable Bounds on Singular Values of Convolution Layers

1 code implementation 22 Nov 2019 Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.

Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation

1 code implementation 21 Nov 2019 Alexander Levine, Soheil Feizi

This is comparable to the observed empirical robustness of unprotected classifiers on MNIST to modern L_0 attacks, demonstrating the tightness of the proposed robustness certificate.

Robust classification

Adversarial Robustness of Flow-Based Generative Models

no code implementations 20 Nov 2019 Phillip Pope, Yogesh Balaji, Soheil Feizi

Finally, using a hybrid adversarial training procedure, we significantly boost the robustness of these generative models.

Adversarial Robustness

On Second-Order Group Influence Functions for Black-Box Predictions

no code implementations ICML 2020 Samyadeep Basu, Xuchen You, Soheil Feizi

Often we want to identify an influential group of training samples in a particular test prediction for a given machine learning model.

BIG-bench Machine Learning

Quantum Wasserstein Generative Adversarial Networks

1 code implementation NeurIPS 2019 Shouvanik Chakrabarti, Yiming Huang, Tongyang Li, Soheil Feizi, Xiaodi Wu

The study of quantum generative models is well-motivated, not only because of its importance in quantum machine learning and quantum chemistry but also because of the perspective of its implementation on near-term quantum machines.

Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks

no code implementations 23 Oct 2019 Alexander Levine, Soheil Feizi

An example of an attack method based on a non-additive threat model is the Wasserstein adversarial attack proposed by Wong et al. (2019), where the distance between an image and its adversarial example is determined by the Wasserstein metric ("earth-mover distance") between their normalized pixel intensities.
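In one dimension, the earth-mover distance between two normalized histograms reduces to the accumulated absolute difference of their running (CDF) mass. A minimal sketch, assuming unit bin spacing and toy histograms:

```python
def emd_1d(p, q):
    """Earth-mover (1-Wasserstein) distance between two 1-D histograms
    with unit bin spacing: normalize both, then sum the absolute running
    difference of mass bin by bin."""
    total_p, total_q = sum(p), sum(q)
    cdf_gap, cost = 0.0, 0.0
    for a, b in zip(p, q):
        cdf_gap += a / total_p - b / total_q
        cost += abs(cdf_gap)
    return cost

print(emd_1d([1, 0, 0], [0, 0, 1]))   # all mass moved 2 bins over
```

Unlike an additive Lp distance, this cost depends on how far mass moves, which is what makes the threat model non-additive.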

Adversarial Attack Image Classification

Deep k-NN Defense against Clean-label Data Poisoning Attacks

1 code implementation 29 Sep 2019 Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson

Targeted clean-label data poisoning is a type of adversarial attack on machine learning systems in which an adversary injects a few correctly-labeled, minimally-perturbed samples into the training data, causing a model to misclassify a particular test sample during inference.
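The filtering intuition behind a k-NN defense can be sketched with 1-D features: flag any training point whose label disagrees with the majority label of its k nearest neighbours in feature space. `knn_filter` and the toy data are illustrative, not the paper's deep-feature pipeline.

```python
def knn_filter(points, labels, k=3):
    """Return indices of suspicious training points: those whose own label
    disagrees with the majority label of their k nearest neighbours
    (distances here are plain 1-D absolute differences)."""
    flagged = []
    for i, (x, y) in enumerate(zip(points, labels)):
        neigh = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: abs(points[j] - x))[:k]
        majority = max(set(labels[j] for j in neigh),
                       key=lambda lab: sum(labels[j] == lab for j in neigh))
        if majority != y:
            flagged.append(i)
    return flagged

feats  = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 0.15]   # last point sits in cluster 'a'
labels = ['a', 'a', 'a', 'b', 'b', 'b', 'b']    # ...but carries label 'b'
print(knn_filter(feats, labels))
```

A clean-label poison lands near the target class in feature space while keeping its own (different) label, which is exactly the disagreement this filter detects.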

Adversarial Attack Data Poisoning

Curvature-based Robustness Certificates against Adversarial Examples

no code implementations 25 Sep 2019 Sahil Singla, Soheil Feizi

We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples.

Interpretable Adversarial Training for Text

no code implementations 30 May 2019 Samuel Barham, Soheil Feizi

SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities.
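The directional constraint can be illustrated in 2-D: pick the direction toward the vocabulary embedding whose direction is most cosine-similar to the perturbation, then project the perturbation onto it. The function name, embeddings, and numbers here are hypothetical, not SPGD's implementation.

```python
import math

def cos(u, v):
    """Cosine similarity of two 2-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def project_onto_best_direction(x, delta, vocab):
    """Among directions from embedding x toward each vocabulary embedding,
    choose the one most cosine-similar to the perturbation delta, then
    orthogonally project delta onto that direction."""
    dirs = [tuple(w - xi for w, xi in zip(word, x)) for word in vocab]
    best = max(dirs, key=lambda d: cos(delta, d))
    scale = sum(a * b for a, b in zip(delta, best)) / sum(b * b for b in best)
    return tuple(scale * b for b in best)

x = (0.0, 0.0)
vocab = [(1.0, 0.0), (0.0, 1.0)]          # two toy word embeddings
print(project_onto_best_direction(x, (0.9, 0.1), vocab))
```

Constraining perturbations to directions toward real word embeddings keeps the perturbed input near interpretable points in embedding space.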

Word Embeddings

Functional Adversarial Attacks

1 code implementation NeurIPS 2019 Cassidy Laidlaw, Soheil Feizi

For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments.

Adversarial Attack

Certifiably Robust Interpretation in Deep Learning

no code implementations 28 May 2019 Alexander Levine, Sahil Singla, Soheil Feizi

Deep learning interpretation is essential to explain the reasoning behind model predictions.

Adversarially Robust Distillation

2 code implementations 23 May 2019 Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student.

Adversarial Robustness Knowledge Distillation

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation

1 code implementation 1 Feb 2019 Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi

Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.

Feature Importance General Classification

Normalized Wasserstein Distance for Mixture Distributions with Applications in Adversarial Learning and Domain Adaptation

1 code implementation 1 Feb 2019 Yogesh Balaji, Rama Chellappa, Soheil Feizi

Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance.

Domain Adaptation

Robustness Certificates Against Adversarial Examples for ReLU Networks

no code implementations 1 Feb 2019 Sahil Singla, Soheil Feizi

These robustness certificates leverage the piece-wise linear structure of ReLU networks and use the fact that in a polyhedron around a given sample, the prediction function is linear.
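The piece-wise linearity fact is easy to verify on a tiny one-hidden-layer ReLU net: within the activation pattern's polyhedron around a sample, the network coincides with a fixed linear function. This sketch assumes a toy two-unit network, not the paper's certificate computation.

```python
def relu_net(x, W1, W2):
    """One-hidden-layer ReLU network with scalar output (lists as vectors)."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * hi for w, hi in zip(W2, h))

def local_linear(x, W1, W2):
    """The linear function the net equals inside x's activation polyhedron:
    drop rows of W1 whose unit is inactive at x, then compose with W2."""
    active = [sum(w * xi for w, xi in zip(row, x)) > 0 for row in W1]
    coeff = [sum(W2[j] * W1[j][i] for j in range(len(W1)) if active[j])
             for i in range(len(x))]
    return lambda z: sum(c * zi for c, zi in zip(coeff, z))

W1 = [[1.0, -1.0], [-2.0, 0.5]]
W2 = [1.0, 3.0]
x = (1.0, 0.2)
f = local_linear(x, W1, W2)
z = (1.05, 0.22)                 # small step that keeps the activation pattern
print(abs(relu_net(z, W1, W2) - f(z)) < 1e-12)
```

A certificate then only needs to bound how far the input can move before the activation pattern, and hence the linear function, changes.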

General Classification Multi-Label Classification

Porcupine Neural Networks: Approximating Neural Network Landscapes

no code implementations NeurIPS 2018 Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.

Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs

1 code implementation ICLR 2019 Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi

Building on the success of deep learning, two modern approaches to learn a probability model from the data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).

Are adversarial examples inevitable?

no code implementations ICLR 2019 Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein

Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.

Tensor Biclustering

1 code implementation NeurIPS 2017 Soheil Feizi, Hamid Javadi, David Tse

Consider a dataset where data is collected on multiple features of multiple individuals at multiple time points.

Understanding GANs: the LQG Setting

no code implementations ICLR 2018 Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse

Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data.

Porcupine Neural Networks: (Almost) All Local Optima are Global

1 code implementation 5 Oct 2017 Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.

Maximally Correlated Principal Component Analysis

no code implementations 17 Feb 2017 Soheil Feizi, David Tse

For jointly Gaussian variables we show that the covariance matrix corresponding to the identity (or the negative of the identity) transformations majorizes covariance matrices of non-identity functions.

Dimensionality Reduction

Network Maximal Correlation

no code implementations 15 Jun 2016 Soheil Feizi, Ali Makhdoumi, Ken Duffy, Muriel Medard, Manolis Kellis

For jointly Gaussian variables, we show that under some conditions the NMC optimization is an instance of the Max-Cut problem.

graph partitioning

Maximum Likelihood Latent Space Embedding of Logistic Random Dot Product Graphs

no code implementations 3 Oct 2015 Luke O'Connor, Muriel Médard, Soheil Feizi

A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases.

Biclustering Using Message Passing

no code implementations NeurIPS 2014 Luke O'Connor, Soheil Feizi

Biclustering is the analog of clustering on a bipartite graph.
