
no code implementations • 5 Aug 2022 • Wenxiao Wang, Alexander Levine, Soheil Feizi

Deep Partition Aggregation (DPA) and its extension, Finite Aggregation (FA), are recent approaches for provable defenses against data poisoning: they predict through the majority vote of many base models trained on different subsets of the training set using a given learner.
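The majority-vote aggregation behind DPA can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the index-based partitioning and the nearest-mean base learner are assumptions chosen for brevity; the key point is that poisoning a few training samples can affect only a few base models, hence only a few votes.

```python
import numpy as np

def partition_data(X, y, k):
    """Split the training set into k disjoint subsets by index (deterministic)."""
    idx = [np.flatnonzero(np.arange(len(X)) % k == i) for i in range(k)]
    return [(X[i], y[i]) for i in idx]

class NearestMeanClassifier:
    """Toy base learner: predicts the class with the nearest class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def dpa_predict(models, X):
    """Majority vote over base models: flipping one vote needs poisoning one partition."""
    votes = np.stack([m.predict(X) for m in models])   # shape (k, n_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

For example, training three base models on disjoint thirds of a two-class dataset and voting gives a prediction that an adversary cannot change without corrupting at least half the partitions.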

no code implementations • 21 Jun 2022 • Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang

Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.

no code implementations • 5 Jun 2022 • Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, Tomas Pfister

We introduce a novel framework, Interpretable Mixture of Experts (IME), that provides interpretability for structured data while preserving accuracy.

no code implementations • 28 Mar 2022 • Sahil Singla, Mazda Moayeri, Soheil Feizi

Deep neural networks can be unreliable in the real world especially when they heavily use spurious features for their predictions.

1 code implementation • 16 Mar 2022 • Alexander Levine, Soheil Feizi

Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks.

no code implementations • 3 Mar 2022 • Neha Kalibhat, Kanika Narang, Liang Tan, Hamed Firooz, Maziar Sanjabi, Soheil Feizi

Next, we propose a sample-wise Self-Supervised Representation Quality Score (or, Q-Score) that can be computed without access to any label information.

no code implementations • 5 Feb 2022 • Wenxiao Wang, Alexander Levine, Soheil Feizi

DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions.

no code implementations • 28 Jan 2022 • Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi

Certified robustness in machine learning has primarily focused on adversarial perturbations of the input with a fixed attack budget for each point in the data distribution.

no code implementations • CVPR 2022 • Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi

While datasets with single-label supervision have propelled rapid advances in image classification, additional annotations are necessary in order to quantitatively assess how models make predictions.

no code implementations • 12 Dec 2021 • Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa

Under JSTM, we develop novel adversarial attacks and defenses.

no code implementations • 9 Dec 2021 • Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa

In other words, we can make a weak model more robust with the help of a strong teacher model.

no code implementations • CVPR 2022 • Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi

In addition, we design a robust shape completion algorithm, which is guaranteed to remove the entire patch from the images if the outputs of the patch segmenter are within a certain Hamming distance of the ground-truth patch masks.

1 code implementation • NeurIPS 2021 • Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi

In this paper, we tackle this issue and introduce a saliency guided training procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model.

no code implementations • 21 Oct 2021 • Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi

To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning.

1 code implementation • 8 Oct 2021 • Sahil Singla, Soheil Feizi

Our methodology is based on this key idea: to identify spurious or core visual features used in model predictions, we identify spurious or core neural features (penultimate layer neurons of a robust model) via limited human supervision (e.g., using the top 5 activating images per feature).
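The "top activating images per feature" step described above reduces to a ranking over an activation matrix. A minimal sketch, assuming a precomputed `(n_images, n_neurons)` matrix of penultimate-layer activations (the matrix and function name are illustrative, not from the paper's code):

```python
import numpy as np

def top_activating_images(features, k=5):
    """For each penultimate-layer neuron, return the indices of the k images
    that activate it most strongly (features: (n_images, n_neurons) array)."""
    order = np.argsort(-features, axis=0)   # descending activation per neuron
    return order[:k].T                      # (n_neurons, k) image indices
```

A human annotator would then inspect each neuron's k images to label the feature as spurious or core.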

1 code implementation • 7 Oct 2021 • Priyatham Kattakinda, Soheil Feizi

Standard training datasets for deep learning often contain objects in common settings (e.g., "a horse on grass" or "a ship in water") since they are usually collected by randomly scraping the web.

no code implementations • 29 Sep 2021 • Neha Mukund Kalibhat, Yogesh Balaji, C. Bayan Bruss, Soheil Feizi

In fact, training these methods on a combination of several domains often degrades the quality of learned representations compared to the models trained on a single domain.

no code implementations • ICLR 2022 • Sahil Singla, Soheil Feizi

Focusing on image classification, we define causal attributes as the set of visual features that are always a part of the object, while spurious attributes are the ones that are likely to co-occur with the object but are not a part of it (e.g., the attribute "fingers" for the class "band aid").

no code implementations • ICCV 2021 • Mazda Moayeri, Soheil Feizi

In this paper, we propose a self-supervised method to detect adversarial attacks and classify them to their respective threat models, based on a linear model operating on the embeddings from a pre-trained self-supervised encoder.

1 code implementation • ICLR 2022 • Sahil Singla, Surbhi Singla, Soheil Feizi

While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation.

no code implementations • ICLR 2022 • Aounon Kumar, Alexander Levine, Soheil Feizi

Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting.

1 code implementation • 24 May 2021 • Sahil Singla, Soheil Feizi

Then, we use the Taylor series expansion of the Jacobian exponential to construct the SOC layer that is orthogonal.
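The construction above exploits a standard identity: the matrix exponential of a skew-symmetric matrix is exactly orthogonal, and a truncated Taylor series approximates it. A minimal dense-matrix sketch of that idea (the SOC paper applies it to convolution Jacobians; this plain-matrix version is a simplification for illustration):

```python
import numpy as np

def orthogonal_from_skew(W, terms=12):
    """Approximate exp(A) for A = W - W^T via a truncated Taylor series.
    A is skew-symmetric (A^T = -A), so exp(A)^T exp(A) = exp(-A) exp(A) = I,
    i.e., the result is (approximately) orthogonal."""
    A = W - W.T
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k          # accumulates A^k / k!
        out = out + term
    return out
```

With enough Taylor terms, the output's deviation from orthogonality shrinks factorially, which is what makes a layer-wise Lipschitz guarantee practical.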

no code implementations • 12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.

1 code implementation • 17 Mar 2021 • Alexander Levine, Soheil Feizi

To the best of our knowledge, this is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model while allowing for an arbitrary classifier (i.e., a deep model) to be used as a base classifier and without requiring an exponential number of smoothing samples.

1 code implementation • ICCV 2021 • Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi

In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.

no code implementations • ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi

In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.

no code implementations • ICLR 2021 • Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.

no code implementations • ICLR 2021 • Alexander Levine, Soheil Feizi

Against general poisoning attacks, where no prior certified defenses exist, DPA can certify $\geq$ 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10.

no code implementations • ICLR 2021 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

1 code implementation • NeurIPS 2020 • Aya Abdelsalam Ismail, Mohamed Gunady, Héctor Corrada Bravo, Soheil Feizi

Saliency methods are used extensively to highlight the importance of input features in model predictions.

1 code implementation • 20 Oct 2020 • Alexander Levine, Aounon Kumar, Thomas Goldstein, Soheil Feizi

In this work, we show that there also exists a universal curvature-like bound for Gaussian random smoothing: given the exact value and gradient of a smoothed function, we compute a lower bound on the distance of a point to its closest adversarial example, called the Second-order Smoothing (SoS) robustness certificate.

2 code implementations • NeurIPS 2020 • Yogesh Balaji, Rama Chellappa, Soheil Feizi

To remedy this issue, robust formulations of OT with unbalanced marginal constraints have previously been proposed.

1 code implementation • 5 Oct 2020 • Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi

In this paper, we confirm the existence of winning tickets in deep generative models such as GANs and VAEs.

no code implementations • 24 Sep 2020 • Pirazh Khorramshahi, Hossein Souri, Rama Chellappa, Soheil Feizi

To tackle this issue, we take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.

no code implementations • NeurIPS 2020 • Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein

It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction.
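The top-two-probability certificate referenced here is the standard Gaussian randomized-smoothing bound of Cohen et al. (2019), $R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right)$. A minimal sketch of that computation (the function name is illustrative; this is the baseline certificate the entry builds on, not the paper's full method):

```python
from statistics import NormalDist

def certified_radius(p_top, p_runner_up, sigma):
    """Certified L2 radius from the smoothed classifier's top-two class
    probabilities under Gaussian smoothing with standard deviation sigma."""
    if p_top <= p_runner_up:
        return 0.0  # no certificate unless the top class strictly dominates
    phi_inv = NormalDist().inv_cdf  # standard normal inverse CDF
    return 0.5 * sigma * (phi_inv(p_top) - phi_inv(p_runner_up))
```

For instance, with sigma = 1 and class probabilities 0.9 vs. 0.1, the certified radius is about 1.28.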

no code implementations • NeurIPS 2020 • Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi

Using OM-ImageNet, we first show that adversarial training in the latent space of images improves both standard accuracy and robustness to on-manifold attacks.

no code implementations • 26 Jun 2020 • Alexander Levine, Soheil Feizi

Our defense against label-flipping attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition.

no code implementations • ICLR 2021 • Samyadeep Basu, Philip Pope, Soheil Feizi

Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation.

1 code implementation • 22 Jun 2020 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

1 code implementation • 17 Jun 2020 • Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson

In this paper, we argue that traditional notions of fairness that are only based on models' outputs are not sufficient when the model is vulnerable to adversarial attacks.

no code implementations • ICML 2020 • Sahil Singla, Soheil Feizi

Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network.

1 code implementation • 24 Mar 2020 • Gowthami Somepalli, Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, Soheil Feizi

Detecting out-of-distribution (OOD) samples is of paramount importance in all machine learning applications.

no code implementations • 2 Mar 2020 • Mucong Ding, Constantinos Daskalakis, Soheil Feizi

GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.

1 code implementation • NeurIPS 2020 • Alexander Levine, Soheil Feizi

In this paper, we introduce a certifiable defense against patch attacks that guarantees for a given image and patch attack size, no patch adversarial examples exist.

1 code implementation • ICML 2020 • Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi

Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius.

no code implementations • 25 Nov 2019 • Cassidy Laidlaw, Soheil Feizi

We explore adversarial robustness in the setting in which it is acceptable for a classifier to abstain (that is, output no class) on adversarial examples.

1 code implementation • 22 Nov 2019 • Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.

1 code implementation • 21 Nov 2019 • Alexander Levine, Soheil Feizi

This is comparable to the observed empirical robustness of unprotected classifiers on MNIST to modern L_0 attacks, demonstrating the tightness of the proposed robustness certificate.

no code implementations • 20 Nov 2019 • Phillip Pope, Yogesh Balaji, Soheil Feizi

Finally, using a hybrid adversarial training procedure, we significantly boost the robustness of these generative models.

no code implementations • ICML 2020 • Samyadeep Basu, Xuchen You, Soheil Feizi

Often we want to identify an influential group of training samples for a particular test prediction of a given machine learning model.

1 code implementation • NeurIPS 2019 • Shouvanik Chakrabarti, Yiming Huang, Tongyang Li, Soheil Feizi, Xiaodi Wu

The study of quantum generative models is well-motivated, not only because of its importance in quantum machine learning and quantum chemistry but also because of the perspective of its implementation on near-term quantum machines.

no code implementations • 23 Oct 2019 • Alexander Levine, Soheil Feizi

An example of an attack method based on a non-additive threat model is the Wasserstein adversarial attack proposed by Wong et al. (2019), where the distance between an image and its adversarial example is determined by the Wasserstein metric ("earth-mover distance") between their normalized pixel intensities.

1 code implementation • 29 Sep 2019 • Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson

Targeted clean-label data poisoning is a type of adversarial attack on machine learning systems in which an adversary injects a few correctly-labeled, minimally-perturbed samples into the training data, causing a model to misclassify a particular test sample during inference.

no code implementations • 25 Sep 2019 • Sahil Singla, Soheil Feizi

We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples.

no code implementations • 30 May 2019 • Samuel Barham, Soheil Feizi

SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities.
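The directional constraint described here can be illustrated with a small projection routine. This is a sketch of the stated idea, not the authors' SPGD code: `vocab` is assumed to hold nearby word embeddings, and the perturbation is projected onto the direction with highest cosine similarity to it.

```python
import numpy as np

def project_onto_best_direction(delta, x, vocab):
    """Project perturbation delta of embedding x onto the unit direction toward
    the vocabulary embedding whose direction has highest cosine similarity
    with delta (illustrative version of a directional regularization step)."""
    dirs = vocab - x                                        # directions to nearby embeddings
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    cos = dirs @ (delta / np.linalg.norm(delta))            # cosine similarities
    best = dirs[np.argmax(cos)]                             # most aligned direction
    return (delta @ best) * best                            # scalar projection onto it
```

Constraining perturbations this way keeps an adversarial embedding pointed toward an actual word rather than an arbitrary point in embedding space.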

1 code implementation • NeurIPS 2019 • Cassidy Laidlaw, Soheil Feizi

For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments.

no code implementations • 28 May 2019 • Alexander Levine, Sahil Singla, Soheil Feizi

Deep learning interpretation is essential to explain the reasoning behind model predictions.

2 code implementations • 23 May 2019 • Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student.

1 code implementation • 1 Feb 2019 • Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi

Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.

1 code implementation • 1 Feb 2019 • Yogesh Balaji, Rama Chellappa, Soheil Feizi

Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance.

no code implementations • 1 Feb 2019 • Sahil Singla, Soheil Feizi

These robustness certificates leverage the piece-wise linear structure of ReLU networks and use the fact that in a polyhedron around a given sample, the prediction function is linear.

no code implementations • 1 Feb 2019 • Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi

From our experiments, we observe a qualitative limit for GAN compression.

no code implementations • NeurIPS 2018 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.

1 code implementation • ICLR 2019 • Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi

Building on the success of deep learning, two modern approaches to learn a probability model from the data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).

no code implementations • ICLR 2019 • Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein

Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.

1 code implementation • NeurIPS 2017 • Soheil Feizi, Hamid Javadi, David Tse

Consider a dataset where data is collected on multiple features of multiple individuals over multiple times.

no code implementations • ICLR 2018 • Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse

Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data.

1 code implementation • 5 Oct 2017 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

Neural networks have been used prominently in several machine learning and statistics applications.

no code implementations • 17 Feb 2017 • Soheil Feizi, David Tse

For jointly Gaussian variables, we show that the covariance matrix corresponding to the identity (or the negative of the identity) transformations majorizes covariance matrices of non-identity functions.

no code implementations • 15 Jun 2016 • Soheil Feizi, Ali Makhdoumi, Ken Duffy, Muriel Medard, Manolis Kellis

For jointly Gaussian variables, we show that under some conditions the NMC optimization is an instance of the Max-Cut problem.

no code implementations • 3 Oct 2015 • Luke O'Connor, Muriel Médard, Soheil Feizi

A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases.

no code implementations • NeurIPS 2014 • Luke O'Connor, Soheil Feizi

Biclustering is the analog of clustering on a bipartite graph.

Papers With Code is a free resource with all data licensed under CC-BY-SA.