1 code implementation • 31 Oct 2024 • Keivan Rezaei, Khyathi Chandu, Soheil Feizi, Yejin Choi, Faeze Brahman, Abhilasha Ravichander
Large language models trained on web-scale corpora can memorize undesirable datapoints such as incorrect facts, copyrighted content or sensitive data.
no code implementations • 17 Oct 2024 • Mazda Moayeri, Vidhisha Balachandran, Varun Chandrasekaran, Safoora Yousefi, Thomas Fel, Soheil Feizi, Besmira Nushi, Neel Joshi, Vibhav Vineet
As models grow stronger, evaluations have become more complex, testing multiple skills in a single benchmark, and sometimes even in a single instance.
no code implementations • 19 Jun 2024 • Soumya Suvra Ghosal, Samyadeep Basu, Soheil Feizi, Dinesh Manocha
Notably, in a 16-shot setup, IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
no code implementations • 17 Jun 2024 • Donghyeon Joo, Ramyad Hadidi, Soheil Feizi, Bahar Asgari
In our case study of offloaded inference, we find that, due to the low bandwidth between storage devices and the GPU, the latency of transferring large model weights from their offloaded location to GPU memory becomes the critical bottleneck, with actual compute taking nearly 0% of runtime.
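To make the imbalance concrete, here is a back-of-the-envelope sketch; all numbers are illustrative assumptions, not measurements from the paper.

```python
# Rough estimate of why weight transfer dominates offloaded inference.
# Model size, bandwidth, and throughput are illustrative assumptions.
model_bytes = 14e9 * 2        # a hypothetical 14B-parameter model in fp16
ssd_to_gpu_bw = 7e9           # ~7 GB/s, an optimistic NVMe-to-GPU bandwidth
transfer_s = model_bytes / ssd_to_gpu_bw

gpu_flops = 300e12            # assumed sustained fp16 throughput
flops_per_token = 2 * 14e9    # ~2 FLOPs per parameter per generated token
compute_s = flops_per_token / gpu_flops

print(f"transfer: {transfer_s:.1f} s, compute/token: {compute_s * 1e6:.0f} us")
# Transfer takes seconds while each token's compute takes microseconds,
# so compute is a vanishing fraction of end-to-end runtime.
```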
1 code implementation • 12 Jun 2024 • Arman Zarei, Keivan Rezaei, Samyadeep Basu, Mehrdad Saberi, Mazda Moayeri, Priyatham Kattakinda, Soheil Feizi
We also show that re-weighting the erroneous attention contributions in CLIP can lead to improved compositional performance; however, these improvements are often less significant than those achieved by solely learning a linear projection head, suggesting that erroneous attentions are only a minor error source.
no code implementations • 6 Jun 2024 • Samyadeep Basu, Martin Grayson, Cecily Morrison, Besmira Nushi, Soheil Feizi, Daniela Massiceti
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress.
1 code implementation • 5 Jun 2024 • Mehrdad Saberi, Vinu Sankar Sadasivan, Arman Zarei, Hessam Mahdavifar, Soheil Feizi
Identifying the origin of data is crucial for data provenance, with applications including data ownership protection, media forensics, and detecting AI-generated content.
1 code implementation • 4 Jun 2024 • Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, Abhinav Bhatele
Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used.
1 code implementation • 3 Jun 2024 • Sriram Balasubramanian, Samyadeep Basu, Soheil Feizi
To this end, we introduce a general framework which can identify the roles of various components in ViTs beyond CLIP.
no code implementations • 26 May 2024 • Neha Kalibhat, Priyatham Kattakinda, Arman Zarei, Nikita Seleznev, Samuel Sharpe, Senthil Kumar, Soheil Feizi
Vision transformers have established a precedent of patchifying images into uniformly-sized chunks before processing.
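For reference, the uniform patchification this work revisits can be sketched in a few lines; the patch size and shapes below are common ViT defaults, used only for illustration.

```python
import torch

# Minimal sketch of uniform patchification: a (B, C, H, W) image is split
# into fixed-size, non-overlapping patches and flattened into tokens.
def patchify(images: torch.Tensor, patch: int = 16) -> torch.Tensor:
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)
    x = x.permute(0, 2, 3, 1, 4, 5)              # (B, H/p, W/p, C, p, p)
    return x.reshape(B, -1, C * patch * patch)   # (B, num_patches, patch_dim)

tokens = patchify(torch.randn(2, 3, 224, 224))   # -> (2, 196, 768)
```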
no code implementations • 21 May 2024 • Mihai Christodorescu, Ryan Craven, Soheil Feizi, Neil Gong, Mia Hoffmann, Somesh Jha, Zhengyuan Jiang, Mehrdad Saberi Kamarposhti, John Mitchell, Jessica Newman, Emelia Probasco, Yanjun Qi, Khawaja Shams, Matthew Turek
The interplay between legislation and technology is a vast topic, and we do not claim that this paper offers a comprehensive treatment of it.
1 code implementation • 2 May 2024 • Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Ryan Rossi, Cherry Zhao, Vlad Morariu, Varun Manjunatha, Soheil Feizi
To address this issue, we introduce the concept of Mechanistic Localization in text-to-image models, where knowledge about various visual attributes (e.g., "style", "objects", "facts") can be mechanistically localized to a small fraction of layers in the UNet, thus facilitating efficient model editing.
no code implementations • 11 Apr 2024 • Mazda Moayeri, Samyadeep Basu, Sriram Balasubramanian, Priyatham Kattakinda, Atoosa Chengini, Robert Brauneis, Soheil Feizi
Recent text-to-image generative models such as Stable Diffusion are extremely adept at mimicking and generating copyrighted content, raising concerns amongst artists that their unique styles may be improperly copied.
1 code implementation • 5 Mar 2024 • Hamid Kazemi, Atoosa Chegini, Jonas Geiping, Soheil Feizi, Tom Goldstein
We employ an inversion-based approach to examine CLIP models.
1 code implementation • 23 Feb 2024 • Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, Soheil Feizi
Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack.
no code implementations • 9 Dec 2023 • Atoosa Chegini, Soheil Feizi
One common reason for these failures is the occurrence of objects in backgrounds that are rarely seen during training.
no code implementations • 27 Nov 2023 • Jiang Liu, Chen Wei, Yuxiang Guo, Heng Yu, Alan Yuille, Soheil Feizi, Chun Pong Lau, Rama Chellappa
We propose Instruct2Attack (I2A), a language-guided semantic attack that generates semantically meaningful perturbations according to free-form language instructions.
no code implementations • 11 Nov 2023 • Soheil Feizi, Mohammadtaghi Hajiaghayi, Keivan Rezaei, Suho Shin
This paper explores the potential for leveraging Large Language Models (LLMs) in the realm of online advertising systems.
1 code implementation • NeurIPS 2023 • Sriram Balasubramanian, Gaurang Sriramanan, Vinu Sankar Sadasivan, Soheil Feizi
We further observe that the source image is linearly connected by a high-confidence path to these inputs, uncovering a star-like structure for level sets of deep networks.
no code implementations • 20 Oct 2023 • Samyadeep Basu, Nanxuan Zhao, Vlad Morariu, Soheil Feizi, Varun Manjunatha
We adapt Causal Mediation Analysis for text-to-image models and trace knowledge about distinct visual attributes to various (causal) components in the (i) UNet and (ii) text-encoder of the diffusion model.
no code implementations • 3 Oct 2023 • Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi
From both the human study and automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text, and SINE are the top-performing methods averaged across different edit types; however, only Instruct-Pix2Pix and Null-Text are able to preserve original image properties; (ii) most editing methods fail at edits involving spatial operations (e.g., changing the position of an object).
1 code implementation • 29 Sep 2023 • Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, Soheil Feizi
Moreover, we show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones, damaging the reputation of the developers.
no code implementations • 29 Sep 2023 • Keivan Rezaei, Mehrdad Saberi, Mazda Moayeri, Soheil Feizi
To improve on these shortcomings, we propose a novel approach that prioritizes interpretability in this problem: we start by obtaining human-understandable concepts (tags) of images in the dataset and then analyze the model's behavior based on the presence or absence of combinations of these tags.
no code implementations • 7 Sep 2023 • Neha Kalibhat, Sam Sharpe, Jeremy Goodsitt, Bayan Bruss, Soheil Feizi
Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization to unseen domains.
1 code implementation • 6 Sep 2023 • Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil Feizi, Himabindu Lakkaraju
We defend against three attack modes: i) adversarial suffix, where an adversarial sequence is appended at the end of a harmful prompt; ii) adversarial insertion, where the adversarial sequence is inserted anywhere in the middle of the prompt; and iii) adversarial infusion, where adversarial tokens are inserted at arbitrary positions in the prompt, not necessarily as a contiguous block.
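A minimal sketch of the three placements, operating on token lists; `adv` stands in for an optimized adversarial token sequence, and the optimization itself is not shown.

```python
import random

# Three ways adversarial tokens can be placed in a prompt (sketch).
def suffix(prompt: list, adv: list) -> list:
    return prompt + adv                       # appended at the end

def insertion(prompt: list, adv: list) -> list:
    i = random.randrange(len(prompt) + 1)
    return prompt[:i] + adv + prompt[i:]      # one contiguous block anywhere

def infusion(prompt: list, adv: list) -> list:
    out = list(prompt)
    for tok in adv:                           # tokens scattered independently
        out.insert(random.randrange(len(out) + 1), tok)
    return out
```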
no code implementations • 28 Aug 2023 • Clark Barrett, Brad Boyd, Elie Burzstein, Nicholas Carlini, Brad Chen, Jihye Choi, Amrita Roy Chowdhury, Mihai Christodorescu, Anupam Datta, Soheil Feizi, Kathleen Fisher, Tatsunori Hashimoto, Dan Hendrycks, Somesh Jha, Daniel Kang, Florian Kerschbaum, Eric Mitchell, John Mitchell, Zulfikar Ramzan, Khawaja Shams, Dawn Song, Ankur Taly, Diyi Yang
However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
1 code implementation • 20 Jul 2023 • Neha Kalibhat, Shweta Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
Although many existing approaches interpret features independently, we observe that in state-of-the-art self-supervised and supervised models, less than 20% of the representation space can be explained by individual features.
no code implementations • 18 Jul 2023 • Samyadeep Basu, Shell Xu Hu, Maziar Sanjabi, Daniela Massiceti, Soheil Feizi
This work underscores the potential of well-designed distillation objectives from generative models to enhance contrastive image-text models with improved visio-linguistic reasoning capabilities.
no code implementations • 28 Jun 2023 • Wenxiao Wang, Soheil Feizi
The increasing access to data poses both opportunities and risks in deep learning, as one can manipulate the behaviors of deep learning models with malicious training samples.
1 code implementation • 10 May 2023 • Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi
We observe that the mapping between an image's representation in one model to its representation in another can be learned surprisingly well with just a linear layer, even across diverse models.
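A hedged sketch of the underlying measurement: fit the linear map by least squares between two feature matrices. The shapes and the R² readout are illustrative assumptions; real penultimate-layer embeddings of the same images would replace the random placeholders.

```python
import numpy as np

# Learn a single linear map W taking model A's features to model B's.
feats_a = np.random.randn(1000, 512)    # placeholder features from model A
feats_b = np.random.randn(1000, 768)    # placeholder features from model B

W, *_ = np.linalg.lstsq(feats_a, feats_b, rcond=None)   # W: (512, 768)
mapped = feats_a @ W
resid = ((mapped - feats_b) ** 2).sum()
total = ((feats_b - feats_b.mean(0)) ** 2).sum()
print("R^2 of the linear map:", 1 - resid / total)      # high for real features
```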
no code implementations • 4 Apr 2023 • Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi
Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call AttnScale) along with a domain-residual adapter (DRA) module leads to state-of-the-art performance (while being $\sim 9\times$ more parameter-efficient) on MD.
no code implementations • 28 Mar 2023 • Aounon Kumar, Vinu Sankar Sadasivan, Soheil Feizi
Robustness certificates based on the assumption of independent input samples are not directly applicable in such scenarios.
1 code implementation • 20 Mar 2023 • Shoumik Saha, Wenxiao Wang, Yigitcan Kaya, Soheil Feizi, Tudor Dumitras
After showing how DRSM is theoretically robust against attacks with contiguous adversarial bytes, we verify its performance and certified robustness experimentally, where we observe only marginal accuracy drops as the cost of robustness.
1 code implementation • 17 Mar 2023 • Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, Soheil Feizi
In particular, we develop a recursive paraphrasing attack to apply to AI-generated text, which can break a whole range of detectors, including those using watermarking schemes as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors.
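The recursion itself is simple to sketch; `paraphrase_fn` below is a stand-in for any paraphrasing model, not an interface from the paper.

```python
# Recursive paraphrasing (sketch): repeatedly feed the paraphraser's output
# back into itself; each pass further distorts detector signals.
def recursive_paraphrase(text: str, paraphrase_fn, rounds: int = 3) -> str:
    for _ in range(rounds):
        text = paraphrase_fn(text)
    return text
```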
1 code implementation • CVPR 2023 • Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi
Here, ERM on the clean training data achieves a clean test accuracy of 80.66%.
no code implementations • NeurIPS 2023 • Wenxiao Wang, Soheil Feizi
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
2 code implementations • 5 Feb 2023 • Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi
Building on this approach, we propose the DPA+ROE and FA+ROE defense methods, based on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work.
1 code implementation • ICCV 2023 • Sriram Balasubramanian, Soheil Feizi
In this work, we propose a new masking method for CNNs, which we call layer masking, that largely reduces the missingness bias caused by masking.
no code implementations • 18 Nov 2022 • Priyatham Kattakinda, Alexander Levine, Soheil Feizi
Using the validation set, we evaluate several popular DNN image classifiers and find that the classification performance of models generally suffers on our background diverse images.
1 code implementation • 15 Nov 2022 • Sahil Singla, Soheil Feizi
In this work, we reduce this gap by introducing (a) a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer MLP, which significantly improves their performance for both standard and provably robust accuracy, (b) a method to significantly reduce the training time per epoch for Skew Orthogonal Convolution (SOC) layers (>30% reduction for deeper networks), and (c) a class of pooling layers using the mathematical property that the $\ell_2$ distance of an input to a manifold is 1-Lipschitz.
no code implementations • 15 Sep 2022 • Mazda Moayeri, Kiarash Banihashem, Soheil Feizi
In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model's reliance on spurious features; (ii) for $\ell_\infty$ adversarial training, spurious reliance only occurs when the scale of the spurious features is larger than that of the core features; and (iii) adversarial training can have the unintended consequence of reducing distributional robustness, specifically when spurious correlations are changed in the new test domain.
1 code implementation • 28 Aug 2022 • Alexander Levine, Soheil Feizi
We empirically show that this can improve the performance of goal-conditioned off-policy reinforcement learning when the space of goals is high-dimensional.
1 code implementation • 5 Aug 2022 • Wenxiao Wang, Alexander Levine, Soheil Feizi
Deep Partition Aggregation (DPA) and its extension, Finite Aggregation (FA), are recent approaches for provable defenses against data poisoning, in which prediction is made through the majority vote of many base models trained on different subsets of the training set using a given learner.
no code implementations • 21 Jun 2022 • Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh, Furong Huang
Communication is important in many multi-agent reinforcement learning (MARL) problems for agents to share information and make good decisions.
no code implementations • 5 Jun 2022 • Aya Abdelsalam Ismail, Sercan Ö. Arik, Jinsung Yoon, Ankur Taly, Soheil Feizi, Tomas Pfister
In addition to constituting a standalone inherently interpretable architecture, IME holds the promise of being integrated with existing DNNs to offer interpretability for a subset of samples while maintaining the DNNs' accuracy.
no code implementations • 28 Mar 2022 • Sahil Singla, Mazda Moayeri, Soheil Feizi
Deep neural networks can be unreliable in the real world, especially when they rely heavily on spurious features for their predictions.
1 code implementation • 16 Mar 2022 • Alexander Levine, Soheil Feizi
Our approach builds on a recent work, Levine and Feizi (2021), which provides a provable defense against L_1 attacks.
no code implementations • 3 Mar 2022 • Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
Fine-tuning with Q-Score regularization can boost the linear probing accuracy of SSL models by up to 5.8% on ImageNet-100 and 3.7% on ImageNet-1K compared to their baselines.
1 code implementation • 5 Feb 2022 • Wenxiao Wang, Alexander Levine, Soheil Feizi
DPA predicts through an aggregation of base classifiers trained on disjoint subsets of data, thus restricting its sensitivity to dataset distortions.
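A minimal sketch of this aggregation; the hashing partition and the `base_models` interface are illustrative assumptions, and the per-partition training step is elided.

```python
import hashlib
from collections import Counter

# DPA (sketch): a deterministic hash sends each training example to one of
# k disjoint partitions, one base classifier is trained per partition, and
# prediction is a majority vote over the base classifiers.
def partition_id(example_bytes: bytes, k: int) -> int:
    return int(hashlib.sha256(example_bytes).hexdigest(), 16) % k

def dpa_predict(x, base_models):
    votes = Counter(model(x) for model in base_models)
    return votes.most_common(1)[0][0]

# A poisoned example falls in exactly one partition and so can flip at most
# one vote; the top-two vote margin therefore yields a poisoning certificate.
print(dpa_predict(0, [lambda x: x % 2, lambda x: 0, lambda x: 1]))  # -> 0
```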
1 code implementation • 28 Jan 2022 • Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
Certified robustness in machine learning has primarily focused on adversarial perturbations of the input with a fixed attack budget for each point in the data distribution.
1 code implementation • CVPR 2022 • Mazda Moayeri, Phillip Pope, Yogesh Balaji, Soheil Feizi
While datasets with single-label supervision have propelled rapid advances in image classification, additional annotations are necessary in order to quantitatively assess how models make predictions.
no code implementations • 12 Dec 2021 • Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
Under JSTM, we develop novel adversarial attacks and defenses.
no code implementations • 9 Dec 2021 • Jiang Liu, Chun Pong Lau, Hossein Souri, Soheil Feizi, Rama Chellappa
In other words, we can make a weak model more robust with the help of a strong teacher model.
1 code implementation • CVPR 2022 • Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi
In addition, we design a robust shape completion algorithm, which is guaranteed to remove the entire patch from the images if the outputs of the patch segmenter are within a certain Hamming distance of the ground-truth patch masks.
1 code implementation • NeurIPS 2021 • Aya Abdelsalam Ismail, Héctor Corrada Bravo, Soheil Feizi
In this paper, we tackle this issue and introduce a saliency-guided training procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model.
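One plausible reading of such a training step, sketched under stated assumptions: the masking fraction, the zero masking value, and the KL agreement term below are illustrative choices, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

# One saliency-guided training step (sketch).
def saliency_guided_step(model, x, y, mask_frac=0.5, lam=1.0):
    # 1) gradient saliency of the loss w.r.t. the input
    x_req = x.clone().requires_grad_(True)
    sal = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0].abs()
    # 2) zero out the least-salient input positions
    k = int(mask_frac * x[0].numel())
    idx = sal.flatten(1).topk(k, largest=False).indices
    x_masked = x.flatten(1).clone()
    x_masked.scatter_(1, idx, 0.0)
    x_masked = x_masked.view_as(x)
    # 3) task loss plus agreement between original and masked predictions,
    #    pushing the model to rely on high-saliency features
    logits, logits_masked = model(x), model(x_masked)
    kl = F.kl_div(F.log_softmax(logits_masked, -1),
                  F.softmax(logits, -1).detach(), reduction="batchmean")
    return F.cross_entropy(logits, y) + lam * kl
```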
no code implementations • 21 Oct 2021 • Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi
To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning.
2 code implementations • 8 Oct 2021 • Sahil Singla, Soheil Feizi
Our methodology is based on this key idea: to identify spurious or core visual features used in model predictions, we identify spurious or core neural features (penultimate-layer neurons of a robust model) via limited human supervision (e.g., using the top 5 activating images per feature).
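The neuron-level selection step can be sketched directly; the activation matrix here is a random placeholder for real penultimate-layer features.

```python
import torch

# For each penultimate-layer neuron ("neural feature"), collect its top-5
# most activating images as candidates for human inspection.
penult = torch.randn(10_000, 2048)               # (num_images, num_neurons)
top5_image_ids = penult.topk(5, dim=0).indices   # (5, num_neurons)
```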
1 code implementation • 7 Oct 2021 • Priyatham Kattakinda, Soheil Feizi
Standard training datasets for deep learning often contain objects in common settings (e.g., "a horse on grass" or "a ship in water") since they are usually collected by randomly scraping the web.
no code implementations • 29 Sep 2021 • Neha Mukund Kalibhat, Yogesh Balaji, C. Bayan Bruss, Soheil Feizi
In fact, training these methods on a combination of several domains often degrades the quality of learned representations compared to the models trained on a single domain.
no code implementations • ICLR 2022 • Sahil Singla, Soheil Feizi
Focusing on image classification, we define causal attributes as the set of visual features that are always a part of the object, while spurious attributes are those that are likely to co-occur with the object but are not a part of it (e.g., the attribute "fingers" for the class "band aid").
no code implementations • ICCV 2021 • Mazda Moayeri, Soheil Feizi
In this paper, we propose a self-supervised method to detect adversarial attacks and classify them to their respective threat models, based on a linear model operating on the embeddings from a pre-trained self-supervised encoder.
1 code implementation • ICLR 2022 • Sahil Singla, Surbhi Singla, Soheil Feizi
While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation.
no code implementations • ICLR 2022 • Aounon Kumar, Alexander Levine, Soheil Feizi
Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting.
1 code implementation • 24 May 2021 • Sahil Singla, Soheil Feizi
Then, we use the Taylor series expansion of the Jacobian exponential to construct the SOC layer that is orthogonal.
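A simplified illustration of the idea, reduced to a 1x1 convolution (per-pixel channel mixing) so the orthogonality is easy to verify; the full method applies this to convolution kernels, so treat this as a sketch, not the paper's implementation.

```python
import torch

# SOC idea (sketch): make the mixing matrix skew-symmetric (J^T = -J), so
# exp(J) is orthogonal, and apply exp(J) via a truncated Taylor series:
# x + Jx/1! + J^2 x/2! + ...
def soc_1x1(x: torch.Tensor, w: torch.Tensor, terms: int = 8) -> torch.Tensor:
    skew = w - w.t()                 # skew-symmetric mixing matrix
    out, term = x, x
    for k in range(1, terms):
        term = torch.einsum("oc,bchw->bohw", skew, term) / k   # J^k x / k!
        out = out + term
    return out

x = torch.randn(2, 8, 16, 16)
y = soc_1x1(x, 0.1 * torch.randn(8, 8))
# Orthogonal maps preserve norms (up to truncation error of the series):
print(torch.allclose(x.flatten(1).norm(dim=1), y.flatten(1).norm(dim=1), atol=1e-3))
```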
no code implementations • 12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
1 code implementation • 17 Mar 2021 • Alexander Levine, Soheil Feizi
To the best of our knowledge, this is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model while allowing for an arbitrary classifier (i.e., a deep model) to be used as a base classifier and without requiring an exponential number of smoothing samples.
1 code implementation • ICCV 2021 • Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi
In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.
no code implementations • ICLR 2021 • Alexander Levine, Soheil Feizi
Against general poisoning attacks, where no prior certified defenses exist, DPA can certify $\geq$ 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10.
no code implementations • ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.
no code implementations • ICLR 2021 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi
We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.
no code implementations • ICLR 2021 • Sahil Singla, Soheil Feizi
Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.
1 code implementation • NeurIPS 2020 • Aya Abdelsalam Ismail, Mohamed Gunady, Héctor Corrada Bravo, Soheil Feizi
Saliency methods are used extensively to highlight the importance of input features in model predictions.
1 code implementation • 20 Oct 2020 • Alexander Levine, Aounon Kumar, Thomas Goldstein, Soheil Feizi
In this work, we show that there also exists a universal curvature-like bound for Gaussian random smoothing: given the exact value and gradient of a smoothed function, we compute a lower bound on the distance of a point to its closest adversarial example, called the Second-order Smoothing (SoS) robustness certificate.
2 code implementations • NeurIPS 2020 • Yogesh Balaji, Rama Chellappa, Soheil Feizi
To remedy this issue, robust formulations of OT with unbalanced marginal constraints have previously been proposed.
1 code implementation • 5 Oct 2020 • Neha Mukund Kalibhat, Yogesh Balaji, Soheil Feizi
In this paper, we confirm the existence of winning tickets in deep generative models such as GANs and VAEs.
no code implementations • 24 Sep 2020 • Pirazh Khorramshahi, Hossein Souri, Rama Chellappa, Soheil Feizi
To tackle this issue, we take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
no code implementations • NeurIPS 2020 • Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein
It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction.
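The certificate referenced here is the standard Gaussian-smoothing radius; as a sketch, with a lower bound p_a on the top-class probability and an upper bound p_b on the runner-up probability under noise N(0, sigma^2 I):

```python
from scipy.stats import norm

# Smoothed prediction is provably constant within l2 radius
#     R = (sigma / 2) * (Phi^{-1}(p_a) - Phi^{-1}(p_b)),
# where Phi^{-1} is the inverse Gaussian CDF.
def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

print(certified_radius(p_a=0.8, p_b=0.15, sigma=0.5))  # ~0.47
```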
no code implementations • NeurIPS 2020 • Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi
Using OM-ImageNet, we first show that adversarial training in the latent space of images improves both standard accuracy and robustness to on-manifold attacks.
no code implementations • 26 Jun 2020 • Alexander Levine, Soheil Feizi
Our defense against label-flipping attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition.
no code implementations • ICLR 2021 • Samyadeep Basu, Philip Pope, Soheil Feizi
Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation.
2 code implementations • 22 Jun 2020 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi
We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.
1 code implementation • 17 Jun 2020 • Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson
In this paper, we argue that traditional notions of fairness that are only based on models' outputs are not sufficient when the model is vulnerable to adversarial attacks.
no code implementations • ICML 2020 • Sahil Singla, Soheil Feizi
Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network.
1 code implementation • 24 Mar 2020 • Gowthami Somepalli, Yexin Wu, Yogesh Balaji, Bhanukiran Vinzamuri, Soheil Feizi
Detecting out-of-distribution (OOD) samples is of paramount importance in all machine learning applications.
no code implementations • 2 Mar 2020 • Mucong Ding, Constantinos Daskalakis, Soheil Feizi
GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.
1 code implementation • NeurIPS 2020 • Alexander Levine, Soheil Feizi
In this paper, we introduce a certifiable defense against patch attacks that guarantees for a given image and patch attack size, no patch adversarial examples exist.
1 code implementation • ICML 2020 • Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi
Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius.
no code implementations • 25 Nov 2019 • Cassidy Laidlaw, Soheil Feizi
We explore adversarial robustness in the setting in which it is acceptable for a classifier to abstain, that is, to output no class, on adversarial examples.
1 code implementation • 22 Nov 2019 • Sahil Singla, Soheil Feizi
Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.
1 code implementation • 21 Nov 2019 • Alexander Levine, Soheil Feizi
This is comparable to the observed empirical robustness of unprotected classifiers on MNIST to modern L_0 attacks, demonstrating the tightness of the proposed robustness certificate.
no code implementations • 20 Nov 2019 • Phillip Pope, Yogesh Balaji, Soheil Feizi
Finally, using a hybrid adversarial training procedure, we significantly boost the robustness of these generative models.
no code implementations • ICML 2020 • Samyadeep Basu, Xuchen You, Soheil Feizi
Often we want to identify an influential group of training samples in a particular test prediction for a given machine learning model.
1 code implementation • NeurIPS 2019 • Shouvanik Chakrabarti, Yiming Huang, Tongyang Li, Soheil Feizi, Xiaodi Wu
The study of quantum generative models is well-motivated, not only because of its importance in quantum machine learning and quantum chemistry but also because of the perspective of its implementation on near-term quantum machines.
no code implementations • 23 Oct 2019 • Alexander Levine, Soheil Feizi
An example of an attack method based on a non-additive threat model is the Wasserstein adversarial attack proposed by Wong et al. (2019), where the distance between an image and its adversarial example is determined by the Wasserstein metric ("earth-mover distance") between their normalized pixel intensities.
1 code implementation • 29 Sep 2019 • Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson
Targeted clean-label data poisoning is a type of adversarial attack on machine learning systems in which an adversary injects a few correctly-labeled, minimally-perturbed samples into the training data, causing a model to misclassify a particular test sample during inference.
no code implementations • 25 Sep 2019 • Sahil Singla, Soheil Feizi
We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples.
no code implementations • 30 May 2019 • Samuel Barham, Soheil Feizi
SPGD imposes a directional regularization constraint on input perturbations by projecting them onto the directions to nearby word embeddings with highest cosine similarities.
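A hedged sketch of that projection for a single token embedding; the shapes, the vocabulary matrix, and the function name are illustrative assumptions.

```python
import numpy as np

# Project a raw gradient perturbation onto the direction toward the nearby
# vocabulary embedding most cosine-aligned with it (SPGD's constraint, sketched).
def spgd_project(emb: np.ndarray, grad: np.ndarray, vocab: np.ndarray) -> np.ndarray:
    dirs = vocab - emb                                  # directions to other words
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cos = dirs @ (grad / np.linalg.norm(grad))
    best = dirs[np.argmax(cos)]                         # most-aligned direction
    return (grad @ best) * best                         # projection onto it

delta = spgd_project(np.zeros(300), np.random.randn(300), np.random.randn(50, 300))
```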
1 code implementation • NeurIPS 2019 • Cassidy Laidlaw, Soheil Feizi
For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments.
no code implementations • 28 May 2019 • Alexander Levine, Sahil Singla, Soheil Feizi
Deep learning interpretation is essential to explain the reasoning behind model predictions.
2 code implementations • 23 May 2019 • Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein
In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student.
1 code implementation • 1 Feb 2019 • Yogesh Balaji, Rama Chellappa, Soheil Feizi
Using the proposed normalized Wasserstein measure leads to significant performance gains for mixture distributions with imbalanced mixture proportions compared to the vanilla Wasserstein distance.
no code implementations • 1 Feb 2019 • Angeline Aguinaldo, Ping-Yeh Chiang, Alex Gain, Ameya Patil, Kolten Pearson, Soheil Feizi
From our experiments, we observe a qualitative limit on GAN compression.
no code implementations • 1 Feb 2019 • Sahil Singla, Soheil Feizi
These robustness certificates leverage the piece-wise linear structure of ReLU networks and use the fact that in a polyhedron around a given sample, the prediction function is linear.
1 code implementation • 1 Feb 2019 • Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi
Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.
no code implementations • NeurIPS 2018 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse
Neural networks have been used prominently in several machine learning and statistics applications.
1 code implementation • ICLR 2019 • Yogesh Balaji, Hamed Hassani, Rama Chellappa, Soheil Feizi
Building on the success of deep learning, two modern approaches to learn a probability model from the data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs).
no code implementations • ICLR 2019 • Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein
Using experiments, we explore the implications of theoretical guarantees for real-world problems and discuss how factors such as dimensionality and image complexity limit a classifier's robustness against adversarial examples.
1 code implementation • NeurIPS 2017 • Soheil Feizi, Hamid Javadi, David Tse
Consider a dataset where data is collected on multiple features of multiple individuals over multiple times.
no code implementations • ICLR 2018 • Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse
Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data.
1 code implementation • 5 Oct 2017 • Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse
Neural networks have been used prominently in several machine learning and statistics applications.
no code implementations • 17 Feb 2017 • Soheil Feizi, David Tse
For jointly Gaussian variables we show that the covariance matrix corresponding to the identity (or the negative of the identity) transformations majorizes covariance matrices of non-identity functions.
no code implementations • 15 Jun 2016 • Soheil Feizi, Ali Makhdoumi, Ken Duffy, Muriel Medard, Manolis Kellis
For jointly Gaussian variables, we show that under some conditions the NMC optimization is an instance of the Max-Cut problem.
no code implementations • 3 Oct 2015 • Luke O'Connor, Muriel Médard, Soheil Feizi
A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases.
no code implementations • NeurIPS 2014 • Luke O'Connor, Soheil Feizi
Biclustering is the analog of clustering on a bipartite graph.