no code implementations • 20 Dec 2023 • Sahil Singla, Yifan Wang
To overcome this assumption, we study sequential posted pricing in the bandit learning model, where the seller interacts with $n$ buyers over $T$ rounds: In each round the seller posts $n$ prices for the $n$ buyers and the first buyer with a valuation higher than the price takes the item.
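A minimal simulation of one such round (names and the tie-breaking rule are assumptions for illustration, not the paper's notation):

```python
import numpy as np

def posted_pricing_round(prices, valuations):
    """One round: buyers arrive in order; the first buyer whose valuation
    exceeds the posted price takes the item at that price."""
    for i, (p, v) in enumerate(zip(prices, valuations)):
        if v > p:
            return i, p          # buyer index and revenue for this round
    return None, 0.0             # item goes unsold

rng = np.random.default_rng(0)
n, prices = 3, np.array([0.5, 0.4, 0.3])
buyer, revenue = posted_pricing_round(prices, rng.uniform(size=n))
```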
no code implementations • 17 Nov 2022 • Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
Our Data-Centric Debugging (DCD) framework carefully creates a debug-train set by selecting images from $\mathcal{F}$ that are perceptually similar to the images in $\mathcal{E}_{sample}$.
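A sketch of the selection step under assumed embeddings (function and variable names are hypothetical, and the paper's notion of perceptual similarity may differ):

```python
import numpy as np

def select_debug_train(emb_F, emb_E_sample, k=10):
    """For each failure image in E_sample, take its k nearest neighbors in F
    by cosine similarity of embeddings; their union is the debug-train set."""
    F = emb_F / np.linalg.norm(emb_F, axis=1, keepdims=True)
    E = emb_E_sample / np.linalg.norm(emb_E_sample, axis=1, keepdims=True)
    sim = E @ F.T                          # (|E_sample|, |F|) similarities
    nn = np.argsort(-sim, axis=1)[:, :k]   # top-k most similar per failure
    return np.unique(nn)                   # indices into F
```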
no code implementations • 16 Nov 2022 • Khashayar Gatmiry, Thomas Kesselheim, Sahil Singla, Yifan Wang
The goal is to minimize the regret, which is the difference, over $T$ rounds, between the total value of the optimal algorithm that knows the distributions and the total value of our algorithm that learns the distributions from the partial feedback.
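Concretely, writing $v_t(\cdot)$ for the value collected in round $t$ (notation assumed here, not taken from the paper), this reads
$$\mathrm{Regret}(T) \;=\; \mathbb{E}\Big[\sum_{t=1}^{T} v_t(\mathrm{OPT})\Big] \;-\; \mathbb{E}\Big[\sum_{t=1}^{T} v_t(\mathrm{ALG})\Big].$$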
1 code implementation • 15 Nov 2022 • Sahil Singla, Soheil Feizi
In this work, we reduce this gap by introducing (a) a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer MLP, which significantly improves both their standard and provably robust accuracy, (b) a method to significantly reduce the training time per epoch for Skew Orthogonal Convolution (SOC) layers (>30% reduction for deeper networks), and (c) a class of pooling layers using the mathematical property that the $l_{2}$ distance of an input to a manifold is 1-Lipschitz.
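A minimal sketch of the pooling idea in (c): over non-overlapping windows, the per-window $l_2$ norm (the distance to the origin, a trivial manifold) is 1-Lipschitz, so the whole layer is too. The class name and window settings below are illustrative, not the paper's exact layer:

```python
import torch
import torch.nn.functional as F

class L2NormPool2d(torch.nn.Module):
    """Pool each non-overlapping k x k window to its l2 norm; since
    x -> ||x||_2 is 1-Lipschitz and the windows are disjoint, so is the layer."""
    def __init__(self, k=2):
        super().__init__()
        self.k = k

    def forward(self, x):
        sq = F.avg_pool2d(x ** 2, self.k) * (self.k * self.k)  # sum of squares per window
        return torch.sqrt(sq + 1e-12)                           # window l2 norm
```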
no code implementations • 28 Mar 2022 • Sahil Singla, Mazda Moayeri, Soheil Feizi
Deep neural networks can be unreliable in the real world, especially when they rely heavily on spurious features for their predictions.
2 code implementations • 8 Oct 2021 • Sahil Singla, Soheil Feizi
Our methodology is based on this key idea: to identify spurious or core \textit{visual features} used in model predictions, we identify spurious or core \textit{neural features} (penultimate-layer neurons of a robust model) via limited human supervision (e.g., using the top 5 activating images per feature).
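The inspection step reduces to a top-$k$ lookup over penultimate-layer activations; a sketch (array shapes assumed):

```python
import torch

def top_activating_images(acts: torch.Tensor, k: int = 5) -> torch.Tensor:
    """acts: (num_images, num_neurons) penultimate-layer activations.
    Returns (num_neurons, k) indices of each neuron's top-k activating
    images, which a human annotator then labels as spurious or core."""
    _, idx = acts.topk(k, dim=0)   # (k, num_neurons)
    return idx.t()
```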
no code implementations • ICLR 2022 • Sahil Singla, Soheil Feizi
Focusing on image classification, we define causal attributes as the set of visual features that are always a part of the object, while spurious attributes are the ones likely to {\it co-occur} with the object but not be a part of it (e.g., the attribute ``fingers'' for the class ``band aid'').
1 code implementation • ICLR 2022 • Sahil Singla, Surbhi Singla, Soheil Feizi
While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation.
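One way to test this property numerically (a sketch for a square layer, where orthogonality means $J J^\top = I$):

```python
import torch
from torch.autograd.functional import jacobian

def jacobian_is_orthogonal(layer, x, tol=1e-4):
    """For a layer mapping R^d -> R^d, check J(x) J(x)^T = I at input x;
    an orthogonal Jacobian preserves gradient norms during backpropagation."""
    J = jacobian(lambda z: layer(z), x)   # (d, d) Jacobian at x
    return torch.allclose(J @ J.T, torch.eye(J.shape[0]), atol=tol)

layer = torch.nn.Linear(8, 8, bias=False)
torch.nn.init.orthogonal_(layer.weight)   # orthogonal weight => orthogonal Jacobian
print(jacobian_is_orthogonal(layer, torch.randn(8)))
```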
1 code implementation • 24 May 2021 • Sahil Singla, Soheil Feizi
Then, we use the Taylor series expansion of the Jacobian exponential to construct the SOC layer, which is orthogonal.
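A minimal sketch of that construction: if the convolution $L$ is skew-symmetric as an operator ($L^\top = -L$), then $\exp(L)$ is orthogonal, and $\exp(L)x$ can be approximated by a truncated Taylor series. Kernel handling here is simplified relative to the paper's SOC layer:

```python
import torch
import torch.nn.functional as F

def soc_forward(x, kernel, terms=6):
    """Approximate exp(L)x = sum_k L^k x / k! where L is the convolution
    given by `kernel`; each Taylor term reuses the previous one."""
    out, term = x, x
    for k in range(1, terms):
        term = F.conv2d(term, kernel, padding=kernel.shape[-1] // 2) / k
        out = out + term
    return out

# skew-symmetric parameterization: subtract the operator transpose of the
# kernel (swap in/out channels and flip spatially)
W = torch.randn(4, 4, 3, 3)
kernel = W - W.flip(dims=(-2, -1)).transpose(0, 1)
y = soc_forward(torch.randn(1, 4, 8, 8), kernel)
```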
1 code implementation • ICCV 2021 • Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi
In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.
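For instance, softplus with temperature $\beta$, i.e. $\frac{1}{\beta}\log(1+e^{\beta x})$, has second derivative $\beta\,\sigma(\beta x)(1-\sigma(\beta x)) \le \beta/4$, so lowering $\beta$ lowers the curvature. A quick autograd check (illustrative, not the paper's experiment):

```python
import torch

beta = 2.0
x = torch.linspace(-4, 4, 101, requires_grad=True)
y = torch.nn.functional.softplus(x, beta=beta)
g = torch.autograd.grad(y.sum(), x, create_graph=True)[0]   # first derivative
h = torch.autograd.grad(g.sum(), x)[0]                      # second derivative
print(h.abs().max().item(), "<=", beta / 4)                 # curvature bound beta/4
```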
no code implementations • ICLR 2021 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi
We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.
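A sketch of such a distance in the LPIPS style (the feature extractor `f` and layer choices are assumptions; the paper's exact metric may differ):

```python
import torch

def neural_perceptual_distance(f, x1, x2):
    """f(x) returns a list of intermediate feature maps (N, C, H, W).
    Compare channel-normalized activations across layers, LPIPS-style."""
    d = 0.0
    for a, b in zip(f(x1), f(x2)):
        a = a / (a.norm(dim=1, keepdim=True) + 1e-10)   # unit-norm channel vectors
        b = b / (b.norm(dim=1, keepdim=True) + 1e-10)
        d = d + (a - b).pow(2).mean()
    return d
```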
no code implementations • ICLR 2021 • Sahil Singla, Soheil Feizi
Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.
1 code implementation • CVPR 2021 • Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, Eric Horvitz
Traditional evaluation metrics for learned models that report aggregate scores over a test set are insufficient for surfacing important and informative patterns of failure over features and instances.
no code implementations • 14 Oct 2020 • Thomas Kesselheim, Sahil Singla
We study online learning with vector costs ($\mathsf{OLVC}_p$) in both stochastic and adversarial arrival settings, and give a general procedure to reduce the problem from $d$ dimensions to a single dimension.
2 code implementations • 22 Jun 2020 • Cassidy Laidlaw, Sahil Singla, Soheil Feizi
We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.
1 code implementation • 17 Jun 2020 • Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson
In this paper, we argue that traditional notions of fairness that are only based on models' outputs are not sufficient when the model is vulnerable to adversarial attacks.
no code implementations • ICML 2020 • Sahil Singla, Soheil Feizi
Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network.
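For contrast with the paper's closed-form bound, curvature at a point can also be estimated empirically by power iteration on Hessian-vector products (an empirical estimate, not the paper's differentiable upper bound; names assumed, and `f` maps an input to a scalar such as a logit):

```python
import torch

def curvature_estimate(f, x, iters=20):
    """Estimate ||H(x)||_2, the largest Hessian eigenvalue magnitude of the
    scalar output f(x), via power iteration with Hessian-vector products."""
    x = x.clone().requires_grad_(True)
    g = torch.autograd.grad(f(x), x, create_graph=True)[0]
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        hv = torch.autograd.grad((g * v).sum(), x, retain_graph=True)[0]
        v = hv / (hv.norm() + 1e-12)
    return hv.norm().item()
```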
1 code implementation • 22 Nov 2019 • Sahil Singla, Soheil Feizi
Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.
no code implementations • 25 Sep 2019 • Sahil Singla, Soheil Feizi
We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples.
no code implementations • 28 May 2019 • Alexander Levine, Sahil Singla, Soheil Feizi
Deep learning interpretation is essential to explain the reasoning behind model predictions.
1 code implementation • 1 Feb 2019 • Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi
Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.
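A hypothetical sketch of that idea: learn one weight per feature group so the masked input keeps the target logit high, with an $l_1$ term driving unimportant groups to zero (all names and the optimization setup are illustrative assumptions, not the paper's exact procedure):

```python
import torch

def sparse_group_importance(model, x, target, groups, lam=0.01, steps=200):
    """groups: list of {0,1} tensors shaped like x, one per feature group.
    Returns a learned weight per group; larger weight => more important."""
    m = torch.ones(len(groups), requires_grad=True)
    opt = torch.optim.Adam([m], lr=0.05)
    for _ in range(steps):
        mask = sum(w * g for w, g in zip(m, groups))
        loss = -model(x * mask)[0, target] + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return m.detach()
```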
no code implementations • 1 Feb 2019 • Sahil Singla, Soheil Feizi
These robustness certificates leverage the piecewise-linear structure of ReLU networks and the fact that, within a polyhedron around a given sample, the prediction function is linear.
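The resulting certificate has a one-line form: inside the activation polytope the logits are linear, so the distance to the decision boundary is at least the margin divided by the gradient norm of the margin. A sketch (the radius is only valid while the ball stays inside the polytope; a full certificate must also account for region boundaries):

```python
import torch

def local_linear_certificate(model, x):
    """Lower-bound the l2 distance from x (a batch of one) to the decision
    boundary of a ReLU network within its current linear region."""
    x = x.clone().requires_grad_(True)
    logits = model(x)[0]
    top2 = logits.topk(2).indices
    margin = logits[top2[0]] - logits[top2[1]]     # gap to the runner-up class
    grad = torch.autograd.grad(margin, x)[0]       # constant within the region
    return (margin / (grad.norm() + 1e-12)).item()
```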