Search Results for author: Sahil Singla

Found 22 papers, 10 papers with code

Bandit Sequential Posted Pricing via Half-Concavity

no code implementations 20 Dec 2023 Sahil Singla, Yifan Wang

To overcome the classical assumption that the buyers' valuation distributions are known, we study sequential posted pricing in the bandit learning model, where the seller interacts with $n$ buyers over $T$ rounds: in each round the seller posts $n$ prices for the $n$ buyers and the first buyer with a valuation higher than the price takes the item.
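A minimal simulation of the interaction protocol described above, assuming uniform valuations and a placeholder pricing policy (both hypothetical; the paper's algorithm chooses prices adaptively from bandit feedback):

import numpy as np

rng = np.random.default_rng(0)

def run_round(prices, valuations):
    """One round of sequential posted pricing: buyers arrive in order and
    the first buyer whose valuation exceeds the posted price takes the item."""
    for i, (p, v) in enumerate(zip(prices, valuations)):
        if v > p:
            return i, p   # buyer i buys at price p (seller's revenue)
    return None, 0.0      # no sale this round

# Hypothetical instance: n buyers with hidden valuation distributions, T rounds.
n, T = 3, 1000
revenue = 0.0
for t in range(T):
    prices = rng.uniform(0, 1, size=n)       # placeholder pricing policy
    valuations = rng.uniform(0, 1, size=n)   # hidden from the seller
    _, r = run_round(prices, valuations)
    revenue += r
print(f"average revenue per round: {revenue / T:.3f}")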

Data-Centric Debugging: mitigating model failures via targeted data collection

no code implementations 17 Nov 2022 Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi

Our Data-Centric Debugging (DCD) framework carefully creates a debug-train set by selecting images from $\mathcal{F}$ that are perceptually similar to the images in $\mathcal{E}_{sample}$.

Image Classification
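A rough sketch of the selection step, assuming images have already been embedded in some perceptual feature space; the embedding model, the candidate pool F_emb, and the failure set E_emb are placeholders, not the paper's exact pipeline:

import numpy as np

def select_debug_train(F_emb, E_emb, k=5):
    """For each failure-image embedding in E_emb, pick the k most
    (cosine-)similar images from the candidate pool F_emb."""
    F = F_emb / np.linalg.norm(F_emb, axis=1, keepdims=True)
    E = E_emb / np.linalg.norm(E_emb, axis=1, keepdims=True)
    sims = E @ F.T                        # (|E_sample|, |F|) similarities
    idx = np.argsort(-sims, axis=1)[:, :k]
    return np.unique(idx)                 # indices into F: the debug-train set

# toy pool of 1000 candidates and 10 failure examples in a 128-d space
rng = np.random.default_rng(0)
pool, failures = rng.normal(size=(1000, 128)), rng.normal(size=(10, 128))
print(select_debug_train(pool, failures).shape)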

Bandit Algorithms for Prophet Inequality and Pandora's Box

no code implementations 16 Nov 2022 Khashayar Gatmiry, Thomas Kesselheim, Sahil Singla, Yifan Wang

The goal is to minimize the regret, which is the difference over $T$ rounds in the total value of the optimal algorithm that knows the distributions vs. the total value of our algorithm that learns the distributions from the partial feedback.

Multi-Armed Bandits Stochastic Optimization
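The regret definition above, spelled out on synthetic per-round values (the value sequences are made up for illustration; bandit algorithms of this kind typically target regret on the order of sqrt(T)):

import numpy as np

def regret(opt_values, alg_values):
    """Regret over T rounds: total value of the distribution-aware optimal
    algorithm minus total value of the learning algorithm."""
    return float(np.sum(opt_values) - np.sum(alg_values))

T = 10000
t = np.arange(1, T + 1)
opt = np.full(T, 0.8)            # per-round value of the optimal algorithm
alg = opt - 1.0 / np.sqrt(t)     # a learner whose per-round gap shrinks
print(f"regret: {regret(opt, alg):.1f}  (compare 2*sqrt(T) = {2*np.sqrt(T):.1f})")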

Improved techniques for deterministic l2 robustness

1 code implementation 15 Nov 2022 Sahil Singla, Soheil Feizi

In this work, we reduce this gap by introducing (a) a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer MLP that significantly improves their performance for both standard and provably robust accuracy, (b) a method to significantly reduce the training time per epoch for Skew Orthogonal Convolution (SOC) layers (>30% reduction for deeper networks) and (c) a class of pooling layers using the mathematical property that the $l_{2}$ distance of an input to a manifold is 1-Lipschitz.

Adversarial Robustness
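The pooling idea in (c) rests on a general fact: the l2 distance from a point to any fixed set is 1-Lipschitz, by the triangle inequality. A small numerical check of that property, using a finite point set as a stand-in for the manifold:

import numpy as np

def dist_to_set(x, centers):
    """l2 distance from x to a finite point set; distance-to-a-set maps
    are 1-Lipschitz, the property the pooling layers exploit."""
    return np.min(np.linalg.norm(centers - x, axis=1))

rng = np.random.default_rng(0)
centers = rng.normal(size=(16, 8))
x, y = rng.normal(size=8), rng.normal(size=8)
lhs = abs(dist_to_set(x, centers) - dist_to_set(y, centers))
rhs = np.linalg.norm(x - y)
assert lhs <= rhs + 1e-12    # |d(x) - d(y)| <= ||x - y||
print(f"|d(x)-d(y)| = {lhs:.3f} <= ||x-y|| = {rhs:.3f}")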

Core Risk Minimization using Salient ImageNet

no code implementations 28 Mar 2022 Sahil Singla, Mazda Moayeri, Soheil Feizi

Deep neural networks can be unreliable in the real world especially when they heavily use spurious features for their predictions.

Salient ImageNet: How to discover spurious features in Deep Learning?

2 code implementations 8 Oct 2021 Sahil Singla, Soheil Feizi

Our methodology is based on this key idea: to identify spurious or core \textit{visual features} used in model predictions, we identify spurious or core \textit{neural features} (penultimate-layer neurons of a robust model) via limited human supervision (e.g., using the top 5 activating images per feature).

Attribute
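A sketch of the human-supervision step described above, assuming a precomputed matrix of penultimate-layer activations over a dataset (the shapes here are hypothetical):

import numpy as np

def top_activating_images(features, neuron, k=5):
    """Return indices of the k images that most activate one penultimate-layer
    neuron; these are what a human inspects to label the feature core/spurious."""
    return np.argsort(-features[:, neuron])[:k]

rng = np.random.default_rng(0)
feats = rng.normal(size=(10000, 2048))   # hypothetical penultimate activations
print(top_activating_images(feats, neuron=7))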

Causal ImageNet: How to discover spurious features in Deep Learning?

no code implementations ICLR 2022 Sahil Singla, Soheil Feizi

Focusing on image classification, we define causal attributes as the set of visual features that are always a part of the object, while spurious attributes are the ones that are likely to \textit{co-occur} with the object but are not a part of it (e.g., the attribute ``fingers'' for the class ``band aid'').

Attribute

Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100

1 code implementation ICLR 2022 Sahil Singla, Surbhi Singla, Soheil Feizi

While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requires each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation.

Adversarial Robustness
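Why orthogonality prevents vanishing gradients: an orthogonal Jacobian preserves the norm of any backpropagated gradient. A quick numerical illustration with a random orthogonal matrix standing in for a layer's Jacobian:

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 64))
Q, _ = np.linalg.qr(A)     # orthogonal matrix: Q.T @ Q = I

g = rng.normal(size=64)    # a backpropagated gradient
# the two norms match exactly: no shrinking or blow-up through the layer
print(np.linalg.norm(g), np.linalg.norm(Q.T @ g))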

Skew Orthogonal Convolutions

1 code implementation 24 May 2021 Sahil Singla, Soheil Feizi

Then, we use the Taylor series expansion of the Jacobian exponential to construct the SOC layer that is orthogonal.

Adversarial Robustness
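The algebraic core of SOC: the matrix exponential of a skew-symmetric matrix is orthogonal, and a truncated Taylor series approximates it. A plain-matrix sketch of that fact (the actual layer applies this construction to a convolution's Jacobian):

import numpy as np

def expm_taylor(S, terms=20):
    """Truncated Taylor series of the matrix exponential: exp(S) = sum_k S^k / k!."""
    E, P = np.eye(S.shape[0]), np.eye(S.shape[0])
    for k in range(1, terms):
        P = P @ S / k
        E = E + P
    return E

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
S = 0.1 * (A - A.T)        # skew-symmetric: S.T = -S
Q = expm_taylor(S)         # exp of a skew-symmetric matrix is orthogonal
print(np.max(np.abs(Q.T @ Q - np.eye(8))))   # ~0 up to truncation error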

Low Curvature Activations Reduce Overfitting in Adversarial Training

1 code implementation ICCV 2021 Vasu Singla, Sahil Singla, David Jacobs, Soheil Feizi

In particular, we show that using activation functions with low (exact or approximate) curvature values has a regularization effect that significantly reduces both the standard and robust generalization gaps in adversarial training.
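One way to see the knob being turned: softplus has curvature (second derivative) at most beta/4, and approaches the kinked ReLU as beta grows. A numerical check of that scaling; softplus is used here only as an illustrative low-curvature family, not necessarily the activations studied in the paper:

import numpy as np

def softplus(x, beta=1.0):
    return np.log1p(np.exp(beta * x)) / beta

def max_curvature(f, lo=-5, hi=5, n=10001):
    """Numerical max of |f''| on a grid; a proxy for activation curvature."""
    x = np.linspace(lo, hi, n)
    h = x[1] - x[0]
    f2 = (f(x[2:]) - 2 * f(x[1:-1]) + f(x[:-2])) / h**2
    return np.max(np.abs(f2))

# softplus curvature scales with beta: f''(x) <= beta/4
for beta in (1.0, 4.0, 16.0):
    print(beta, max_curvature(lambda x: softplus(x, beta)))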

Perceptual Adversarial Robustness: Generalizable Defenses Against Unforeseen Threat Models

no code implementations ICLR 2021 Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

Adversarial Defense Adversarial Robustness +1
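A schematic of a neural perceptual distance in the LPIPS style, assuming per-layer activations from some pretrained network; random vectors stand in for those activations here:

import numpy as np

def neural_perceptual_distance(feats_x, feats_y):
    """LPIPS-style distance: l2 between unit-normalized deep features,
    averaged over layers. feats_* are lists of per-layer activations."""
    d = 0.0
    for fx, fy in zip(feats_x, feats_y):
        fx = fx / (np.linalg.norm(fx) + 1e-10)
        fy = fy / (np.linalg.norm(fy) + 1e-10)
        d += np.linalg.norm(fx - fy)
    return d / len(feats_x)

rng = np.random.default_rng(0)
fa = [rng.normal(size=256) for _ in range(3)]       # "natural image" features
fb = [f + 0.1 * rng.normal(size=256) for f in fa]   # a nearby perturbed image
print(neural_perceptual_distance(fa, fb))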

Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers

no code implementations ICLR 2021 Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.
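For comparison with any closed-form bound, the exact spectral norm of a stride-1 convolution can be estimated by power iteration on J^T J, using conv2d for J and conv_transpose2d for its adjoint. This is a generic estimator, not the paper's differentiable bound:

import torch
import torch.nn.functional as F

def conv_spectral_norm(weight, in_shape, iters=50):
    """Estimate the largest singular value of a stride-1, padding-1 conv
    operator by power iteration on J^T J."""
    x = torch.randn(1, *in_shape)
    for _ in range(iters):
        y = F.conv2d(x, weight, padding=1)             # J x
        x = F.conv_transpose2d(y, weight, padding=1)   # J^T (J x)
        x = x / x.norm()
    return F.conv2d(x, weight, padding=1).norm().item()  # ||J x||, ||x|| = 1

w = torch.randn(8, 3, 3, 3) / 9
print(conv_spectral_norm(w, (3, 32, 32)))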

Understanding Failures of Deep Networks via Robust Feature Extraction

1 code implementation CVPR 2021 Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, Eric Horvitz

Traditional evaluation metrics for learned models that report aggregate scores over a test set are insufficient for surfacing important and informative patterns of failure over features and instances.

Online Learning with Vector Costs and Bandits with Knapsacks

no code implementations 14 Oct 2020 Thomas Kesselheim, Sahil Singla

We study online learning with vector costs ($\textsc{OLVC}_p$) in both stochastic and adversarial arrival settings, and give a general procedure to reduce the problem from $d$ dimensions to a single dimension.

Scheduling

Perceptual Adversarial Robustness: Defense Against Unseen Threat Models

2 code implementations 22 Jun 2020 Cassidy Laidlaw, Sahil Singla, Soheil Feizi

We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images.

Adversarial Defense Adversarial Robustness +1

Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning

1 code implementation 17 Jun 2020 Vedant Nanda, Samuel Dooley, Sahil Singla, Soheil Feizi, John P. Dickerson

In this paper, we argue that traditional notions of fairness that are only based on models' outputs are not sufficient when the model is vulnerable to adversarial attacks.

Decision Making Face Recognition +1

Second-Order Provable Defenses against Adversarial Attacks

no code implementations ICML 2020 Sahil Singla, Soheil Feizi

Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network.
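A generic numerical stand-in for such a curvature quantity: power iteration with Hessian-vector products estimates the magnitude of the Hessian's top eigenvalue. The paper's bound is closed-form and differentiable; this sketch is only a reference estimate:

import torch

def hvp(f, x, v):
    """Hessian-vector product H(x) v via double backprop."""
    x = x.detach().requires_grad_(True)
    (g,) = torch.autograd.grad(f(x), x, create_graph=True)
    (hv,) = torch.autograd.grad(g @ v, x)
    return hv

def curvature_estimate(f, x, iters=30):
    """Power iteration on the Hessian: estimates its top |eigenvalue|."""
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        hv = hvp(f, x, v)
        v = hv / hv.norm()
    return (v @ hvp(f, x, v)).abs().item()   # Rayleigh quotient at convergence

# toy smooth network output: f(x) = sum(tanh(W x))
W = torch.randn(5, 5)
f = lambda x: torch.tanh(W @ x).sum()
print(curvature_estimate(f, torch.randn(5)))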

Fantastic Four: Differentiable Bounds on Singular Values of Convolution Layers

1 code implementation 22 Nov 2019 Sahil Singla, Soheil Feizi

Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.

Curvature-based Robustness Certificates against Adversarial Examples

no code implementations 25 Sep 2019 Sahil Singla, Soheil Feizi

We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness against adversarial examples.

Certifiably Robust Interpretation in Deep Learning

no code implementations 28 May 2019 Alexander Levine, Sahil Singla, Soheil Feizi

Deep learning interpretation is essential to explain the reasoning behind model predictions.

Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation

1 code implementation 1 Feb 2019 Sahil Singla, Eric Wallace, Shi Feng, Soheil Feizi

Second, we compute the importance of group-features in deep learning interpretation by introducing a sparsity regularization term.

Feature Importance General Classification
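One schematic reading of sparsity-regularized group importance: learn a per-group soft mask that preserves the model output while an l1 term drives unneeded groups to zero. The objective and names below are illustrative, not the paper's exact formulation:

import torch

def group_importance(f, x, group_id, lam=0.05, steps=300, lr=0.05):
    """Learn one importance weight per feature group: keep the model output
    high under a soft mask while an l1 term pushes group weights toward zero."""
    n_groups = int(group_id.max()) + 1
    w = torch.zeros(n_groups, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(w)[group_id]   # per-feature weight from its group
        loss = -f(x * mask) + lam * torch.sigmoid(w).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(w).detach()        # importance score per group

# toy example: 12 features in 3 groups, model only uses the first group
x = torch.randn(12)
group_id = torch.arange(12) // 4
f = lambda z: (z[:4] ** 2).sum()
print(group_importance(f, x, group_id))     # group 0 high, groups 1-2 near zero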

Robustness Certificates Against Adversarial Examples for ReLU Networks

no code implementations 1 Feb 2019 Sahil Singla, Soheil Feizi

These robustness certificates leverage the piece-wise linear structure of ReLU networks and use the fact that in a polyhedron around a given sample, the prediction function is linear.

General Classification Multi-Label Classification
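The certificate in miniature: within the linear region containing x, the logit gap g is affine, so the decision boundary is at least |g(x)| / ||grad g(x)|| away. This sketch ignores how far the linear region extends, which the full certificate must account for:

import torch

def local_linear_certificate(gap_fn, x):
    """Inside the ReLU linear region containing x the logit gap is affine:
    g(z) = g(x) + grad^T (z - x), so the boundary is at least
    |g(x)| / ||grad|| away (valid while z stays in the region)."""
    x = x.detach().requires_grad_(True)
    gap = gap_fn(x)                      # top logit minus runner-up logit
    (grad,) = torch.autograd.grad(gap, x)
    return (gap.abs() / grad.norm()).item()

net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 2))
x = torch.randn(10)
gap_fn = lambda z: net(z)[0] - net(z)[1]   # two-class logit gap
print(local_linear_certificate(gap_fn, x))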
