Search Results for author: Amit Deshpande

Found 35 papers, 9 papers with code

Robustness and Equivariance of Neural Networks

no code implementations ICLR 2019 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

Neural network models are known to be vulnerable to geometric transformations as well as to small pixel-wise perturbations of the input.

Translation

NICE: To Optimize In-Context Examples or Not?

no code implementations 9 Feb 2024 Pragya Srivastava, Satvik Golechha, Amit Deshpande, Amit Sharma

Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of large language models (LLMs) on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance.

In-Context Learning

Rethinking Robustness of Model Attributions

1 code implementation 16 Dec 2023 Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N Balasubramanian

We observe two main causes for fragile attributions: first, the existing metrics of robustness (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, thereby making random perturbations appear as a strong attack, and second, the attribution can be concentrated in a small region even when there are multiple important parts in an image.
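
For reference, a minimal sketch of the top-k intersection metric mentioned in the snippet above, for two attribution maps of the same shape; this is a generic implementation of the metric, not the paper's code.

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k=100):
    """Fraction of overlap between the top-k pixels of two attribution maps:
    size of the intersection of the two top-k index sets divided by k."""
    a = np.asarray(attr_a).ravel()
    b = np.asarray(attr_b).ravel()
    top_a = set(np.argsort(-a)[:k])
    top_b = set(np.argsort(-b)[:k])
    return len(top_a & top_b) / k
```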

How Far Can Fairness Constraints Help Recover From Biased Data?

no code implementations 16 Dec 2023 Mohit Sharma, Amit Deshpande

We further generalize it to arbitrary data distributions and arbitrary hypothesis classes, i.e., we prove that for any data distribution, if the optimally accurate classifier in a given hypothesis class is fair and robust, then it can be recovered through fair classification with equal opportunity constraints on the biased distribution whenever the bias parameters satisfy certain simple conditions.

Fairness

Improved Outlier Robust Seeding for k-means

no code implementations 6 Sep 2023 Amit Deshpande, Rameshwar Pratap

However, in the presence of adversarial noise or outliers, $D^{2}$ sampling is more likely to pick centers from distant outliers instead of inlier clusters, and therefore its approximation guarantees w.r.t. the optimal clustering of the inliers no longer hold.
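
As background, a small numpy sketch of standard $D^{2}$ (k-means++) seeding: a far-away outlier gets a large squared distance and hence a large sampling probability, which is exactly the failure mode discussed above. This is plain $D^{2}$ sampling, not the paper's outlier-robust variant.

```python
import numpy as np

def d2_seeding(X, k, rng=np.random.default_rng(0)):
    """Standard D^2 (k-means++) seeding: each new center is sampled with
    probability proportional to its squared distance to the nearest
    already-chosen center. Outliers, being far from everything, get
    disproportionately large sampling probability."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)
```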

Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness

1 code implementation 25 Aug 2023 Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis

Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (or, in expectation), which may not guarantee representation fairness to the groups ex-post, that is, after realizing a ranking from the stochastic ranking model.

Fairness Learning-To-Rank
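
To make the ex-ante vs. ex-post distinction concrete, here is an illustrative sketch (not the paper's algorithm) that samples one ranking from a Plackett-Luce model and checks a simple group-representation constraint on the realized top-k prefix; the constraint form and parameter names are assumptions for illustration.

```python
import numpy as np

def sample_plackett_luce(scores, rng):
    """Sample a ranking: repeatedly draw an item without replacement
    with probability proportional to its (positive) PL score."""
    scores = np.asarray(scores, dtype=float)
    remaining = list(range(len(scores)))
    ranking = []
    while remaining:
        w = scores[remaining]
        i = rng.choice(len(remaining), p=w / w.sum())
        ranking.append(remaining.pop(i))
    return ranking

def satisfies_ex_post(ranking, groups, k, min_per_group):
    """Ex-post check on the realized ranking: every group gets at least
    `min_per_group` items in the top-k (illustrative constraint)."""
    top = [groups[i] for i in ranking[:k]]
    return all(top.count(g) >= min_per_group for g in set(groups))

rng = np.random.default_rng(0)
ranking = sample_plackett_luce([3.0, 2.0, 1.0, 0.5], rng)
print(satisfies_ex_post(ranking, ["A", "A", "B", "B"], k=2, min_per_group=1))
```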

Sampling Individually-Fair Rankings that are Always Group Fair

no code implementations 21 Jun 2023 Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis

Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature.

Fairness Information Retrieval +2

Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes

no code implementations 19 Jun 2023 Abhinav Kumar, Amit Deshpande, Amit Sharma

We prove that our method only requires that the ranking of estimated causal effects is correct across attributes to select the correct classifier.

Attribute

On Comparing Fair Classifiers under Data Bias

1 code implementation 12 Feb 2023 Mohit Sharma, Amit Deshpande, Rajiv Ratn Shah

In this paper, we consider a theoretical model for injecting data bias, namely, under-representation and label bias (Blum & Stangl, 2019).

Fairness Marketing
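
A hedged sketch of the kind of bias injection described above: under-representation drops a fraction of the disadvantaged group's positive examples, and label bias flips a fraction of the remaining ones. The parameter names `beta_under` and `nu_label` are illustrative and not necessarily the notation of Blum & Stangl (2019).

```python
import numpy as np

def inject_bias(X, y, group, disadvantaged, beta_under=0.5, nu_label=0.2,
                rng=np.random.default_rng(0)):
    """Return a biased copy of (X, y, group): under-representation removes a
    fraction `beta_under` of the disadvantaged group's positives, and label
    bias flips a fraction `nu_label` of the remaining ones to negative."""
    pos = np.where((group == disadvantaged) & (y == 1))[0]
    drop = rng.choice(pos, size=int(beta_under * len(pos)), replace=False)
    keep = np.setdiff1d(np.arange(len(y)), drop)
    Xb, yb, gb = X[keep].copy(), y[keep].copy(), group[keep].copy()
    pos_b = np.where((gb == disadvantaged) & (yb == 1))[0]
    flip = rng.choice(pos_b, size=int(nu_label * len(pos_b)), replace=False)
    yb[flip] = 0
    return Xb, yb, gb
```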

Socially Fair Center-based and Linear Subspace Clustering

no code implementations 22 Aug 2022 Sruthi Gorantla, Kishen N. Gowda, Amit Deshpande, Anand Louis

Center-based clustering (e.g., $k$-means, $k$-medians) and clustering using linear subspaces are two of the most popular techniques to partition real-world data into smaller clusters.

Clustering Fairness
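
A minimal sketch of the socially fair objective for center-based clustering as it is usually stated, namely minimize the maximum per-group cost; details such as averaging versus summing within a group are assumptions here, not taken from the paper.

```python
import numpy as np

def socially_fair_kmeans_cost(X, groups, centers):
    """Max over groups of the average squared distance to the nearest center."""
    d2 = np.min(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return max(d2[groups == g].mean() for g in np.unique(groups))
```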

One-pass additive-error subset selection for $\ell_{p}$ subspace approximation

no code implementations 26 Apr 2022 Amit Deshpande, Rameshwar Pratap

In this paper, we give a one-pass subset selection with an additive approximation guarantee for $\ell_{p}$ subspace approximation, for any $p \in [1, \infty)$.
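
For intuition, a small sketch of the $\ell_{p}$ subspace approximation cost (sum of $p$-th powers of row distances to a rank-$k$ subspace), together with naive squared-norm row sampling as a baseline subset selection; this only illustrates the problem setup and a generic baseline, not the paper's one-pass algorithm.

```python
import numpy as np

def lp_subspace_cost(A, basis, p=2.0):
    """Sum of p-th powers of row distances of A (n x d) to span(basis),
    where basis (d x k) has orthonormal columns."""
    proj = A @ basis @ basis.T
    return (np.linalg.norm(A - proj, axis=1) ** p).sum()

def norm_sampling_subset(A, s, rng=np.random.default_rng(0)):
    """Baseline subset selection: sample s rows with probability
    proportional to their squared norms."""
    probs = (A ** 2).sum(axis=1)
    probs = probs / probs.sum()
    return A[rng.choice(A.shape[0], size=s, replace=True, p=probs)]
```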

Sampling Ex-Post Group-Fair Rankings

2 code implementations 2 Mar 2022 Sruthi Gorantla, Amit Deshpande, Anand Louis

Our second random walk-based algorithm samples ex-post group-fair rankings from a distribution $\delta$-close to $D$ in total variation distance and has expected running time $O^*(k^2\ell^2)$, when there is a sufficient gap between the given upper and lower bounds on the group-wise representation.

Fairness

Learning and Generalization in Overparameterized Normalizing Flows

1 code implementation 19 Jun 2021 Kulin Shah, Amit Deshpande, Navin Goyal

In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using stochastic gradient descent with a sufficiently small learning rate and suitable initialization.

Density Estimation
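
As background on what a univariate flow computes, a tiny sketch of the change-of-variables density under a hand-rolled monotone map; the parameterization below (positive-weighted tanh units) is an assumption for illustration, not the architecture analyzed in the paper.

```python
import numpy as np

def flow_logpdf(x, w, b):
    """Change of variables for a scalar x: log p(x) = log N(f(x); 0, 1) + log f'(x),
    with f(x) = sum_i |w_i| * tanh(x + b_i), which is monotone increasing."""
    w = np.abs(w)                                # enforce monotonicity
    z = np.sum(w * np.tanh(x + b))               # f(x)
    dz = np.sum(w * (1 - np.tanh(x + b) ** 2))   # f'(x) > 0
    log_base = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
    return log_base + np.log(dz)
```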

Rawlsian Fair Adaptation of Deep Learning Classifiers

no code implementations 31 May 2021 Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya

Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score or a linear threshold classifier on the given feature representation that achieves the Rawls error rate restricted to this hypothesis class.

Fairness
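
A minimal sketch of choosing a threshold on a given score to minimize the maximum error rate over sensitive groups, i.e., the "worst group" criterion described above; this brute-force scan is an illustration, not the paper's method based on second-order statistics.

```python
import numpy as np

def rawls_threshold(scores, y, groups):
    """Scan candidate thresholds on the score and return the one minimizing
    the worst (maximum) group-wise classification error."""
    best_t, best_err = None, np.inf
    for t in np.unique(scores):
        pred = (scores >= t).astype(int)
        worst = max(np.mean(pred[groups == g] != y[groups == g])
                    for g in np.unique(groups))
        if worst < best_err:
            best_t, best_err = t, worst
    return best_t, best_err
```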

On Subspace Approximation and Subset Selection in Fewer Passes by MCMC Sampling

no code implementations 20 Mar 2021 Amit Deshpande, Rameshwar Pratap

Our ideas also extend to give a reduction in the number of passes required by adaptive sampling algorithms for $\ell_{p}$ subspace approximation and subset selection, for $p \geq 2$.

Learning and Generalization in Univariate Overparameterized Normalizing Flows

no code implementations 1 Jan 2021 Kulin Shah, Amit Deshpande, Navin Goyal

In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD).

Density Estimation

The Importance of Modeling Data Missingness in Algorithmic Fairness: A Causal Perspective

no code implementations 21 Dec 2020 Naman Goel, Alfonso Amayuelas, Amit Deshpande, Amit Sharma

For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm.

Decision Making Fairness

On the Problem of Underranking in Group-Fair Ranking

2 code implementations 24 Sep 2020 Sruthi Gorantla, Amit Deshpande, Anand Louis

We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove.

Fairness Learning-To-Rank +1

Subspace approximation with outliers

no code implementations 30 Jun 2020 Amit Deshpande, Rameshwar Pratap

Any multiplicative approximation algorithm for the subspace approximation problem with outliers must solve the robust subspace recovery problem, a special case in which the $(1-\alpha)n$ inliers in the optimal solution are promised to lie exactly on a $k$-dimensional linear subspace.

Dimensionality Reduction

How do SGD hyperparameters in natural training affect adversarial robustness?

no code implementations 20 Jun 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

We observe that networks trained with constant learning rate to batch size ratio, as proposed in Jastrzebski et al., yield models which generalize well and also have almost constant adversarial robustness, independent of the batch size.

Adversarial Robustness
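
The constant learning-rate-to-batch-size ratio mentioned above is easy to state in code; a small sketch follows, with placeholder values rather than the paper's settings.

```python
def lr_for_batch_size(batch_size, ratio=0.1 / 128):
    """Keep lr / batch_size fixed: doubling the batch size doubles the
    learning rate, so the ratio (here 0.1 at batch size 128) stays constant."""
    return ratio * batch_size

for bs in (64, 128, 256, 512):
    print(bs, lr_for_batch_size(bs))
```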

On Universalized Adversarial and Invariant Perturbations

no code implementations 8 Jun 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks.

Translation

Universalization of any adversarial attack using very few test examples

1 code implementation 18 May 2020 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks [Dezfooli17, Khrulkov18] for reasonable norms of perturbation.

Adversarial Attack
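
A hedged sketch of the SVD-based universalization idea described above: stack per-sample attack perturbations for a handful of test images and take the top right singular vector as a single universal direction. The exact normalization and sign handling here are assumptions, not the paper's implementation.

```python
import numpy as np

def universalize_by_svd(perturbations, eps=10.0):
    """perturbations: (n_samples, d) matrix of flattened per-sample attack
    directions (e.g., gradients or FGSM signs). The top right singular
    vector captures their dominant shared direction; scale it to an
    L2 perturbation budget eps."""
    _, _, vt = np.linalg.svd(perturbations, full_matrices=False)
    v = vt[0]
    return eps * v / np.linalg.norm(v)
```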

Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks

1 code implementation NeurIPS 2021 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian

(Non-)robustness of neural networks to small, adversarial pixel-wise perturbations, and, as more recently shown, to even random spatial transformations (e.g., translations, rotations) calls for both theoretical and empirical understanding.

Adversarial Robustness

Invariance vs Robustness of Neural Networks

no code implementations 25 Sep 2019 Sandesh Kamath, Amit Deshpande, K V Subrahmanyam

We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves with training augmentation using progressively larger rotations, but while doing so their adversarial robustness does not improve or, worse, can even drop significantly on datasets such as MNIST.

Adversarial Robustness Image Classification

Universal Adversarial Attack Using Very Few Test Examples

no code implementations 25 Sep 2019 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool, and SVD-FGSM, on state-of-the-art neural networks.

Adversarial Attack

Quantifying Infra-Marginality and Its Trade-off with Group Fairness

no code implementations 3 Sep 2019 Arpita Biswas, Siddharth Barman, Amit Deshpande, Amit Sharma

To quantify this bias, we propose a general notion of $\eta$-infra-marginality that can be used to evaluate its extent.

Decision Making Fairness

Universal Attacks on Equivariant Networks

no code implementations 27 Sep 2018 Amit Deshpande, Sandesh Kamath, K V Subrahmanyam

In this paper, we observe an interesting spectral property shared by all of the above input-dependent, pixel-wise adversarial attacks on translation and rotation-equivariant networks.

Adversarial Attack Translation

On Euclidean $k$-Means Clustering with $\alpha$-Center Proximity

no code implementations 28 Apr 2018 Amit Deshpande, Anand Louis, Apoorv Vikram Singh

On the hardness side, we show that for any $\alpha' > 1$, there exist an $\alpha \leq \alpha'$ (with $\alpha > 1$) and an $\varepsilon_0 > 0$ such that minimizing the $k$-means objective over clusterings that satisfy $\alpha$-center proximity is NP-hard to approximate within a multiplicative $(1+\varepsilon_0)$ factor.

Clustering
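
For reference, a sketch that checks the $\alpha$-center proximity condition on a given clustering, which requires every point to be at least a factor $\alpha$ closer to its own center than to any other center; this is just the standard definition written out, not an algorithm from the paper.

```python
import numpy as np

def satisfies_alpha_center_proximity(X, labels, centers, alpha):
    """True iff for every point x assigned to center c and every other
    center c', alpha * d(x, c) < d(x, c')."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, k)
    own = d[np.arange(len(X)), labels]
    for j in range(centers.shape[0]):
        mask = labels != j
        if np.any(alpha * own[mask] >= d[mask, j]):
            return False
    return True
```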

Fair and Diverse DPP-based Data Summarization

1 code implementation ICML 2018 L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi

Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization.

Data Summarization Fairness

Depth separation and weight-width trade-offs for sigmoidal neural networks

no code implementations ICLR 2018 Amit Deshpande, Navin Goyal, Sushrut Karmalkar

We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded.

How to be Fair and Diverse?

no code implementations 23 Oct 2016 L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi

However, in doing so, a question that seems to be overlooked is whether it is possible to produce fair subsamples that are also adequately representative of the feature space of the data set - an important and classic requirement in machine learning.

BIG-bench Machine Learning Data Summarization +2

On the Complexity of Constrained Determinantal Point Processes

no code implementations 1 Aug 2016 L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, Nisheeth K. Vishnoi

Consequently, we obtain a few algorithms of independent interest: 1) to count over the base polytope of regular matroids when there are additional (succinct) budget constraints and, 2) to evaluate and compute the mixed characteristic polynomials, which played a central role in the resolution of the Kadison-Singer problem, for certain special cases.

Fairness Point Processes

On Sampling and Greedy MAP Inference of Constrained Determinantal Point Processes

no code implementations 6 Jul 2016 Tarun Kathuria, Amit Deshpande

When pairwise similarities are captured by a kernel, the determinants of submatrices provide a measure of diversity or independence of items within a subset.

Clustering Point Processes
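
A small sketch making the determinant-as-diversity idea concrete, together with a naive greedy MAP loop that repeatedly adds the item which most increases the log-determinant of the selected kernel submatrix; this greedy baseline is a generic illustration, not the constrained algorithms studied in the paper.

```python
import numpy as np

def logdet(K, S):
    """Log-determinant of the kernel submatrix indexed by S (diversity score)."""
    sub = K[np.ix_(S, S)]
    sign, val = np.linalg.slogdet(sub)
    return val if sign > 0 else -np.inf

def greedy_dpp_map(K, size):
    """Greedily pick `size` items, each time adding the item that maximizes
    the log-det of the selected submatrix."""
    selected = []
    for _ in range(size):
        candidates = [i for i in range(K.shape[0]) if i not in selected]
        best = max(candidates, key=lambda i: logdet(K, selected + [i]))
        selected.append(best)
    return selected
```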
