no code implementations • ICLR 2019 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
Neural network models are known to be vulnerable to geometric transformations as well as small pixel-wise perturbations of input.
no code implementations • 5 Jun 2024 • Sushant Agarwal, Amit Deshpande
Extending our ideas for randomized fair classification, we improve on these works, and construct DP-fair, EO-fair, and PE-fair representations that have provably optimal accuracy and suffer no accuracy loss compared to the optimal DP-fair, EO-fair, and PE-fair classifiers respectively on the original data distribution.
1 code implementation • 9 Feb 2024 • Pragya Srivastava, Satvik Golechha, Amit Deshpande, Amit Sharma
Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of large language models (LLMs) on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance.
1 code implementation • 16 Dec 2023 • Sandesh Kamath, Sankalp Mittal, Amit Deshpande, Vineeth N Balasubramanian
We observe two main causes of fragile attributions: first, the existing metrics of robustness (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, making random perturbations appear to be a strong attack, and second, the attribution can be concentrated in a small region even when there are multiple important parts in an image.
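As a concrete illustration of the first point, here is a minimal sketch of a top-k intersection metric between two attribution maps; the function name and the choice of k are illustrative, not taken from the paper.

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k=100):
    """Fraction of the k highest-attribution pixels shared by two attribution maps.

    attr_a, attr_b: 2D arrays of per-pixel attribution scores for the same image.
    Returns a value in [0, 1]; even a small local shift in attribution can push
    this score down sharply, which is the over-penalization noted above.
    """
    top_a = set(np.argsort(attr_a.ravel())[-k:])
    top_b = set(np.argsort(attr_b.ravel())[-k:])
    return len(top_a & top_b) / k
```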
no code implementations • 16 Dec 2023 • Mohit Sharma, Amit Deshpande
We further generalize it to arbitrary data distributions and arbitrary hypothesis classes, i.e., we prove that for any data distribution, if the optimally accurate classifier in a given hypothesis class is fair and robust, then it can be recovered through fair classification with equal opportunity constraints on the biased distribution whenever the bias parameters satisfy certain simple conditions.
no code implementations • 6 Sep 2023 • Amit Deshpande, Rameshwar Pratap
However, in the presence of adversarial noise or outliers, $D^{2}$ sampling is more likely to pick centers from distant outliers instead of inlier clusters, and therefore its approximation guarantees w.r.t.
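For reference, a minimal NumPy sketch of standard $D^{2}$ (k-means++) seeding, which picks each new center with probability proportional to the squared distance to the nearest center chosen so far; a single outlier far from all inliers therefore receives a large sampling weight, which is exactly the failure mode described above.

```python
import numpy as np

def d2_sampling(X, k, rng=np.random.default_rng(0)):
    """Standard D^2 (k-means++) seeding: pick k centers from the rows of X."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]          # first center chosen uniformly at random
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
        probs = d2 / d2.sum()               # distant points (including outliers) dominate
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)
```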
1 code implementation • 25 Aug 2023 • Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis
Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (or, in expectation), which may not guarantee representation fairness to the groups ex-post, that is, after realizing a ranking from the stochastic ranking model.
no code implementations • 21 Jun 2023 • Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis
Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature.
no code implementations • 19 Jun 2023 • Abhinav Kumar, Amit Deshpande, Amit Sharma
We prove that our method only requires that the ranking of estimated causal effects is correct across attributes to select the correct classifier.
1 code implementation • 12 Feb 2023 • Mohit Sharma, Amit Deshpande, Rajiv Ratn Shah
In this paper, we consider a theoretical model for injecting data bias, namely, under-representation and label bias (Blum & Stangl, 2019).
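A schematic sketch of this kind of bias injection, under the assumption that under-representation bias drops positive examples of the disadvantaged group with some probability and label bias flips some of their positive labels; the exact parameterization in Blum & Stangl (2019) may differ in its details, so the parameters below are illustrative.

```python
import numpy as np

def inject_bias(X, y, group, beta=0.5, nu=0.2, rng=np.random.default_rng(0)):
    """Illustrative bias injection on a clean labeled dataset (X, y, group).

    Under-representation bias: keep each positive example of group 1 only
    with probability beta. Label bias: flip each remaining positive label of
    group 1 to negative with probability nu. Group 0 is left untouched.
    """
    keep = np.ones(len(y), dtype=bool)
    target = (group == 1) & (y == 1)
    keep[target] = rng.random(target.sum()) < beta      # under-representation
    X_b, y_b, g_b = X[keep], y[keep].copy(), group[keep]
    flip = (g_b == 1) & (y_b == 1) & (rng.random(len(y_b)) < nu)
    y_b[flip] = 0                                        # label bias
    return X_b, y_b, g_b
```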
no code implementations • 11 Nov 2022 • Rasoul Shahsavarifar, Jithu Chandran, Mario Inchiosa, Amit Deshpande, Mario Schlener, Vishal Gossain, Yara Elias, Vinaya Murali
Furthermore, we developed a second metric (distinct from the fair similarity metric) to determine how fairly a model is treating similar individuals.
no code implementations • 22 Aug 2022 • Sruthi Gorantla, Kishen N. Gowda, Amit Deshpande, Anand Louis
Center-based clustering (e.g., $k$-means, $k$-medians) and clustering using linear subspaces are two of the most popular techniques to partition real-world data into smaller clusters.
no code implementations • 26 Apr 2022 • Amit Deshpande, Rameshwar Pratap
In this paper, we give a one-pass subset selection with an additive approximation guarantee for $\ell_{p}$ subspace approximation, for any $p \in [1, \infty)$.
2 code implementations • 2 Mar 2022 • Sruthi Gorantla, Amit Deshpande, Anand Louis
Our second random walk-based algorithm samples ex-post group-fair rankings from a distribution $\delta$-close to $D$ in total variation distance and has expected running time $O^*(k^2\ell^2)$, when there is a sufficient gap between the given upper and lower bounds on the group-wise representation.
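The random-walk sampler itself is involved; as a naive point of comparison only, here is a rejection-sampling baseline that draws rankings from a given stochastic ranking model and keeps one whose top-k prefix meets per-group lower and upper bounds. All names and the interface below are illustrative, not the paper's.

```python
def rejection_sample_fair_ranking(sample_ranking, group_of, k, lower, upper,
                                  max_tries=10000):
    """Draw rankings from `sample_ranking()` until the top-k prefix satisfies
    the given per-group lower/upper representation bounds.

    This naive baseline can be extremely slow when fair rankings are rare under
    the model, which motivates dedicated samplers such as the random walk above.
    """
    for _ in range(max_tries):
        ranking = sample_ranking()
        counts = {}
        for item in ranking[:k]:
            g = group_of(item)
            counts[g] = counts.get(g, 0) + 1
        if all(lower[g] <= counts.get(g, 0) <= upper[g] for g in lower):
            return ranking
    raise RuntimeError("no ex-post fair ranking found within max_tries")
```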
1 code implementation • 19 Jun 2021 • Kulin Shah, Amit Deshpande, Navin Goyal
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using stochastic gradient descent with a sufficiently small learning rate and suitable initialization.
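For context, a minimal NumPy sketch of the setup referenced here: a wide one-hidden-layer ReLU network trained by SGD with a small learning rate and a standard random initialization. The widths, step size, and target function are illustrative, not those of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lr = 200, 10, 1000, 1e-3           # samples, input dim, hidden width, step size
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                         # a simple target for illustration

W = rng.normal(size=(m, d)) / np.sqrt(d)     # trainable first-layer weights
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed second-layer signs

for step in range(5000):
    i = rng.integers(n)                      # one-sample stochastic gradient step
    h = np.maximum(W @ X[i], 0.0)            # hidden ReLU activations
    err = a @ h - y[i]                       # squared-loss residual
    W -= lr * err * np.outer(a * (h > 0), X[i])
```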
no code implementations • 31 May 2021 • Kulin Shah, Pooja Gupta, Amit Deshpande, Chiranjib Bhattacharyya
Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score or a linear threshold classifier on the given feature representation that achieves the Rawls error rate restricted to this hypothesis class.
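A minimal sketch of a threshold search for the minimax group error, assuming the Rawls error rate means the worst error rate over the sensitive sub-populations; this brute-force search over score thresholds is purely illustrative, not the paper's method.

```python
import numpy as np

def rawls_threshold(scores, y, group):
    """Pick the score threshold minimizing the maximum error rate over groups."""
    best_t, best_worst = None, np.inf
    for t in np.unique(scores):
        pred = (scores >= t).astype(int)
        worst = max(np.mean(pred[group == g] != y[group == g])
                    for g in np.unique(group))
        if worst < best_worst:
            best_t, best_worst = t, worst
    return best_t, best_worst
```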
no code implementations • 20 Mar 2021 • Amit Deshpande, Rameshwar Pratap
Our ideas also extend to give a reduction in the number of passes required by adaptive sampling algorithms for $\ell_{p}$ subspace approximation and subset selection, for $p \geq 2$.
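For orientation, the classical multi-pass adaptive sampling loop that such pass-reduction results start from: in each round, rows are sampled with probability proportional to their squared residual norm after projecting onto the span of rows picked so far (the $\ell_2$ case, for concreteness). This is a minimal sketch of the baseline, not the reduced-pass algorithm of this work.

```python
import numpy as np

def adaptive_sampling(A, rounds, per_round, rng=np.random.default_rng(0)):
    """Classical adaptive row sampling for subspace approximation.

    Each round needs a fresh pass over A to recompute residuals, which is the
    pass complexity that reduced-pass and one-pass variants aim to avoid.
    """
    selected = []
    for _ in range(rounds):
        if selected:
            Q, _ = np.linalg.qr(A[selected].T)   # orthonormal basis of selected rows' span
            R = A - (A @ Q) @ Q.T                # residual of every row
        else:
            R = A
        p = (R ** 2).sum(axis=1)
        p = p / p.sum()
        selected.extend(rng.choice(A.shape[0], size=per_round, p=p, replace=False))
    return A[selected]
```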
no code implementations • 1 Jan 2021 • Kulin Shah, Amit Deshpande, Navin Goyal
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD).
no code implementations • 21 Dec 2020 • Naman Goel, Alfonso Amayuelas, Amit Deshpande, Amit Sharma
For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm.
2 code implementations • 24 Sep 2020 • Sruthi Gorantla, Amit Deshpande, Anand Louis
We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove.
no code implementations • 30 Jun 2020 • Amit Deshpande, Rameshwar Pratap
Any multiplicative approximation algorithm for the subspace approximation problem with outliers must solve the robust subspace recovery problem, a special case in which the $(1-\alpha)n$ inliers in the optimal solution are promised to lie exactly on a $k$-dimensional linear subspace.
no code implementations • 20 Jun 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
We observe that networks trained with a constant learning-rate-to-batch-size ratio, as proposed by Jastrzebski et al., yield models that generalize well and also have almost constant adversarial robustness, independent of the batch size.
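Concretely, keeping this ratio constant just means scaling the learning rate linearly with the batch size; a minimal sketch, with an illustrative ratio value:

```python
# Keep learning_rate / batch_size fixed across runs, as in the setup above.
RATIO = 0.1 / 128          # e.g., lr = 0.1 at batch size 128 (illustrative values)

for batch_size in (32, 64, 128, 256, 512):
    learning_rate = RATIO * batch_size
    print(f"batch_size={batch_size:4d}  learning_rate={learning_rate:.4f}")
```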
no code implementations • 8 Jun 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
Recent work (arXiv:2002.11318) studies a trade-off between invariance and robustness to adversarial attacks.
1 code implementation • 18 May 2020 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
For VGG16 and VGG19 models trained on ImageNet, our simple universalization of Gradient, FGSM, and DeepFool perturbations using a test sample of 64 images gives fooling rates comparable to state-of-the-art universal attacks (Dezfooli17, Khrulkov18) for reasonable norms of perturbation.
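A minimal NumPy sketch of this kind of SVD-based universalization: stack the per-sample attack directions (e.g., input gradients or FGSM directions) for a small batch of test images, take the top right singular vector, and rescale it to the desired norm. The function name and interface are placeholders, not the paper's code.

```python
import numpy as np

def svd_universal_perturbation(per_sample_dirs, eps):
    """Universalize per-sample attack directions via the top singular vector.

    per_sample_dirs: array of shape (num_images, num_pixels), one attack
    direction (e.g., gradient or FGSM sign) per test image, flattened.
    Returns a single perturbation of L2 norm eps to be added to every input.
    """
    # Top right singular vector = direction most aligned with all attack directions.
    _, _, vt = np.linalg.svd(per_sample_dirs, full_matrices=False)
    v = vt[0]
    return eps * v / np.linalg.norm(v)
```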
1 code implementation • NeurIPS 2021 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam, Vineeth N Balasubramanian
The (non-)robustness of neural networks to small adversarial pixel-wise perturbations, and, as more recently shown, even to random spatial transformations (e.g., translations, rotations), calls for both theoretical and empirical understanding.
no code implementations • 25 Sep 2019 • Sandesh Kamath, Amit Deshpande, K V Subrahmanyam
We observe that the rotation invariance of equivariant models (StdCNNs and GCNNs) improves with training augmentation using progressively larger rotations, but their adversarial robustness does not improve in the process; worse, it can even drop significantly on datasets such as MNIST.
no code implementations • 25 Sep 2019 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
We evaluate the error rates and fooling rates of three universal attacks, SVD-Gradient, SVD-DeepFool and SVD-FGSM, on state-of-the-art neural networks.
no code implementations • 3 Sep 2019 • Arpita Biswas, Siddharth Barman, Amit Deshpande, Amit Sharma
To quantify this bias, we propose a general notion of $\eta$-infra-marginality that can be used to evaluate the extent of this bias.
no code implementations • 27 Sep 2018 • Amit Deshpande, Sandesh Kamath, K V Subrahmanyam
In this paper, we observe an interesting spectral property shared by all of the above input-dependent, pixel-wise adversarial attacks on translation and rotation-equivariant networks.
no code implementations • 28 Apr 2018 • Amit Deshpande, Anand Louis, Apoorv Vikram Singh
On the hardness side, we show that for any $\alpha' > 1$ there exist an $\alpha$ with $1 < \alpha \leq \alpha'$ and an $\varepsilon_0 > 0$ such that minimizing the $k$-means objective over clusterings satisfying $\alpha$-center proximity is NP-hard to approximate within a multiplicative $(1+\varepsilon_0)$ factor.
1 code implementation • ICML 2018 • L. Elisa Celis, Vijay Keswani, Damian Straszak, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi
Sampling methods that choose a subset of the data proportional to its diversity in the feature space are popular for data summarization.
no code implementations • ICLR 2018 • Amit Deshpande, Navin Goyal, Sushrut Karmalkar
We show a similar separation between the expressive power of depth-2 and depth-3 sigmoidal neural networks over a large class of input distributions, as long as the weights are polynomially bounded.
no code implementations • NeurIPS 2016 • Tarun Kathuria, Amit Deshpande, Pushmeet Kohli
Gaussian Process bandit optimization has emerged as a powerful tool for optimizing noisy black-box functions.
no code implementations • 23 Oct 2016 • L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi
However, in doing so, a question that seems to be overlooked is whether it is possible to produce fair subsamples that are also adequately representative of the feature space of the data set - an important and classic requirement in machine learning.
no code implementations • 1 Aug 2016 • L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Damian Straszak, Nisheeth K. Vishnoi
Consequently, we obtain a few algorithms of independent interest: 1) counting over the base polytope of regular matroids when there are additional (succinct) budget constraints, and 2) evaluating and computing the mixed characteristic polynomials, which played a central role in the resolution of the Kadison-Singer problem, for certain special cases.
no code implementations • 6 Jul 2016 • Tarun Kathuria, Amit Deshpande
When pairwise similarities are captured by a kernel, the determinants of submatrices provide a measure of diversity or independence of items within a subset.
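For intuition, a minimal NumPy sketch of this determinant-based diversity score under an RBF kernel; the kernel choice and bandwidth are illustrative.

```python
import numpy as np

def diversity_score(X, subset, bandwidth=1.0):
    """Log-determinant of the kernel submatrix indexed by `subset`.

    X: (n, d) data matrix; subset: list of row indices.
    Larger values mean the selected items are more spread out / less redundant,
    which is the determinantal diversity measure described above.
    """
    S = X[subset]
    sq_dists = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2 * bandwidth ** 2))     # RBF kernel submatrix
    sign, logdet = np.linalg.slogdet(K)
    return logdet if sign > 0 else -np.inf
```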