no code implementations • 4 Jun 2024 • Surbhi Goel, Abhishek Shetty, Konstantinos Stavropoulos, Arsen Vasilyan
We study the problem of learning under arbitrary distribution shift, where the learner is trained on a labeled set from one distribution but evaluated on a different, potentially adversarially generated test distribution.
no code implementations • 22 Feb 2024 • Adam Block, Alexander Rakhlin, Abhishek Shetty
In order to circumvent statistical and computational hardness results in sequential decision-making, recent work has considered smoothed online learning, where the distribution of data at each time is assumed to have bounded likelihood ratio with respect to a base measure when conditioned on the history.
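As a quick illustration (not from the paper), a distribution is $\sigma$-smooth with respect to a base measure when its likelihood ratio is bounded by $1/\sigma$; the sketch below checks this condition for made-up discrete distributions over a finite domain.

```python
import numpy as np

def is_sigma_smooth(p, mu, sigma):
    """Check whether a discrete distribution p is sigma-smooth with respect to a
    base measure mu with full support, i.e. max_x p(x) / mu(x) <= 1 / sigma."""
    p, mu = np.asarray(p, dtype=float), np.asarray(mu, dtype=float)
    return float(np.max(p / mu)) <= 1.0 / sigma

# Toy example: base measure uniform over 10 points; sigma = 0.5 allows ratio up to 2.
mu = np.full(10, 0.1)
spread_out = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0, 0, 0, 0, 0])   # max ratio 2
spiky      = np.array([0.9, 0.1, 0, 0, 0, 0, 0, 0, 0, 0])         # max ratio 9

print(is_sigma_smooth(spread_out, mu, sigma=0.5))  # True
print(is_sigma_smooth(spiky, mu, sigma=0.5))       # False
```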
no code implementations • 13 Feb 2024 • Adam Block, Mark Bun, Rathin Desai, Abhishek Shetty, Steven Wu
Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms.
no code implementations • 26 Jan 2024 • Parikshit Gopalan, Princewill Okoroafor, Prasad Raghavendra, Abhishek Shetty, Mihir Singhal
An omnipredictor for a class $\mathcal L$ of loss functions and a class $\mathcal C$ of hypotheses is a predictor whose predictions incur less expected loss than the best hypothesis in $\mathcal C$ for every loss in $\mathcal L$.
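To make the definition concrete, here is a hedged sketch of an empirical check of the omniprediction condition on synthetic data: predictions are post-processed per loss (a standard step for omnipredictors) and compared against the best hypothesis in a small, made-up class. The data, hypothesis class, and losses are all illustrative.

```python
import numpy as np

def expected_loss(loss, actions, y):
    return float(np.mean([loss(a, yi) for a, yi in zip(actions, y)]))

def omniprediction_gaps(pred_probs, hypotheses, losses, X, y):
    """For each (loss, post_process) pair, compare the post-processed predictions
    against the best hypothesis in the class; the empirical omniprediction
    condition holds when every gap is <= 0."""
    gaps = []
    for loss, post_process in losses:
        ours = expected_loss(loss, [post_process(p) for p in pred_probs], y)
        best = min(expected_loss(loss, [h(x) for x in X], y) for h in hypotheses)
        gaps.append(ours - best)
    return gaps

# Hypothetical setup: a calibrated predictor of P(y=1|x) and a tiny hypothesis class.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
true_p = 1.0 / (1.0 + np.exp(-3.0 * X))
y = (rng.uniform(size=200) < true_p).astype(float)

hypotheses = [lambda x: 0.0, lambda x: 1.0, lambda x: float(x > 0)]
losses = [
    (lambda a, yi: (a - yi) ** 2, lambda p: p),                # squared loss: predict the mean
    (lambda a, yi: abs(a - yi),   lambda p: float(p >= 0.5)),  # absolute loss: predict the median
]
print(omniprediction_gaps(true_p, hypotheses, losses, X, y))   # both gaps should be <= 0
```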
no code implementations • 21 Sep 2023 • Constantinos Daskalakis, Noah Golowich, Nika Haghtalab, Abhishek Shetty
We show that both weak and strong $\sigma$-smooth Nash equilibria have superior computational properties to Nash equilibria: when $\sigma$, an approximation parameter $\epsilon$, and the number of players are all constants, there is a constant-time randomized algorithm for finding a weak $\epsilon$-approximate $\sigma$-smooth Nash equilibrium in normal-form games.
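For intuition, the sketch below checks the standard $\epsilon$-approximate Nash condition in a two-player normal-form game via best-response gaps over pure deviations; the smooth equilibria studied in the paper restrict deviations to $\sigma$-smooth strategies, which this toy checker does not capture.

```python
import numpy as np

def nash_gap(A, B, x, y):
    """Best-response gaps for a two-player normal-form game with payoff matrices
    A (row player) and B (column player) under mixed strategies x, y.
    (x, y) is an epsilon-approximate Nash equilibrium iff the returned gap <= epsilon."""
    gap_row = np.max(A @ y) - x @ A @ y      # best pure deviation for the row player
    gap_col = np.max(x @ B) - x @ B @ y      # best pure deviation for the column player
    return max(gap_row, gap_col)

# Matching pennies: the uniform profile is an exact Nash equilibrium.
A = np.array([[1, -1], [-1, 1]], dtype=float)
B = -A
x = y = np.array([0.5, 0.5])
print(nash_gap(A, B, x, y))  # 0.0 (up to floating point)
```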
no code implementations • NeurIPS 2023 • Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty
We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples.
no code implementations • 18 Apr 2023 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In this paper, we address this issue by providing optimal high probability risk bounds through a framework that surpasses the limitations of uniform convergence arguments.
no code implementations • 19 Dec 2022 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In one of the first COLT open problems, Warmuth conjectured that the one-inclusion graph prediction strategy always implies an optimal high-probability bound on the risk, and hence is also an optimal PAC algorithm.
no code implementations • 17 Feb 2022 • Nika Haghtalab, Yanjun Han, Abhishek Shetty, Kunhe Yang
For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS22].
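As a rough sketch of what "oracle-efficient" means here (not the paper's algorithm), the following follow-the-perturbed-leader-style learner accesses a made-up finite class of threshold classifiers only through an ERM oracle, adding random fake examples to the history before each oracle call.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up finite class of threshold classifiers on [0, 1]; an oracle-efficient
# learner never enumerates the class directly, it only calls an ERM oracle over it.
thresholds = np.linspace(0, 1, 101)

def erm_oracle(xs, ys, weights):
    """Return the threshold minimizing the weighted 0-1 loss on (xs, ys)."""
    losses = [np.dot(weights, (xs >= t).astype(float) != ys) for t in thresholds]
    return thresholds[int(np.argmin(losses))]

# Follow-the-perturbed-leader: append random fake examples to the history and let
# the ERM oracle pick the hypothesis to play at each round.
T, n_fake = 500, 20
xs_hist, ys_hist = [], []
mistakes = 0.0
for t in range(T):
    x_t = rng.uniform()                       # smoothed (here: uniform) instance
    y_t = float(x_t >= 0.3)                   # target threshold 0.3
    fake_x = rng.uniform(size=n_fake)
    fake_y = rng.integers(0, 2, size=n_fake).astype(float)
    h = erm_oracle(np.concatenate([xs_hist, fake_x]),
                   np.concatenate([ys_hist, fake_y]),
                   np.ones(len(xs_hist) + n_fake))
    mistakes += float((x_t >= h) != y_t)
    xs_hist.append(x_t); ys_hist.append(y_t)

print(f"mistakes over {T} rounds: {mistakes:.0f}")
```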
1 code implementation • ICLR 2022 • Abhishek Shetty, Raaz Dwivedi, Lester Mackey
Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(1/\sqrt{n})$ discrepancy to $\mathbb{P}$.
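A minimal sketch of the size/discrepancy trade-off, assuming a Gaussian-kernel MMD as the discrepancy measure and i.i.d. Gaussian draws standing in for the Markov chain sample: standard thinning keeps every $\lceil\sqrt{n}\rceil$-th point, and we measure how far the resulting coreset is from the full sample. This is not the paper's thinning procedure.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared maximum mean discrepancy between samples X and Y."""
    return (gaussian_kernel(X, X, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean()
            - 2 * gaussian_kernel(X, Y, bandwidth).mean())

rng = np.random.default_rng(0)
n = 1024
full_sample = rng.normal(size=(n, 2))     # stand-in for n Markov chain draws
step = int(np.sqrt(n))
thinned = full_sample[::step]             # standard thinning: sqrt(n) points kept

print(f"kept {len(thinned)} of {n} points, MMD^2 = {mmd2(thinned, full_sample):.4f}")
```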
no code implementations • 16 Feb 2021 • Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
- Online discrepancy minimization: We consider the online Komlós problem, where the input is generated from an adaptive sequence of $\sigma$-smooth and isotropic distributions on the $\ell_2$ unit ball.
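To make the setup concrete, here is a greedy baseline for online vector balancing that picks, at each round, the sign keeping the running signed sum short; the inputs below are i.i.d. unit vectors rather than adaptively chosen smooth distributions, and the paper's algorithms and guarantees differ.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 2000, 10

running_sum = np.zeros(d)
max_disc = 0.0
for _ in range(T):
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)                # vector in the l2 unit ball
    # Greedy signing: ||S + eps*v||^2 = ||S||^2 + 2*eps*<S, v> + 1, so pick the
    # sign opposite to the inner product with the running sum.
    sign = -1.0 if np.dot(running_sum, v) > 0 else 1.0
    running_sum += sign * v
    max_disc = max(max_disc, np.linalg.norm(running_sum))

print(f"max l2 discrepancy over {T} rounds: {max_disc:.2f}")
```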
no code implementations • NeurIPS 2020 • Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms.
no code implementations • ICLR 2020 • Abhishek Panigrahi, Abhishek Shetty, Navin Goyal
In the present paper, we provide theoretical results about the effect of the activation function on the training of highly overparametrized 2-layer neural networks.
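A toy numpy sketch of the setup, with the activation function as a swappable parameter: a width-$m$ two-layer network whose first-layer weights are trained by gradient descent on a synthetic regression task. The architecture, task, and hyperparameters here are illustrative, not the paper's.

```python
import numpy as np

def train_two_layer(activation, activation_grad, m=512, steps=500, lr=0.5, seed=0):
    """Gradient descent on the first-layer weights of f(x) = (1/sqrt(m)) * a . phi(W x),
    with the second layer a fixed at random signs (a common overparametrized setup)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(64, 1))
    y = np.sin(3 * X[:, 0])                          # toy regression target
    W = rng.normal(size=(m, 1))
    a = rng.choice([-1.0, 1.0], size=m)
    for _ in range(steps):
        pre = X @ W.T                                # (n, m) pre-activations
        preds = (activation(pre) @ a) / np.sqrt(m)
        resid = preds - y                            # (n,)
        # Chain rule: dL/dW[j] = mean_i resid_i * a_j * phi'(pre_ij) * x_i / sqrt(m)
        grad_W = ((resid[:, None] * a[None, :] * activation_grad(pre)).T @ X) \
                 / (len(y) * np.sqrt(m))
        W -= lr * grad_W
    return np.mean(resid**2)                         # MSE at the last step

relu = lambda z: np.maximum(z, 0.0)
relu_grad = lambda z: (z > 0).astype(float)
tanh_grad = lambda z: 1.0 - np.tanh(z)**2

print("final MSE, ReLU:", train_two_layer(relu, relu_grad))
print("final MSE, tanh:", train_two_layer(np.tanh, tanh_grad))
```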
no code implementations • 13 Jul 2018 • Navin Goyal, Abhishek Shetty
NGCA (non-Gaussian component analysis) is also related to dimension reduction and to other data analysis problems such as ICA (independent component analysis).
no code implementations • 12 Jun 2018 • Sudeep Raja Putta, Abhishek Shetty
This problem is equivalent to online linear optimization (OLO) on the $\{0, 1\}^n$ hypercube.
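Since a linear loss $\langle \ell_t, x_t \rangle$ over $x \in \{0,1\}^n$ decomposes coordinate-wise, a simple baseline runs an independent two-expert Hedge per coordinate; the sketch below (with a made-up loss sequence) illustrates the hypercube formulation, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, eta = 1000, 8, 0.1

# One two-expert Hedge instance per coordinate: "play bit 0" vs. "play bit 1".
weights = np.ones((n, 2))
cum_loss = np.zeros(n)        # cumulative linear loss of playing bit 1 in each coordinate
total_loss = 0.0

for _ in range(T):
    probs = weights / weights.sum(axis=1, keepdims=True)
    x = (rng.uniform(size=n) < probs[:, 1]).astype(float)   # sample a vertex of {0,1}^n
    loss_vec = rng.uniform(-1, 1, size=n)                    # linear loss <loss_vec, x>
    total_loss += loss_vec @ x
    cum_loss += loss_vec
    # Multiplicative-weights update; bit 0 always incurs zero loss, so only the
    # weight of bit 1 changes.
    weights[:, 1] *= np.exp(-eta * loss_vec)

best_fixed_loss = np.minimum(cum_loss, 0.0).sum()   # best fixed vertex in hindsight
print(f"regret vs. best fixed vertex: {total_loss - best_fixed_loss:.1f} over {T} rounds")
```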