no code implementations • 7 Jun 2018 • Arun Sai Suggala, Adarsh Prasad, Vaishnavh Nagarajan, Pradeep Ravikumar
Based on the modified definition, we show that there is no trade-off between adversarial and standard accuracies; there exist classifiers that are robust and achieve high standard accuracy.
no code implementations • 19 Feb 2018 • Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar
We provide a new computationally efficient class of estimators for risk minimization.
no code implementations • NeurIPS 2014 • Adarsh Prasad, Stefanie Jegelka, Dhruv Batra
To cope with the high level of ambiguity faced in domains such as Computer Vision or Natural Language Processing, robust prediction methods often search for a diverse set of high-quality candidate solutions or proposals.
no code implementations • NeurIPS 2018 • Arun Suggala, Adarsh Prasad, Pradeep K. Ravikumar
We study the implicit regularization properties of optimization techniques by explicitly connecting their optimization paths to the regularization paths of "corresponding" regularized problems.
no code implementations • NeurIPS 2017 • Adarsh Prasad, Alexandru Niculescu-Mizil, Pradeep K. Ravikumar
We revisit the classical analysis of generative vs discriminative models for general exponential families, and high-dimensional settings.
no code implementations • NeurIPS 2015 • Tianyang Li, Adarsh Prasad, Pradeep K. Ravikumar
We consider the problem of binary classification when the covariates, conditioned on each of the response values, follow multivariate Gaussian distributions.
no code implementations • 1 Jul 2019 • Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar
Building on this connection, we provide a simple variant of recent computationally efficient algorithms for mean estimation in Huber's model; our connection then entails that the same efficient sample-pruning-based estimators are simultaneously robust to heavy-tailed noise and Huber contamination.
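As a rough illustration of the sample-pruning idea, here is a minimal, hypothetical 1-D sketch that repeatedly discards the points farthest from the current mean estimate and re-averages. It is a toy stand-in under simplified assumptions, not the paper's actual estimator or its guarantees:

```python
import numpy as np

def pruned_mean(samples, trim_frac=0.1, iters=5):
    """Iteratively prune the points farthest from the running mean.

    A 1-D toy version of sample pruning: at each round, keep the
    (1 - trim_frac) fraction of points closest to the current mean,
    then re-average. Illustrative only.
    """
    x = np.asarray(samples, dtype=float)
    for _ in range(iters):
        center = x.mean()
        keep = int(np.ceil((1.0 - trim_frac) * len(x)))
        order = np.argsort(np.abs(x - center))  # closest points first
        x = x[order[:keep]]
    return float(x.mean())

# 95 clean Gaussian samples plus 5 gross outliers (Huber-style contamination)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])
raw, robust = data.mean(), pruned_mean(data)
```

On this contaminated sample the raw average is pulled far from zero by the outliers, while the pruned estimate stays close to the true mean.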
no code implementations • NeurIPS 2019 • Liu Leqi, Adarsh Prasad, Pradeep K. Ravikumar
The statistical decision theoretic foundations of modern machine learning have largely focused on the minimization of the expectation of some loss function for a given task.
no code implementations • 19 Jun 2020 • Kartik Gupta, Arun Sai Suggala, Adarsh Prasad, Praneeth Netrapalli, Pradeep Ravikumar
We view the problem of designing minimax estimators as finding a mixed strategy Nash equilibrium of a zero-sum game.
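The game-theoretic viewpoint can be illustrated on a toy zero-sum matrix game. The sketch below uses generic fictitious play, where each player repeatedly best-responds to the opponent's empirical mixture; for zero-sum games the empirical frequencies converge to a mixed Nash equilibrium. This is a standard textbook procedure, not the paper's estimator-design algorithm:

```python
import numpy as np

def fictitious_play(A, iters=2000):
    """Fictitious play on a zero-sum game with payoff matrix A
    (row player maximizes, column player minimizes the A-payoff).

    Each player best-responds to the opponent's empirical mixture of
    past plays; the empirical frequencies converge to a mixed Nash
    equilibrium for zero-sum games.
    """
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] += 1                     # arbitrary opening moves
    col_counts[0] += 1
    for _ in range(iters):
        row_counts[np.argmax(A @ col_counts)] += 1
        col_counts[np.argmin(row_counts @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: the unique equilibrium mixes 50/50 for both players
p, q = fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

The returned frequencies approach the uniform mixed strategy, the game's unique equilibrium.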
no code implementations • 29 Jun 2020 • Ainesh Bakshi, Adarsh Prasad
We obtain robust and computationally efficient estimators for learning several linear models that achieve statistically optimal convergence rate under minimal distributional assumptions.
no code implementations • ICML 2020 • Liu Leqi, Justin Khim, Adarsh Prasad, Pradeep Ravikumar
In this work, we study a novel notion of L-Risk based on the classical idea of rank-weighted learning.
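The rank-weighted idea can be sketched concretely: an L-Risk is an L-statistic of the per-sample losses, i.e., a weighted average in which each weight is assigned by the rank of the loss. The weight profiles below (uniform, worst-k) are standard examples, not the paper's specific constructions:

```python
import numpy as np

def l_risk(losses, weights):
    """Rank-weighted (L-statistic) empirical risk.

    Sorts the per-sample losses and averages them with weights
    assigned by rank, so different weight profiles emphasize
    different parts of the loss distribution.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    weights = np.asarray(weights, dtype=float)
    return float(losses @ weights / weights.sum())

losses = [0.1, 0.5, 2.0, 0.3]
avg = l_risk(losses, np.ones(4))                     # ordinary average risk
worst2 = l_risk(losses, np.array([0, 0, 1.0, 1.0]))  # mean of the 2 largest losses
```

With uniform weights this recovers the usual empirical risk; putting all weight on the largest losses recovers conditional-value-at-risk-style objectives.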
no code implementations • 1 Jan 2021 • Vishwak Srinivasan, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Kumar Ravikumar
Dramatic improvements in data collection technologies have aided in procuring massive unstructured and heterogeneous datasets.
no code implementations • NeurIPS 2020 • Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar
We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted.
no code implementations • 20 Feb 2021 • Saurabh Garg, Joshua Zhanson, Emilio Parisotto, Adarsh Prasad, J. Zico Kolter, Zachary C. Lipton, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Pradeep Ravikumar
In this paper, we present a detailed empirical study to characterize the heavy-tailed nature of the gradients of the PPO surrogate reward function.
no code implementations • 25 Aug 2021 • Che-Ping Tsai, Adarsh Prasad, Sivaraman Balakrishnan, Pradeep Ravikumar
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
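A minimal sketch of the streaming setting, assuming the simplest case of mean estimation: process one $p$-dimensional sample at a time with a clipped stochastic-approximation update, so that any single heavy-tailed draw has bounded influence. This illustrates the flavor of the problem, not the estimator analyzed in the paper:

```python
import numpy as np

def streaming_clipped_mean(stream, clip=3.0):
    """Streaming mean estimate with clipped updates.

    Takes a Robbins-Monro step (step size 1/t) toward each incoming
    sample, clipping the update norm so that a single heavy-tailed
    draw has bounded influence. Illustrative only.
    """
    theta = None
    for t, x in enumerate(stream, start=1):
        x = np.asarray(x, dtype=float)
        if theta is None:
            theta = np.zeros_like(x)
        g = x - theta                      # pull toward the new sample
        norm = np.linalg.norm(g)
        if norm > clip:                    # bound the influence of outliers
            g *= clip / norm
        theta = theta + g / t
    return theta

# Heavy-tailed 2-D samples (Student-t with df just above 2) around [1, 2]
rng = np.random.default_rng(1)
stream = rng.standard_t(df=2.1, size=(5000, 2)) + np.array([1.0, 2.0])
est = streaming_clipped_mean(stream)
```

Because the clipped innovations are symmetric around the true mean, the estimate converges to it despite the samples having barely-finite variance.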