no code implementations • ICML 2020 • Chen Dan, Yuting Wei, Pradeep Ravikumar
In this paper, we provide the first optimal minimax guarantees on the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).
no code implementations • 8 Nov 2023 • Anubhav Bhatti, Yuwei Liu, Chen Dan, Bingjie Shen, San Lee, Yonghwan Kim, Jang Yong Kim
This paper uses state-of-the-art deep learning (DL) architectures to introduce a multi-step forecasting system to predict vital signs indicative of septic shock progression in Intensive Care Units (ICUs).
1 code implementation • 28 Jan 2022 • Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar
Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization.
1 code implementation • NeurIPS 2021 • Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.
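The quantity being optimized here, the CVaR at level α of a loss distribution, is the average of the worst α-fraction of losses. A minimal sketch of that definition (the function name and the simple sort-and-average discretization are illustrative, not the paper's implementation):

```python
import numpy as np

def cvar(losses, alpha):
    """Conditional Value at Risk at level alpha: the mean of the
    worst alpha-fraction of the given losses (illustrative helper)."""
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]  # descending
    k = max(1, int(np.ceil(alpha * len(losses))))            # worst alpha-fraction
    return losses[:k].mean()

losses = [0.1, 0.2, 0.3, 0.9]
print(cvar(losses, 0.25))  # mean of the single worst loss -> 0.9
print(cvar(losses, 1.0))   # equals the ordinary average  -> 0.375
```

At α = 1 CVaR reduces to the average loss, and as α shrinks it focuses on the tail, which is what makes it a natural target for worst-case-sensitive classification.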
no code implementations • 29 Sep 2021 • Runtian Zhai, Chen Dan, J Zico Kolter, Pradeep Kumar Ravikumar
Prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models for fairness.
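Most such reweighting schemes share a common shape: periodically upweight the groups that currently suffer higher loss. A hedged sketch of one generic update of this kind (an exponentiated-gradient step; the function name and learning rate are illustrative and do not correspond to any specific algorithm in the paper):

```python
import numpy as np

def grw_step(group_weights, group_losses, lr=1.0):
    """One generic group-reweighting update: multiply each group's
    weight by exp(lr * loss), then renormalize, so higher-loss
    groups receive more weight (illustrative sketch)."""
    w = np.asarray(group_weights, dtype=float)
    losses = np.asarray(group_losses, dtype=float)
    w = w * np.exp(lr * losses)
    return w / w.sum()

w = grw_step([0.5, 0.5], [0.2, 0.8])
print(w)  # the higher-loss group now carries more weight
```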
1 code implementation • 11 Jun 2021 • Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.
no code implementations • NeurIPS 2023 • Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar
A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness or privacy).
no code implementations • 29 Jun 2020 • Chen Dan, Yuting Wei, Pradeep Ravikumar
In this paper, we provide the first optimal minimax guarantees on the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).
1 code implementation • ICML 2020 • Ziyu Xu, Chen Dan, Justin Khim, Pradeep Ravikumar
We define a robust risk taken over a set of weightings and show excess risk bounds for minimizing it.
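Concretely, a robust risk of this flavor evaluates the weighted risk under every weighting in the set and takes the worst case. A small sketch, assuming for illustration that the weighting set is finite (the paper treats more general weighting sets):

```python
import numpy as np

def robust_risk(losses, weightings):
    """Worst-case weighted risk: max over a finite set of weight
    vectors of the weighted average loss (illustrative sketch)."""
    losses = np.asarray(losses, dtype=float)
    return max(float(np.asarray(w, dtype=float) @ losses) for w in weightings)

losses = [0.2, 0.8]
weightings = [[0.5, 0.5], [0.1, 0.9]]
print(robust_risk(losses, weightings))  # -> 0.74, from the second weighting
```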
no code implementations • 6 Mar 2020 • Avrim Blum, Chen Dan, Saeed Seddighin
A key component that plays a crucial role in the performance of simulated annealing is the criterion under which the temperature changes, namely the cooling schedule.
2 code implementations • ICLR 2020 • Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang
Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and computationally costly.
no code implementations • NeurIPS 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
no code implementations • 30 Oct 2019 • Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar
We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
2 code implementations • 29 Sep 2019 • Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing
We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data.
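A key device in this line of work is a smooth score of the weighted adjacency matrix that vanishes exactly when the graph is acyclic, turning combinatorial DAG search into a continuous program. A sketch of one polynomial variant of such an acyclicity score, h(W) = tr((I + W∘W/d)^d) − d (names and the specific variant chosen here are our own, not the authors' code):

```python
import numpy as np

def acyclicity(W):
    """Polynomial acyclicity score h(W) = tr((I + W*W/d)^d) - d.
    It is zero iff the weighted graph W has no directed cycle,
    and positive otherwise (illustrative sketch)."""
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d              # elementwise square kills signs
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

dag = np.array([[0.0, 1.0], [0.0, 0.0]])     # single edge 0 -> 1: acyclic
cycle = np.array([[0.0, 1.0], [1.0, 0.0]])   # 0 <-> 1: a directed cycle
print(acyclicity(dag))    # -> 0.0
print(acyclicity(cycle))  # positive, penalizing the cycle
```

Because the score is differentiable in W, it can be used as an equality constraint or penalty inside standard gradient-based optimization.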
1 code implementation • 3 Jun 2019 • Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, Li-Wei Wang
Neural network robustness has recently been highlighted by the existence of adversarial examples.
no code implementations • NeurIPS 2018 • Chen Dan, Liu Leqi, Bryon Aragam, Pradeep K. Ravikumar, Eric P. Xing
We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.
no code implementations • NeurIPS 2018 • Chen Dan, Liu Leqi, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing
We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.
no code implementations • 12 Feb 2018 • Bryon Aragam, Chen Dan, Eric P. Xing, Pradeep Ravikumar
Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable, by introducing a novel framework involving clustering overfitted parametric (i.e., misspecified) mixture models.