Search Results for author: Chen Dan

Found 17 papers, 7 papers with code

Optimal Statistical Guarantees for Adversarially Robust Gaussian Classification

no code implementations ICML 2020 Chen Dan, Yuting Wei, Pradeep Ravikumar

In this paper, we provide the first optimal minimax guarantees on the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).

Adversarial Robustness, Classification, +2

Understanding Why Generalized Reweighting Does Not Improve Over ERM

1 code implementation 28 Jan 2022 Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar

Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization.

Boosted CVaR Classification

1 code implementation NeurIPS 2021 Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar

To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.

Classification, Decision Making, +1
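The CVaR objective behind this framework has a simple empirical form: the mean of the worst α-fraction of losses. A minimal sketch of that computation (the function name and the ⌈αn⌉ tail-size convention are illustrative choices, not taken from the paper):

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical Conditional Value at Risk: mean of the worst alpha-fraction of losses."""
    losses = np.sort(np.asarray(losses))[::-1]      # sort descending, worst first
    k = max(1, int(np.ceil(alpha * len(losses))))   # size of the alpha-tail
    return losses[:k].mean()

losses = [0.1, 0.2, 0.3, 0.9, 1.0]
print(cvar(losses, 0.4))  # mean of the worst 2 of 5 losses -> 0.95
```

Minimizing this tail mean, rather than the average loss, is what makes CVaR a worst-case-sensitive training objective.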

Understanding Overfitting in Reweighting Algorithms for Worst-group Performance

no code implementations 29 Sep 2021 Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Kumar Ravikumar

Prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models for fairness.

Data Augmentation, Fairness

DORO: Distributional and Outlier Robust Optimization

1 code implementation 11 Jun 2021 Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar

Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.

Fundamental Limits and Tradeoffs in Invariant Representation Learning

no code implementations 19 Dec 2020 Han Zhao, Chen Dan, Bryon Aragam, Tommi S. Jaakkola, Geoffrey J. Gordon, Pradeep Ravikumar

A wide range of machine learning applications, such as privacy-preserving learning, algorithmic fairness, and domain adaptation/generalization, involve learning invariant representations of the data that aim to achieve two competing goals: (a) maximize information or accuracy with respect to a target response, and (b) maximize invariance or independence with respect to a set of protected features (e.g., for fairness or privacy).

Domain Adaptation, Fairness, +3

Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification

no code implementations 29 Jun 2020 Chen Dan, Yuting Wei, Pradeep Ravikumar

In this paper, we provide the first optimal minimax guarantees on the excess risk of adversarially robust classification, under the Gaussian mixture model proposed by Schmidt et al. (2018).

Adversarial Robustness, Classification, +2

Learning Complexity of Simulated Annealing

no code implementations 6 Mar 2020 Avrim Blum, Chen Dan, Saeed Seddighin

A key component that plays a crucial role in the performance of simulated annealing is the criterion under which the temperature changes, namely the cooling schedule.
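The role the cooling schedule plays can be seen in a minimal simulated-annealing loop; the geometric schedule, the toy objective, and all parameter values below are illustrative assumptions, not details from the paper:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, step_size=0.5, seed=0):
    """Minimize f over the reals using a geometric cooling schedule T <- cooling * T."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)  # random local proposal
        fc = f(cand)
        # Downhill moves are always accepted; uphill moves with probability
        # exp(-(fc - fx) / T), which shrinks as the temperature T cools.
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
        t *= cooling  # the cooling schedule: temperature decays geometrically
    return x, fx

x, fx = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0)
```

Early in the run the high temperature permits uphill moves (exploration); as the schedule cools, the search becomes effectively greedy, which is exactly the exploration/exploitation trade-off the cooling schedule controls.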

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations ICLR 2020 Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming.

Optimal Analysis of Subset-Selection Based L_p Low-Rank Approximation

no code implementations NeurIPS 2019 Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar

We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.
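The stated bound can be sanity-checked numerically; note that the two branches agree at $p = 2$, where both exponents equal $1/2$. A purely illustrative helper (the function name is my own, and the big-O constants are omitted):

```python
def approx_ratio_bound(k, p):
    """Order of the paper's approximation-ratio bound, with constants omitted:
    (k+1)^{1/p} for 1 <= p <= 2, and (k+1)^{1-1/p} for p >= 2."""
    if 1 <= p <= 2:
        return (k + 1) ** (1.0 / p)
    return (k + 1) ** (1.0 - 1.0 / p)

approx_ratio_bound(3, 2)  # both branches give (3+1)^{1/2} = 2.0 at p = 2
```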

Optimal Analysis of Subset-Selection Based L_p Low Rank Approximation

no code implementations 30 Oct 2019 Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep Ravikumar

We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$.

Learning Sparse Nonparametric DAGs

2 code implementations 29 Sep 2019 Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing

We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data.

Causal Discovery

Adversarially Robust Generalization Just Requires More Unlabeled Data

1 code implementation 3 Jun 2019 Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, Li-Wei Wang

Neural network robustness has recently been highlighted by the existence of adversarial examples.

The Sample Complexity of Semi-Supervised Learning with Nonparametric Mixture Models

no code implementations NeurIPS 2018 Chen Dan, Liu Leqi, Bryon Aragam, Pradeep K. Ravikumar, Eric P. Xing

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.

Classification, General Classification, +1

Sample Complexity of Nonparametric Semi-Supervised Learning

no code implementations NeurIPS 2018 Chen Dan, Liu Leqi, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing

We study the sample complexity of semi-supervised learning (SSL) and introduce new assumptions based on the mismatch between a mixture model learned from unlabeled data and the true mixture model induced by the (unknown) class conditional distributions.

Classification, General Classification, +1

Identifiability of Nonparametric Mixture Models and Bayes Optimal Clustering

no code implementations 12 Feb 2018 Bryon Aragam, Chen Dan, Eric P. Xing, Pradeep Ravikumar

Motivated by problems in data clustering, we establish general conditions under which families of nonparametric mixture models are identifiable, by introducing a novel framework involving clustering overfitted parametric (i.e., misspecified) mixture models.

Nonparametric Clustering
