Search Results for author: Runtian Zhai

Found 8 papers, 5 papers with code

Understanding Why Generalized Reweighting Does Not Improve Over ERM

1 code implementation · 28 Jan 2022 · Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar

Together, our results show that a broad category of approaches, which we term generalized reweighting (GRW), cannot achieve distributionally robust generalization.

Boosted CVaR Classification

1 code implementation · NeurIPS 2021 · Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar

To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.
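At level α, CVaR (conditional value at risk) is the average of the worst α-fraction of per-sample losses; a minimal sketch of that quantity (the function name and toy losses are illustrative, not taken from the paper):

```python
import math

def cvar(losses, alpha):
    """CVaR_alpha: average of the ceil(alpha * n) largest per-sample losses."""
    n = len(losses)
    k = max(1, math.ceil(alpha * n))  # size of the worst-case tail
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

# With alpha = 0.5 over four losses, CVaR averages the two largest.
risk = cvar([0.1, 0.2, 0.9, 0.8], alpha=0.5)
```

Minimizing this tail average, rather than the mean loss, is what makes CVaR a worst-case (tail-performance) objective.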

Classification · Decision Making · +1

Understanding Overfitting in Reweighting Algorithms for Worst-group Performance

no code implementations · 29 Sep 2021 · Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Kumar Ravikumar

Prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models for fairness.

Data Augmentation · Fairness

DORO: Distributional and Outlier Robust Optimization

1 code implementation · 11 Jun 2021 · Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar

Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.
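DORO makes distributionally robust objectives resistant to outliers by discarding the largest losses before taking the worst-case tail. A hedged sketch of a CVaR-style DORO risk under that reading (the function name and parameters are illustrative, not the paper's exact formulation):

```python
import math

def cvar_doro(losses, alpha, eps):
    """Sketch of a CVaR-DORO risk: treat the eps-fraction of largest losses
    as potential outliers and drop them, then take CVaR_alpha over the rest."""
    n = len(losses)
    drop = math.floor(eps * n)                 # suspected outliers to ignore
    kept = sorted(losses, reverse=True)[drop:]  # remaining losses, descending
    k = max(1, math.ceil(alpha * len(kept)))    # worst-case tail of the rest
    return sum(kept[:k]) / k

# A single corrupted sample (loss 10.0) no longer dominates the risk:
risk = cvar_doro([10.0, 0.9, 0.8, 0.1], alpha=0.5, eps=0.25)
```

The design point is that plain CVaR would be dominated by the corrupted sample, while the trimmed risk reflects the clean worst-case subpopulation.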

Pretrain-to-Finetune Adversarial Training via Sample-wise Randomized Smoothing

no code implementations · 1 Jan 2021 · Lei Wang, Runtian Zhai, Di He, Li-Wei Wang, Jian Li

For certification, we carefully allocate specific robust regions for each test sample.

Transferred Discrepancy: Quantifying the Difference Between Representations

no code implementations · 24 Jul 2020 · Yunzhen Feng, Runtian Zhai, Di He, Li-Wei Wang, Bin Dong

Our experiments show that TD provides fine-grained information for varied downstream tasks, and that models trained from different initializations learn features that differ in terms of downstream-task predictions.

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius

2 code implementations · ICLR 2020 · Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Li-Wei Wang

Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming.
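MACER instead trains to maximize the certified radius of a Gaussian-smoothed classifier. As a reference point, the standard ℓ2 radius bound for randomized smoothing (here with the runner-up probability bounded as p_B ≤ 1 − p_A) can be sketched with only the standard library:

```python
from statistics import NormalDist

def certified_radius(p_top, sigma):
    """Sketch of the l2 certified radius of a Gaussian-smoothed classifier:
        R = sigma / 2 * (Phi^{-1}(p_A) - Phi^{-1}(p_B)),  with p_B = 1 - p_A.
    Returns 0 when the top class is not a majority (no certificate)."""
    if p_top <= 0.5:
        return 0.0
    ppf = NormalDist().inv_cdf  # inverse standard-normal CDF, Phi^{-1}
    return sigma / 2 * (ppf(p_top) - ppf(1 - p_top))

# Higher confidence of the smoothed classifier -> larger certified radius.
r = certified_radius(p_top=0.9, sigma=1.0)
```

MACER's attack-free training objective maximizes (a differentiable surrogate of) this radius directly, rather than generating adversarial examples.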

Adversarially Robust Generalization Just Requires More Unlabeled Data

1 code implementation · 3 Jun 2019 · Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, Li-Wei Wang

Neural network robustness has recently been highlighted by the existence of adversarial examples.
