1 code implementation • 3 Jun 2019 • Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, Liwei Wang
Neural network robustness has recently been highlighted by the existence of adversarial examples.
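To make "adversarial example" concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) of Goodfellow et al., a standard construction rather than the method studied in this paper; the model interface, eps, and input range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Craft an adversarial example with one signed-gradient step (FGSM).

    Illustrative sketch: assumes inputs scaled to [0, 1] and a
    classifier `model` that returns logits.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each input coordinate by eps in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```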
2 code implementations • ICLR 2020 • Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming.
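The cost the snippet mentions comes from the structure of adversarial training: every outer gradient step must first solve an inner attack problem. A hedged sketch, reusing the fgsm_attack helper above (stronger multi-step attacks such as PGD multiply the inner cost, and the result depends on the attack chosen):

```python
def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One step of adversarial training: attack the batch, then fit the attack."""
    # Inner maximization: the attack-dependent, expensive part.
    x_adv = fgsm_attack(model, x, y, eps)
    # Outer minimization: an ordinary gradient step on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```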
no code implementations • 24 Jul 2020 • Yunzhen Feng, Runtian Zhai, Di He, Liwei Wang, Bin Dong
Our experiments show that TD provides fine-grained information for a variety of downstream tasks, and that models trained from different initializations learn features that differ in their downstream-task predictions.
no code implementations • 1 Jan 2021 • Lei Wang, Runtian Zhai, Di He, Liwei Wang, Jian Li
For certification, we carefully allocate a specific robust region to each test sample.
1 code implementation • 11 Jun 2021 • Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution.
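As a concrete reading of "subpopulation shift": if the test distribution may concentrate on any one subpopulation (group) of the training data, then worst-group accuracy lower-bounds test accuracy. A minimal sketch of that standard evaluation (function and variable names are illustrative):

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Accuracy of the worst subpopulation; lower-bounds accuracy under
    any test distribution supported on a single group."""
    return min(
        (preds[groups == g] == labels[groups == g]).mean()
        for g in np.unique(groups)
    )
```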
no code implementations • 29 Sep 2021 • Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
Prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models for fairness.
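One simple member of the reweighting family the snippet refers to is static inverse-frequency weighting, which upweights small groups; adaptive schemes such as Group DRO instead shift weight toward high-loss groups during training. A hedged sketch, not any one paper's algorithm (names are illustrative):

```python
import torch
import torch.nn.functional as F

def inverse_frequency_weights(groups, n_groups):
    """Static reweighting: each group's weight is inversely proportional
    to its size, normalized so the average per-sample weight is 1."""
    counts = torch.bincount(groups, minlength=n_groups).float()
    return counts.sum() / (n_groups * counts)

def reweighted_loss(logits, labels, groups, weights):
    """Cross-entropy with each sample scaled by its group's weight."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights[groups] * per_sample).mean()
```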
1 code implementation • NeurIPS 2021 • Runtian Zhai, Chen Dan, Arun Sai Suggala, J. Zico Kolter, Pradeep Ravikumar
To learn such randomized classifiers, we propose the Boosted CVaR Classification framework which is motivated by a direct relationship between CVaR and a classical boosting algorithm called LPBoost.
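For reference, the empirical CVaR at level alpha is simply the mean of the worst alpha-fraction of per-sample losses, which is the quantity the boosting-based framework controls. A minimal sketch of the quantity itself, not of the LPBoost-based algorithm:

```python
import torch

def cvar(per_sample_losses, alpha=0.1):
    """Empirical CVaR at level alpha: the mean of the worst alpha-fraction
    of losses, so minimizing it targets the hardest examples."""
    n = per_sample_losses.numel()
    k = max(1, int(alpha * n))                   # size of the worst-case tail
    worst, _ = torch.topk(per_sample_losses, k)  # the k largest losses
    return worst.mean()
```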
1 code implementation • 28 Jan 2022 • Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
Together, our results show that a broad class of approaches, which we term generalized reweighting (GRW), cannot achieve distributionally robust generalization.
no code implementations • 10 Feb 2023 • Yuzhe Lu, Zhenlin Wang, Runtian Zhai, Soheil Kolouri, Joseph Campbell, Katia Sycara
Out-of-distribution (OOD) data poses serious challenges for deployed machine learning models, as even subtle distribution shifts can incur significant performance drops.
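For orientation, the classic OOD detection baseline of Hendrycks & Gimpel scores each input by its maximum softmax probability and flags low-confidence inputs as OOD; this sketch shows that standard baseline, not the method of this paper:

```python
import torch
import torch.nn.functional as F

def max_softmax_score(model, x):
    """Max softmax probability per input; inputs scoring below a chosen
    threshold are flagged as out-of-distribution."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values
```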
no code implementations • 1 Jun 2023 • Runtian Zhai, Bingbin Liu, Andrej Risteski, Zico Kolter, Pradeep Ravikumar
Recent work has connected self-supervised learning to the approximation of the top eigenspace of a graph Laplacian operator, suggesting that learning a linear probe atop such representations can be viewed as RKHS regression.
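A minimal sketch of the spectral object the snippet refers to, assuming a precomputed symmetric similarity matrix W over samples (in practice, self-supervised methods approximate this eigenspace without ever forming W): the relevant eigenvectors are the top eigenvectors of the normalized adjacency, equivalently the smallest-eigenvalue eigenvectors of the normalized Laplacian.

```python
import numpy as np

def laplacian_embedding(W, d):
    """Top-d eigenvectors of A = D^{-1/2} W D^{-1/2}, which span the
    smallest-eigenvalue eigenspace of the normalized Laplacian I - A.
    A linear probe trained on this embedding behaves like regression
    in the RKHS induced by the corresponding kernel."""
    deg = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A = d_inv_sqrt @ W @ d_inv_sqrt      # symmetric normalized adjacency
    vals, vecs = np.linalg.eigh(A)       # eigenvalues in ascending order
    return vecs[:, -d:]                  # the top-d eigenvectors
```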
no code implementations • 1 Feb 2024 • Runtian Zhai, Rattana Pukdee, Roger Jin, Maria-Florina Balcan, Pradeep Ravikumar
Unlabeled data is a key component of modern machine learning.