Together, our results show that a broad category of methods, which we term GRW approaches, is unable to achieve distributionally robust generalization.
To learn such randomized classifiers, we propose the Boosted CVaR Classification framework, which is motivated by a direct relationship between CVaR and a classical boosting algorithm, LPBoost.
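CVaR itself has a simple empirical form: at level α, it is the mean of the worst α-fraction of losses. A minimal sketch of that computation (the function name `cvar` and the sample numbers are illustrative, not taken from the paper):

```python
import math

def cvar(losses, alpha):
    """Conditional Value at Risk: mean of the worst alpha-fraction of losses."""
    worst_first = sorted(losses, reverse=True)      # largest losses first
    k = max(1, math.ceil(alpha * len(worst_first))) # size of the worst alpha-fraction
    return sum(worst_first[:k]) / k

losses = [0.1, 0.2, 0.3, 0.9, 1.0]
print(cvar(losses, alpha=0.4))  # mean of the two largest losses: 0.95
```

At α = 1 this reduces to the ordinary average loss, and as α shrinks it focuses entirely on the hardest examples, which is what makes it a natural objective for worst-case subpopulations.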
In the interest of fairness, prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models.
Many machine learning tasks involve subpopulation shift, in which the test distribution is a subpopulation of the training distribution.
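A toy illustration of why subpopulation shift matters (the group names, mixture weights, and per-group accuracies below are invented for illustration): a model can score well on the training mixture while failing badly on a minority group that the test distribution concentrates on.

```python
# Toy subpopulation-shift example; all numbers are hypothetical.
train_mixture = {"majority": 0.9, "minority": 0.1}      # group weights in training data
group_accuracy = {"majority": 0.95, "minority": 0.60}   # a model's per-group accuracy

# Average accuracy under the training mixture looks strong...
avg_train_acc = sum(w * group_accuracy[g] for g, w in train_mixture.items())

# ...but if the test distribution is the worst subpopulation, accuracy collapses.
worst_group_acc = min(group_accuracy.values())

print(f"average accuracy on training mixture: {avg_train_acc:.3f}")  # 0.915
print(f"worst-group accuracy: {worst_group_acc:.2f}")                # 0.60
```

The gap between the two numbers is exactly what worst-group (distributionally robust) objectives target.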
Our experiments show that TD provides fine-grained information for a variety of downstream tasks, and that models trained from different initializations learn features that differ in terms of their downstream-task predictions.
Adversarial training is one of the most popular ways to learn robust models, but it is usually attack-dependent and time-consuming.