no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong
For these functions, we derive cost-sensitive comp-sum and constrained surrogate losses, establishing their $H$-consistency bounds and Bayes-consistency.
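As a hedged illustration of the comp-sum family referenced here (the function name and signature are illustrative, not the paper's), a comp-sum surrogate applies an outer function $\Phi$ to a sum of exponentiated score differences; choosing $\Phi(u) = \log(1+u)$ recovers the multinomial logistic (cross-entropy) loss:

```python
import math

def comp_sum_loss(scores, y, phi=lambda u: math.log(1.0 + u)):
    """Sketch of a comp-sum surrogate: phi composed with a sum of
    exponentiated score differences over the incorrect labels.
    phi(u) = log(1 + u) gives the multinomial logistic loss."""
    u = sum(
        math.exp(scores[yp] - scores[y])
        for yp in range(len(scores))
        if yp != y
    )
    return phi(u)
```

A cost-sensitive variant, as studied in this line of work, would weight each term of the sum by a label-dependent cost; the sketch above shows only the uniform-cost case.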
no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong
Next, we prove a series of novel $H$-consistency bounds for surrogate loss functions of the squared loss, under the assumption of a symmetric distribution and a bounded hypothesis set.
no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong
In this work, we introduce a novel framework of regression with deferral, which involves deferring the prediction to multiple experts.
no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong
We present a study of surrogate losses and algorithms for the general problem of learning to defer with multiple experts.
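A minimal sketch of the target loss in this setting (signatures and the optional consultation cost are illustrative assumptions, not the paper's exact formulation): the learner either predicts itself, incurring the zero-one loss, or defers to one of several experts, incurring that expert's error plus any cost of consulting them.

```python
def deferral_loss(y_true, decision, y_pred, expert_preds, expert_costs=None):
    """Sketch of a multi-expert deferral loss: zero-one loss if the model
    predicts; expert j's zero-one loss (plus an optional consultation
    cost) if the model defers to expert j."""
    if decision == "predict":
        return 0.0 if y_pred == y_true else 1.0
    j = decision  # index of the expert deferred to
    base = 0.0 if expert_preds[j] == y_true else 1.0
    cost = expert_costs[j] if expert_costs is not None else 0.0
    return base + cost
```

Since this target loss is discontinuous in the learner's scores, it is optimized in practice through surrogate losses such as those studied in the paper.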
no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong
We introduce new families of surrogate losses for the abstention loss function, which include the state-of-the-art surrogate losses in the single-stage setting and a novel family of loss functions in the two-stage setting.
no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong
We study the key framework of learning with abstention in the multi-class classification setting.
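The abstention loss underlying these entries can be sketched as follows (a hedged illustration; the function name and default cost are assumptions, not taken from the papers): abstaining incurs a fixed cost $c$, while predicting incurs the usual zero-one loss.

```python
def abstention_loss(y_true, y_pred, abstain, c=0.3):
    """Sketch of the target abstention loss: cost c when the learner
    abstains, zero-one loss on the prediction otherwise."""
    if abstain:
        return c
    return 0.0 if y_pred == y_true else 1.0
```

For this loss to incentivize abstention only on genuinely hard inputs, $c$ is typically taken in $(0, 1/2)$ in the binary case.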
no code implementations • 5 Jul 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong
We introduce a novel framework of ranking with abstention, where the learner can abstain from making a prediction at some limited cost $c$.

no code implementations • 15 Jun 2023 • Raef Bassily, Corinna Cortes, Anqi Mao, Mehryar Mohri
This is the modern problem of supervised domain adaptation from a public source to a private target domain.
no code implementations • 14 Apr 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong
These are non-asymptotic guarantees that upper bound the zero-one loss estimation error in terms of the estimation error of a surrogate loss, for the specific hypothesis set $H$ used.
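As a hedged illustration of the general shape such guarantees take in this line of work (notation assumed, not quoted from the paper), an $H$-consistency bound relates the excess zero-one loss of any $h \in H$ to its excess surrogate loss via a non-decreasing function $\Gamma$:

```latex
% General shape of an H-consistency bound (assumed notation):
% the excess zero-one loss of h over the hypothesis set H is
% controlled by Gamma applied to the excess surrogate loss.
\mathcal{E}_{\ell_{0\text{-}1}}(h) - \mathcal{E}^{*}_{\ell_{0\text{-}1}}(H)
  + \mathcal{M}_{\ell_{0\text{-}1}}(H)
\;\le\;
\Gamma\!\bigl(\mathcal{E}_{\ell}(h) - \mathcal{E}^{*}_{\ell}(H)
  + \mathcal{M}_{\ell}(H)\bigr)
```

Here $\mathcal{E}^{*}_{\ell}(H)$ denotes the best-in-class expected loss over $H$ and the $\mathcal{M}$ terms are minimizability gaps, which vanish for sufficiently rich hypothesis sets, recovering classical excess-risk (Bayes-consistency) bounds as a special case.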
no code implementations • 16 May 2022 • Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong
We also show that previous excess error bounds can be recovered as special cases of our general results.
no code implementations • 4 May 2021 • Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong
Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general $H$-consistency results covering common hypothesis sets.
no code implementations • NeurIPS 2021 • Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong
We then give a characterization of $H$-calibration and prove that some surrogate losses are indeed $H$-calibrated for the adversarial loss, with these hypothesis sets.
no code implementations • 7 May 2019 • Yingzhou Li, Jianfeng Lu, Anqi Mao
A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models.