Search Results for author: Yutao Zhong

Found 11 papers, 0 papers with code

Regression with Multi-Expert Deferral

no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong

In this work, we introduce a novel framework of regression with deferral, which involves deferring the prediction to multiple experts.

regression

$H$-Consistency Guarantees for Regression

no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong

Next, we prove a series of novel $H$-consistency bounds for surrogate loss functions of the squared loss, under the assumption of a symmetric distribution and a bounded hypothesis set.

regression

Top-$k$ Classification and Cardinality-Aware Prediction

no code implementations • 28 Mar 2024 • Anqi Mao, Mehryar Mohri, Yutao Zhong

For these functions, we derive cost-sensitive comp-sum and constrained surrogate losses, establishing their $H$-consistency bounds and Bayes-consistency.
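The top-$k$ zero-one loss underlying this line of work charges an error only when the true label is absent from the $k$ highest-scoring classes. A minimal sketch, with an illustrative function name not taken from the paper:

```python
import numpy as np

def top_k_loss(scores, y, k):
    """Top-k zero-one loss (sketch): 1 if the true label y is not
    among the k highest-scoring labels, else 0."""
    topk = np.argsort(scores)[::-1][:k]
    return 0 if y in topk else 1

scores = np.array([0.1, 0.5, 0.2, 0.2])
print(top_k_loss(scores, y=1, k=1))  # 0: label 1 is the argmax
print(top_k_loss(scores, y=0, k=2))  # 1: label 0 is outside the top 2
```

The cost-sensitive surrogates studied in the paper are continuous upper bounds designed to be consistent with this discrete target loss.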

Classification, Multi-class Classification

Principled Approaches for Learning to Defer with Multiple Experts

no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong

We present a study of surrogate losses and algorithms for the general problem of learning to defer with multiple experts.
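In this setting the system either predicts itself or routes the input to one of several experts, each with its own cost. A sketch of one natural target loss, where the deferral cost is the chosen expert's zero-one error plus a fixed consultation cost — the names and exact cost form here are illustrative, not the paper's notation:

```python
def deferral_loss(pred, defer_to, expert_preds, expert_costs, y):
    """Learning-to-defer loss (sketch): predict yourself and pay the
    zero-one error, or defer to expert j and pay that expert's
    zero-one error plus a fixed consultation cost."""
    if defer_to is None:  # system predicts itself
        return float(pred != y)
    return float(expert_preds[defer_to] != y) + expert_costs[defer_to]

# Defer to expert 0, who is correct, at consultation cost 0.1.
print(deferral_loss(2, 0, expert_preds=[1, 3], expert_costs=[0.1, 0.3], y=1))  # 0.1
```

The surrogate losses studied in the paper replace this discrete objective with differentiable upper bounds amenable to gradient-based training.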

Predictor-Rejector Multi-Class Abstention: Theoretical Analysis and Algorithms

no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong

We study the key framework of learning with abstention in the multi-class classification setting.

Multi-class Classification

Theoretically Grounded Loss Functions and Algorithms for Score-Based Multi-Class Abstention

no code implementations • 23 Oct 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong

We introduce new families of surrogate losses for the abstention loss function, which include the state-of-the-art surrogate losses in the single-stage setting and a novel family of loss functions in the two-stage setting.

Multi-class Classification

Ranking with Abstention

no code implementations • 5 Jul 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong

We introduce a novel framework of ranking with abstention, where the learner can abstain from making a prediction at some limited cost $c$.
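A minimal sketch of such an abstention loss, assuming a pairwise formulation in which the learner either orders a pair of items, paying the zero-one misranking loss, or abstains at the fixed cost $c$ (function name and signature are illustrative):

```python
def ranking_abstention_loss(pred_order, true_order, c):
    """Ranking-with-abstention loss (sketch): predict an order for a
    pair and pay the zero-one misranking loss, or abstain
    (pred_order is None) and pay the fixed abstention cost c."""
    if pred_order is None:  # learner abstains on this pair
        return c
    return float(pred_order != true_order)

print(ranking_abstention_loss(None, 1, c=0.1))  # 0.1: abstained
print(ranking_abstention_loss(1, 1, c=0.1))     # 0.0: correct order
```

With $c < 1$, abstaining is preferable on pairs where the learner's conditional probability of misranking exceeds $c$.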

Cross-Entropy Loss Functions: Theoretical Analysis and Applications

no code implementations • 14 Apr 2023 • Anqi Mao, Mehryar Mohri, Yutao Zhong

These are non-asymptotic guarantees that upper bound the zero-one loss estimation error in terms of the estimation error of a surrogate loss, for the specific hypothesis set $H$ used.
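Schematically, such $H$-consistency bounds relate the two estimation errors through a non-decreasing function $f$; the notation below is a paraphrase, and the papers' full statements also include minimizability-gap terms for the hypothesis set $H$:

$$
\mathcal{E}_{\ell_{0\text{-}1}}(h) - \mathcal{E}^{*}_{\ell_{0\text{-}1}}(H) \;\le\; f\big(\mathcal{E}_{\ell}(h) - \mathcal{E}^{*}_{\ell}(H)\big) \quad \text{for all } h \in H,
$$

where $\ell$ is the surrogate loss and $\mathcal{E}^{*}$ denotes the best-in-class error. Unlike Bayes-consistency, the guarantee is non-asymptotic and specific to the hypothesis set $H$ actually used.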

Adversarial Robustness

$\mathscr{H}$-Consistency Estimation Error of Surrogate Loss Minimizers

no code implementations • 16 May 2022 • Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong

We also show that previous excess error bounds can be recovered as special cases of our general results.

A Finer Calibration Analysis for Adversarial Robustness

no code implementations • 4 May 2021 • Pranjal Awasthi, Anqi Mao, Mehryar Mohri, Yutao Zhong

Moreover, our calibration results, combined with the previous study of consistency by Awasthi et al. (2021), also lead to more general $H$-consistency results covering common hypothesis sets.

Adversarial Robustness, BIG-bench Machine Learning, +1

Calibration and Consistency of Adversarial Surrogate Losses

no code implementations • NeurIPS 2021 • Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong

We then give a characterization of H-calibration and prove that some surrogate losses are indeed H-calibrated for the adversarial loss, with these hypothesis sets.

Adversarial Robustness
