1 code implementation • 11 Dec 2023 • Bingzheng Wang, Guoqiang Wu, Teng Pang, Yan Zhang, Yilong Yin
To address this issue, we propose a method named diffusion adversarial imitation learning (DiffAIL), which introduces the diffusion model into the adversarial imitation learning (AIL) framework.
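As a minimal sketch of the idea, a diffusion model's denoising loss can serve as the discriminator's scoring function for state-action pairs, with expert-like pairs attaining low loss and hence high reward. Everything below (module names, the linear noise schedule, the reward transform) is an illustrative assumption, not the paper's exact implementation:

```python
import torch
import torch.nn as nn

class DiffusionDiscriminator(nn.Module):
    """Hypothetical sketch: score state-action pairs by the denoising loss
    of a small diffusion model over (state, action) vectors."""

    def __init__(self, state_dim, action_dim, hidden=256, timesteps=100):
        super().__init__()
        self.timesteps = timesteps
        # Epsilon-predictor conditioned on the noisy pair and the timestep.
        self.eps_net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim + action_dim),
        )
        # Linear noise schedule (an assumption; schedules vary in practice).
        betas = torch.linspace(1e-4, 2e-2, timesteps)
        self.register_buffer("alphas_bar", torch.cumprod(1.0 - betas, dim=0))

    def diffusion_loss(self, x):
        # Per-sample denoising loss; low for expert-like (state, action) pairs.
        t = torch.randint(0, self.timesteps, (x.size(0),), device=x.device)
        a_bar = self.alphas_bar[t].unsqueeze(1)
        noise = torch.randn_like(x)
        x_t = a_bar.sqrt() * x + (1 - a_bar).sqrt() * noise
        t_feat = t.float().unsqueeze(1) / self.timesteps
        eps_pred = self.eps_net(torch.cat([x_t, t_feat], dim=1))
        return ((eps_pred - noise) ** 2).mean(dim=1)

    def reward(self, state, action):
        # One common AIL-style transform of a score in (0, 1] into a reward.
        loss = self.diffusion_loss(torch.cat([state, action], dim=1))
        d = torch.exp(-loss)  # expert-like pairs score close to 1
        return -torch.log(1.0 - d + 1e-8)
```

In training, the discriminator's diffusion loss would be driven down on expert pairs and contrasted against policy pairs, with the policy then optimized against the induced reward.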
1 code implementation • 9 May 2023 • Guoqiang Wu, Chongxuan Li, Yilong Yin
We theoretically identify a critical, dataset-dependent factor affecting the generalization bounds: the label-wise class imbalance.
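For intuition only, one natural way to quantify label-wise class imbalance (an illustrative formalization; the paper's exact quantity may differ) is the per-label positive rate over the $n$ training examples,
\[
\tau_j \;=\; \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\!\left[y_{ij} = 1\right], \qquad j = 1, \dots, c,
\]
so labels whose $\tau_j$ is far from balanced are the ones one would expect to contribute the loosest per-label terms to a generalization bound.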
1 code implementation • 5 Feb 2023 • Chenyu Zheng, Guoqiang Wu, Fan Bao, Yue Cao, Chongxuan Li, Jun Zhu
Theoretically, the paper considers the surrogate loss instead of the zero-one loss in its analysis, and generalizes the classical results from the binary case to the multiclass one.
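A standard instance of this passage from binary to multiclass (generic notation, not specific to this paper): for a scorer $f = (f_1, \dots, f_K)$, the multiclass zero-one loss and its logistic (cross-entropy) surrogate are
\[
\ell_{0\text{-}1}(f; x, y) = \mathbb{1}\!\Big[\arg\max_{k} f_k(x) \neq y\Big],
\qquad
\ell_{\mathrm{CE}}(f; x, y) = -\log \frac{e^{f_y(x)}}{\sum_{k=1}^{K} e^{f_k(x)}},
\]
and it is the surrogate, not the zero-one loss, that practical training actually minimizes, which is why analyzing it yields guarantees for real pipelines.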
no code implementations • 30 Apr 2022 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
In this way, we relate DE to Bayesian inference, so that it enjoys reliable Bayesian uncertainty estimates.
no code implementations • 29 Sep 2021 • Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu
Deep Ensemble (DE) is a flexible, feasible, and effective alternative to Bayesian neural networks (BNNs) for uncertainty estimation in deep learning.
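For background, a minimal deep ensemble trains several independently initialized copies of the same network and averages their predictive distributions; the spread across members supplies the uncertainty signal. The sketch below is this generic construction (illustrative names; it is not the paper's specific DE-to-Bayesian-inference treatment):

```python
import torch
import torch.nn as nn

def make_member(in_dim, n_classes):
    # Independent random initialization is the main source of diversity.
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                         nn.Linear(128, n_classes))

class DeepEnsemble:
    """Generic deep-ensemble predictor for uncertainty estimation."""

    def __init__(self, in_dim, n_classes, n_members=5):
        # Each member would be trained independently on the full dataset.
        self.members = [make_member(in_dim, n_classes) for _ in range(n_members)]

    @torch.no_grad()
    def predict(self, x):
        # Average the members' predictive distributions.
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.members])
        mean = probs.mean(dim=0)
        # Predictive entropy of the mean as a simple uncertainty proxy.
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
        return mean, entropy
```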
1 code implementation • NeurIPS 2021 • Shuyu Cheng, Guoqiang Wu, Jun Zhu
Finally, our theoretical results are confirmed by experiments on several numerical benchmarks as well as adversarial attacks.
1 code implementation • NeurIPS 2021 • Fan Bao, Guoqiang Wu, Chongxuan Li, Jun Zhu, Bo Zhang
Our results can explain some mysterious behaviours of bilevel programming in practice, for instance, overfitting to the validation set.
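For reference, the standard bilevel formulation of hyperparameter optimization (generic notation) is
\[
\min_{\lambda} \; \mathcal{L}_{\mathrm{val}}\big(\theta^{\star}(\lambda)\big)
\quad \text{s.t.} \quad
\theta^{\star}(\lambda) \in \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}(\theta, \lambda),
\]
and since the outer objective is evaluated on a finite validation set, the outer variable $\lambda$ can itself overfit that set, which is precisely the behaviour a generalization analysis of the bilevel procedure can account for.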
no code implementations • NeurIPS 2021 • Guoqiang Wu, Chongxuan Li, Kun Xu, Jun Zhu
Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss enjoy a bound of $O(\sqrt{c})$, as shown in prior work.
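For reference, the two loss families typically take the following generic forms (the paper's exact definitions and weightings may differ). For a label set $Y \subseteq \{1, \dots, c\}$ and a convex base loss $\ell$,
\[
L_{\mathrm{u}}(f; x, Y) = \sum_{j=1}^{c} \ell\big((2\,\mathbb{1}[j \in Y] - 1)\, f_j(x)\big),
\qquad
L_{\mathrm{p}}(f; x, Y) = \sum_{p \in Y} \sum_{q \notin Y} \ell\big(f_p(x) - f_q(x)\big),
\]
i.e. the univariate loss scores each label on its own, while the pairwise loss scores every (relevant, irrelevant) label pair.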
1 code implementation • NeurIPS 2020 • Guoqiang Wu, Jun Zhu
On the other hand, directly optimizing subset accuracy (SA) with its surrogate loss yields learning guarantees of $O(\sqrt{c})$ for both the Hamming loss (HL) and SA measures.
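For completeness, the two measures are standardly defined, for a predictor $h(x) \in \{0, 1\}^{c}$ and ground truth $y \in \{0, 1\}^{c}$, as
\[
\mathrm{HL}(h; x, y) = \frac{1}{c} \sum_{j=1}^{c} \mathbb{1}\big[h_j(x) \neq y_j\big],
\qquad
\mathrm{SA}(h; x, y) = \mathbb{1}\big[h(x) = y\big].
\]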
1 code implementation • 5 Nov 2019 • Guoqiang Wu, Ruobing Zheng, Yingjie Tian, Dalian Liu
RBRL inherits the ranking-loss minimization advantages of Rank-SVM, and thus overcomes two disadvantages of binary relevance (BR): its susceptibility to the class-imbalance issue and its neglect of label correlations.
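A sketch of an RBRL-style objective combining these ingredients: a binary-relevance hinge term per label, a Rank-SVM-style pairwise ranking term, and a low-rank (trace-norm) penalty on the weight matrix as a proxy for label correlations. The hinge forms, weights, and penalty below are our assumptions about the general recipe, not the paper's exact objective:

```python
import torch

def rbrl_style_loss(scores, Y, W, lam_rank=1.0, lam_lowrank=0.1):
    """Illustrative RBRL-flavoured objective.

    scores: (n, c) real-valued label scores; Y: (n, c) float in {0, 1};
    W: the linear model's weight matrix.
    """
    signed = 2 * Y - 1
    br_term = torch.clamp(1 - signed * scores, min=0).mean()  # BR hinge loss

    # Pairwise ranking hinge: each relevant label should outscore each
    # irrelevant one by a margin of 1 (Rank-SVM style).
    pair_mask = Y.unsqueeze(2) * (1 - Y).unsqueeze(1)        # (n, c, c)
    margins = 1 - (scores.unsqueeze(2) - scores.unsqueeze(1))
    rank_term = (pair_mask * torch.clamp(margins, min=0)).sum() \
        / pair_mask.sum().clamp_min(1)

    # Trace norm encourages a low-rank W, tying the per-label predictors
    # together so that label correlations can be exploited.
    lowrank_term = torch.linalg.matrix_norm(W, ord="nuc")
    return br_term + lam_rank * rank_term + lam_lowrank * lowrank_term
```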