1 code implementation • 21 Aug 2023 • Lue Tao, Yu-Xuan Huang, Wang-Zhou Dai, Yuan Jiang
Neuro-symbolic hybrid systems are a promising way to integrate machine learning and symbolic reasoning: perception models are enhanced with information inferred from a symbolic knowledge base through logical reasoning.
3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training with a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbations.
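To make the notion of a defense budget concrete, here is a minimal adversarial-training sketch for a linear classifier, where training points are replaced by FGSM-style $\epsilon$-bounded perturbations before each update. Every name and hyperparameter below is illustrative and not taken from the paper:

```python
import numpy as np

def fgsm_perturb(x, y, w, eps):
    """L-inf bounded adversarial example for a linear classifier
    sign(w @ x): step against the label's margin (illustrative)."""
    # d(margin)/dx = y * w, so the worst-case L-inf step is -eps * y * sign(w).
    return x - eps * y * np.sign(w)

def adversarial_train(X, y, eps, lr=0.1, epochs=100):
    """Toy adversarial training: fit w on eps-perturbed inputs."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm_perturb(xi, yi, w, eps)
            if yi * (w @ x_adv) < 1.0:   # hinge loss is active
                w += lr * yi * x_adv     # subgradient step on the adversarial point
    return w
```

The defense budget `eps` caps how far training inputs may be shifted; the paper's point is that choosing it conventionally can be insufficient when the training data itself has been perturbed.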
4 code implementations • NeurIPS 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Bo An
Learning with noisy labels is a practically challenging problem in weakly supervised learning.
no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang
Recently, much research has been devoted to improving model robustness by training with noise perturbations.
2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Delusive attacks aim to substantially degrade the test accuracy of a learning model by slightly perturbing the features of correctly labeled training examples.
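One simple way such an attack can work is by injecting a small, label-correlated "shortcut" into the training features, which a learner latches onto at the expense of the true signal. The sketch below (all knobs and names are made up for illustration, not taken from the paper) shows the effect on a least-squares linear classifier:

```python
import numpy as np

def delusive_perturb(X, y, eps=0.5, dim=1):
    """Illustrative delusive perturbation: add a bounded (<= eps),
    label-correlated shift to one feature of correctly labeled
    training points, creating a spurious shortcut."""
    X_pert = X.copy()
    X_pert[:, dim] += eps * y
    return X_pert

def fit_and_score(X_train, y_train, X_test, y_test):
    """Least-squares linear classifier; returns clean test accuracy."""
    w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return float((np.sign(X_test @ w) == y_test).mean())
```

Because the shortcut feature is absent at test time, a model trained on the perturbed data generalizes worse than one trained on clean data, even though every training label is correct.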
1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen
In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
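For intuition, a hypocritical perturbation is the mirror image of an adversarial one: it steps *with* the label's margin rather than against it, so a misclassified input appears correctly classified. A minimal sketch for a linear classifier (names and values are illustrative, not from the paper):

```python
import numpy as np

def hypocritical_perturb(x, y, w, eps):
    """'False friend' perturbation for a linear classifier sign(w @ x):
    an L-inf bounded step that INCREASES the margin y * (w @ x),
    the exact opposite of an FGSM adversarial step."""
    return x + eps * y * np.sign(w)
```

The danger is that such inputs make a deployed model look more accurate than it is, masking its true error rate.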
no code implementations • 28 Sep 2020 • Lue Tao, Songcan Chen
In this paper, we formalize the hypocritical risk for the first time and propose a defense method specialized for hypocritical examples by minimizing the tradeoff between natural risk and an upper bound of hypocritical risk.
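Schematically, such an objective can be written as $\min_f \; R_{\mathrm{nat}}(f) + \lambda\,\overline{R}_{\mathrm{hyp}}(f)$, where $R_{\mathrm{nat}}$ is the natural risk, $\overline{R}_{\mathrm{hyp}}$ is an upper bound on the hypocritical risk, and $\lambda$ controls the tradeoff (the notation here is illustrative, not the paper's).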
1 code implementation • ICML 2020 • Feihu Huang, Lue Tao, Songcan Chen
To relax the large batches required by Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe method (Acc-SZOFW*) based on the variance-reduction technique of STORM, which still attains a function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem without relying on large batches.
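For readers unfamiliar with the setting, here is a minimal zeroth-order Frank-Wolfe sketch: gradients are estimated purely from function queries, and each step solves a linear minimization oracle over the constraint set (an L1 ball here). This is a didactic simplification only; it omits the STORM variance-reduced estimator and the acceleration that give Acc-SZOFW* its stated query complexity, and all names and parameters are illustrative:

```python
import numpy as np

def zo_grad(f, x, n_dirs=20, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate with Gaussian directions."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_dirs

def zo_frank_wolfe(f, x0, radius=1.0, iters=500, seed=0):
    """Projection-free minimization of f over the L1 ball of given radius,
    using only function evaluations (no analytic gradients)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(1, iters + 1):
        g = zo_grad(f, x, rng=rng)
        # Linear minimization oracle for the L1 ball: a signed vertex.
        i = int(np.argmax(np.abs(g)))
        v = np.zeros_like(x)
        v[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2)            # classic Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * v
    return x
```

Each iteration costs `2 * n_dirs` function queries; the paper's contribution is reducing the total number of such queries needed to reach an $\epsilon$-accurate solution.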
no code implementations • 12 Aug 2019 • Chuanxing Geng, Lue Tao, Songcan Chen
For G-OSR, on the other hand, introducing such semantic information about known classes not only improves recognition performance but also endows OSR with the cognitive ability to handle unknown classes.