Search Results for author: Lue Tao

Found 11 papers, 7 papers with code

Deciphering Raw Data in Neuro-Symbolic Learning with Provable Guarantees

1 code implementation • 21 Aug 2023 • Lue Tao, Yu-Xuan Huang, Wang-Zhou Dai, Yuan Jiang

Neuro-symbolic hybrid systems are promising for integrating machine learning and symbolic reasoning: perception models are enhanced with information inferred from a symbolic knowledge base through logical reasoning.

Logical Reasoning
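The loop described above, where symbolic knowledge corrects uncertain perception outputs, can be sketched with a toy stand-in for abductive reasoning (this is an illustrative example under the rule "a + b = c", not the paper's actual algorithm):

```python
import itertools

# Perception scores over digit hypotheses for three symbols in "a + b = c"
# (hypothetical values; a real perception model would produce these).
scores = [
    {3: 0.6, 5: 0.4},        # a: unsure between 3 and 5
    {2: 0.9, 7: 0.1},        # b: confident 2
    {7: 0.7, 5: 0.3},        # c: leans 7
]

def abduce(scores):
    """Pick the jointly most likely labels consistent with a + b = c."""
    best, best_p = None, -1.0
    for a, b, c in itertools.product(*[s.keys() for s in scores]):
        if a + b == c:                      # symbolic constraint
            p = scores[0][a] * scores[1][b] * scores[2][c]
            if p > best_p:
                best, best_p = (a, b, c), p
    return best

print(abduce(scores))  # (5, 2, 7): the rule overrides the shaky guess a=3
```

The symbolic constraint rules out the locally most likely label a=3 (since 3 + 2 ≠ 7), which is the sense in which the knowledge base "deciphers" raw perception outputs.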

Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets

3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
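In the spirit of the title's open-sampling idea, out-of-distribution samples can be given pseudo-labels drawn from a distribution that favors minority classes, flattening the long tail; the complementary-distribution detail below is an illustrative assumption, not necessarily the paper's exact scheme:

```python
import numpy as np

counts = np.array([900, 80, 20])            # long-tailed class counts (toy data)
priors = counts / counts.sum()
comp = (1 - priors) / (1 - priors).sum()    # complementary dist., favors minorities
n_ood = 1000
rng = np.random.default_rng(0)
# Assign each OOD sample a label drawn from the complementary distribution
ood_labels = rng.choice(len(counts), size=n_ood, p=comp)
new_counts = counts + np.bincount(ood_labels, minlength=len(counts))
print(priors.round(2))      # [0.9  0.08 0.02]
print(new_counts)           # far more balanced than [900, 80, 20]
```

The imbalance ratio drops from 45:1 to roughly 2:1, which is the re-balancing effect the title refers to.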

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
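The $\epsilon$-bounded perturbations at the heart of this setting can be illustrated with a minimal PGD-style attack on a linear classifier (a hypothetical toy setup; the paper's analysis concerns adversarial training of deep models):

```python
import numpy as np

def linf_pgd(x, y, w, eps, steps=10, lr=0.05):
    """Craft an L-infinity perturbation delta with ||delta||_inf <= eps that
    maximizes the loss of a linear classifier sign(w @ x). Toy sketch only."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Gradient of the loss -y * (w @ (x + delta)) w.r.t. delta is -y * w.
        grad = -y * w
        delta += lr * np.sign(grad)          # ascent step on the loss
        delta = np.clip(delta, -eps, eps)    # project back into the eps-ball
    return delta

x = np.array([1.0, -0.5, 0.25])
w = np.array([0.8, -0.3, 0.6])
y = 1.0
delta = linf_pgd(x, y, w, eps=0.1)
print(np.max(np.abs(delta)) <= 0.1)         # True: within the defense budget
print(y * w @ (x + delta) < y * w @ x)      # True: the margin is reduced
```

The same budget $\epsilon$ bounds both the defender's training-time perturbations and, in the threat model above, the attacker's reinforcement of non-robust features.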

Open-sampling: Re-balancing Long-tailed Datasets with Out-of-Distribution Data

no code implementations • 29 Sep 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries

no code implementations • 27 Mar 2021 • Kun-Peng Ning, Lue Tao, Songcan Chen, Sheng-Jun Huang

Recently, much research has been devoted to improving model robustness by training with noise perturbations.

Active Learning

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive attacks aim to substantially degrade the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
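A tiny synthetic illustration of such an attack, assuming a deliberately exaggerated perturbation budget and a toy "pick the most correlated feature" learner (neither is the paper's actual setting): the attacker perturbs training features, without touching labels, so that a noise feature looks predictive and the learned model fails on clean test data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    y = rng.choice([-1.0, 1.0], size=n)
    x_true = y + rng.normal(size=n)     # informative but noisy feature
    x_noise = rng.normal(size=n)        # pure noise, uncorrelated with y
    return np.stack([x_true, x_noise], 1), y

def fit_best_feature(X, y):
    # Toy learner: pick the single feature most correlated with the label.
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return int(np.argmax(corr))

def accuracy(X, y, j):
    return np.mean(np.sign(X[:, j]) == y)

Xtr, ytr = make_data(2000)
Xte, yte = make_data(2000)

# Delusive attack: a bounded, label-dependent shift on the noise feature
# of the *training* set makes it look predictive; labels stay correct.
eps = 2.0                               # exaggerated budget for clarity
X_pois = Xtr.copy()
X_pois[:, 1] += eps * ytr

clean_feat = fit_best_feature(Xtr, ytr)     # picks the true feature (0)
pois_feat = fit_best_feature(X_pois, ytr)   # fooled into the noise feature (1)
print(accuracy(Xte, yte, clean_feat))       # high, ~0.84
print(accuracy(Xte, yte, pois_feat))        # ~0.5, chance level
```

Adversarial training over the perturbed set, the defense the title recommends, forces the learner back onto features that survive bounded perturbation.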

With False Friends Like These, Who Can Notice Mistakes?

1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
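A hypocritical perturbation is the mirror image of an adversarial one: it *decreases* the loss on a misclassified input so the error goes unnoticed. A minimal sketch on a linear classifier (hypothetical setting; the paper studies deep models):

```python
import numpy as np

def hypocritical_perturb(x, y, w, eps):
    """A 'false friend': an eps-bounded L-inf perturbation that INCREASES
    the margin of a linear classifier sign(w @ x), masking a mistake."""
    # Gradient of the margin y * (w @ x) w.r.t. x is y * w; step along it.
    return eps * np.sign(y * w)

w = np.array([0.5, -1.0, 0.25])
x = np.array([-0.3, 0.1, 0.2])      # true label +1, but the margin is negative
y = 1.0
print(np.sign(w @ x))               # -1.0: originally misclassified
x_hyp = x + hypocritical_perturb(x, y, w, eps=0.3)
print(np.sign(w @ x_hyp))           # 1.0: the error is hidden by the false friend
```

Compared with the PGD sketch earlier on this page, only the gradient direction flips: descent on the loss instead of ascent.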

With False Friends Like These, Who Can Have Self-Knowledge?

no code implementations • 28 Sep 2020 • Lue Tao, Songcan Chen

In this paper, we formalize the hypocritical risk for the first time and propose a defense method specialized for hypocritical examples by minimizing the tradeoff between natural risk and an upper bound of hypocritical risk.

Accelerated Stochastic Gradient-free and Projection-free Methods

1 code implementation • ICML 2020 • Feihu Huang, Lue Tao, Songcan Chen

To relax the large batches required by Acc-SZOFW, we further propose a novel accelerated stochastic zeroth-order Frank-Wolfe method (Acc-SZOFW*) based on a new variance-reduction technique, STORM, which still reaches the function query complexity of $O(d\epsilon^{-3})$ in the stochastic problem without relying on any large batches.

Adversarial Attack
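The two ingredients named in the title, gradient-free (zeroth-order) estimation and projection-free (Frank-Wolfe) updates, can be sketched in a plain, unaccelerated form; this is an illustration of the ingredients under a toy quadratic objective, not Acc-SZOFW itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def zo_gradient(f, x, mu=1e-4, num_dirs=25):
    """Two-point zeroth-order gradient estimator: uses only function
    queries, no analytic gradients (the 'gradient-free' part)."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.normal(size=d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / num_dirs

def fw_lmo_l1(g, radius=1.0):
    """Linear minimization oracle over the L1 ball: argmin <g, v> s.t.
    ||v||_1 <= radius. Returns a signed vertex; no projection is ever
    needed (the 'projection-free' part)."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

# Minimize f(x) = ||x - b||^2 over the unit L1 ball with plain ZO Frank-Wolfe.
b = np.array([2.0, 0.0, 0.0])       # optimum is the vertex [1, 0, 0]
f = lambda x: np.sum((x - b) ** 2)
x = np.zeros(3)
for t in range(200):
    g = zo_gradient(f, x)
    v = fw_lmo_l1(g)
    gamma = 2.0 / (t + 2)           # classic Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * v
print(np.round(x, 2))               # close to [1, 0, 0]
```

Acc-SZOFW* replaces the naive estimator above with STORM-style variance reduction, which is what removes the large-batch requirement while keeping the $O(d\epsilon^{-3})$ query complexity.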

Visual and Semantic Prototypes-Jointly Guided CNN for Generalized Zero-shot Learning

no code implementations • 12 Aug 2019 • Chuanxing Geng, Lue Tao, Songcan Chen

On the other hand, for G-OSR, introducing such semantic information about known classes not only improves recognition performance but also endows OSR with the cognitive ability to handle unknown classes.

Generalized Zero-Shot Learning · Open Set Learning
