Search Results for author: Tao Bai

Found 17 papers, 1 paper with code

Towards Adversarially Robust Continual Learning

no code implementations31 Mar 2023 Tao Bai, Chen Chen, Lingjuan Lyu, Jun Zhao, Bihan Wen

Recent studies show that models trained by continual learning can achieve performance comparable to standard supervised learning, and the learning flexibility of continual learning models enables their wide application in the real world.

Adversarial Robustness · Continual Learning

AI Security for Geoscience and Remote Sensing: Challenges and Future Trends

no code implementations19 Dec 2022 Yonghao Xu, Tao Bai, Weikang Yu, Shizhen Chang, Peter M. Atkinson, Pedram Ghamisi

Recent advances in artificial intelligence (AI) have significantly intensified research in the geoscience and remote sensing (RS) field.

Backdoor Attack · Denoising +7

Bayesian Evidential Learning for Few-Shot Classification

no code implementations19 Jul 2022 Xiongkun Linghu, Yan Bai, Yihang Lou, Shengsen Wu, Jinze Li, Jianzhong He, Tao Bai

Few-Shot Classification (FSC) aims to generalize from base classes to novel classes given very limited labeled samples, which is an important step on the path toward human-like machine learning.

Classification · Metric Learning +1
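
As a generic illustration of the few-shot classification setting described above (not the paper's Bayesian evidential method), a single N-way K-shot episode can be scored with a simple nearest-prototype rule. The sketch below uses NumPy only; all names and shapes are assumptions.

    import numpy as np

    def prototype_predict(support_feats, support_labels, query_feats):
        """Nearest-prototype classification for one N-way K-shot episode.

        support_feats:  (N*K, D) embeddings of the few labeled support samples
        support_labels: (N*K,)   integer class ids in [0, N)
        query_feats:    (Q, D)   embeddings of unlabeled query samples
        """
        classes = np.unique(support_labels)
        # One prototype per novel class: the mean of its K support embeddings.
        prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                               for c in classes])
        # Assign each query to its closest prototype (Euclidean distance).
        dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :], axis=-1)
        return classes[dists.argmin(axis=1)]

    # Toy 5-way 1-shot episode with random 64-d features.
    rng = np.random.default_rng(0)
    support = rng.normal(size=(5, 64))
    labels = np.arange(5)
    queries = support + 0.1 * rng.normal(size=(5, 64))
    print(prototype_predict(support, labels, queries))  # expected: [0 1 2 3 4]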

Memory-Based Label-Text Tuning for Few-Shot Class-Incremental Learning

no code implementations3 Jul 2022 Jinze Li, Yan Bai, Yihang Lou, Xiongkun Linghu, Jianzhong He, Shaoyun Xu, Tao Bai

The difficulty is that training on a sequence of limited data from new tasks leads to severe overfitting and causes the well-known catastrophic forgetting problem.

Few-Shot Class-Incremental Learning · Incremental Learning

Geometric Anchor Correspondence Mining With Uncertainty Modeling for Universal Domain Adaptation

no code implementations CVPR 2022 Liang Chen, Yihang Lou, Jianzhong He, Tao Bai, Minghua Deng

Therefore, in this paper, we propose a Geometric anchor-guided Adversarial and conTrastive learning framework with uncErtainty modeling called GATE to alleviate these issues.

Contrastive Learning · Universal Domain Adaptation

Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method

no code implementations19 Nov 2021 Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Bo Li, Alex Kot

Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces generation time significantly.

Adversarial Purification through Representation Disentanglement

no code implementations15 Oct 2021 Tao Bai, Jun Zhao, Lanqing Guo, Bihan Wen

Deep learning models are vulnerable to adversarial examples and make incomprehensible mistakes, which poses a threat to their real-world deployment.

Disentanglement

Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation

no code implementations7 Aug 2021 Shengsen Wu, Liang Chen, Yihang Lou, Yan Bai, Tao Bai, Minghua Deng, Lingyu Duan

Therefore, backward-compatible representation is proposed to enable "new" features to be compared with "old" features directly, which means the database remains usable while it contains a mix of "new" and "old" features.

Contrastive Learning
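
A minimal sketch of the backward-compatible retrieval scenario described above (hypothetical names, NumPy only): a query embedded by the new model is matched directly against gallery embeddings produced by the old model, so the gallery never has to be re-extracted.

    import numpy as np

    def retrieve(query_feat_new, gallery_feats_old):
        """Rank old-model gallery features against a new-model query feature by
        cosine similarity. This is only meaningful if the new encoder was trained
        to be compatible with the old feature space, e.g. with a compatibility or
        contrastive alignment loss."""
        q = query_feat_new / np.linalg.norm(query_feat_new)
        g = gallery_feats_old / np.linalg.norm(gallery_feats_old, axis=1, keepdims=True)
        scores = g @ q                  # cosine similarity, higher is better
        return np.argsort(-scores)      # gallery indices ranked best-first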

Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices

no code implementations29 Jun 2021 Tao Bai, Jinqi Luo, Jun Zhao

The patches are encouraged to be consistent with the background images through adversarial training while preserving strong attack ability.

Recent Advances in Adversarial Training for Adversarial Robustness

no code implementations2 Feb 2021 Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen, Qian Wang

Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples.

Adversarial Robustness
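
For context, one training step of PGD-based adversarial training (in the spirit of Madry et al., a common baseline covered by such surveys) is sketched below in PyTorch; the attack, step sizes, and number of steps vary across papers, so every value here is an assumption.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y,
                                  eps=8 / 255, alpha=2 / 255, steps=7):
        """Train on worst-case inputs found by projected gradient descent (PGD)
        inside an L-infinity ball of radius eps around the clean batch x."""
        # Inner maximization: craft adversarial examples for the current model.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()

        # Outer minimization: update the model on the adversarial batch.
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(x_adv), y)
        adv_loss.backward()
        optimizer.step()
        return adv_loss.item()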

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks

no code implementations3 Nov 2020 Tao Bai, Jinqi Luo, Jun Zhao

Adversarial examples are inevitable on the road to pervasive application of deep neural networks (DNNs).

Adversarial Robustness

Generating Adversarial yet Inconspicuous Patches with a Single Image

no code implementations21 Sep 2020 Jinqi Luo, Tao Bai, Jun Zhao

Through extensive experiments, our approach shows strong attack ability in both the white-box and black-box settings.

Feature Distillation With Guided Adversarial Contrastive Learning

no code implementations21 Sep 2020 Tao Bai, Jinnan Chen, Jun Zhao, Bihan Wen, Xudong Jiang, Alex Kot

In this paper, we propose a novel approach called Guided Adversarial Contrastive Distillation (GACD) to effectively transfer adversarial robustness from teacher to student with features.

Adversarial Robustness · Contrastive Learning

AI-GAN: Attack-Inspired Generation of Adversarial Examples

1 code implementation6 Feb 2020 Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Bo Li, Alex Kot

Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding imperceptible perturbations to inputs.
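
As background on what "imperceptible perturbations" look like, the classic single-step FGSM attack (Goodfellow et al.) is sketched below in PyTorch; AI-GAN itself replaces this per-input optimization with a trained generator, which is not reproduced here.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, eps=8 / 255):
        """Craft an adversarial example by taking one signed-gradient step of size
        eps, so the perturbation stays within a small L-infinity budget."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Adding eps * sign(gradient) is visually negligible for small eps but
        # often flips the model's prediction.
        x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
        return x_adv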

Reviewing and Improving the Gaussian Mechanism for Differential Privacy

no code implementations27 Nov 2019 Jun Zhao, Teng Wang, Tao Bai, Kwok-Yan Lam, Zhiying Xu, Shuyu Shi, Xuebin Ren, Xinyu Yang, Yang Liu, Han Yu

Although both classical Gaussian mechanisms [1, 2] assume $0 < \epsilon \leq 1$, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of $\epsilon$ and $\delta$ where the added noise amounts of [1, 2] do not achieve $(\epsilon,\delta)$-DP.
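
For reference, the classical Gaussian mechanism of [1, 2] calibrates the noise scale as sigma = Delta_2 * sqrt(2 ln(1.25 / delta)) / epsilon, and that calibration only guarantees (epsilon, delta)-DP in the small-epsilon regime those theorems assume. A minimal sketch, with names chosen here for illustration:

    import math
    import numpy as np

    def classical_gaussian_mechanism(value, l2_sensitivity, epsilon, delta):
        """Add N(0, sigma^2) noise with the classical calibration
        sigma = l2_sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
        The guarantee of [1, 2] assumes 0 < epsilon <= 1; reusing this sigma for
        larger epsilon can under-add noise, which is the misuse the paper reviews."""
        if not (0 < epsilon <= 1):
            raise ValueError("classical calibration assumes 0 < epsilon <= 1")
        sigma = l2_sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
        return value + np.random.normal(0.0, sigma, size=np.shape(value))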
