1 code implementation • 13 May 2025 • Ziyuan He, Zhiqing Guo, Liejun Wang, Gaobo Yang, Yunfeng Diao, Dan Ma
Deepfake technology poses increasing risks such as privacy invasion and identity theft.
no code implementations • 5 Feb 2025 • Xiaoshuai Hao, Yunfeng Diao, Mengchuan Wei, Yifan Yang, Peng Hao, Rong Yin, Hui Zhang, Weiming Li, Shu Zhao, Yu Liu
To address these issues, we propose MapFusion, a novel multi-modal Bird's-Eye View (BEV) feature fusion method for map construction.
1 code implementation • 21 Dec 2024 • Jingjing Hu, Dan Guo, Zhan Si, Deguang Liu, Yunfeng Diao, Jing Zhang, Jinxing Zhou, Meng Wang
Molecular representation learning plays a crucial role in various downstream tasks, such as molecular property prediction and drug design.
no code implementations • 10 Dec 2024 • Wan Jiang, He Wang, Xin Zhang, Dan Guo, Zhaoxin Fan, Yunfeng Diao, Richang Hong
To fill this gap, we first examine the current 'gold standard' in Machine Unlearning (MU), i.e., re-training the model after removing the undesirable training data, and find that it does not work in SGMs.
1 code implementation • 4 Sep 2024 • Yunfeng Diao, Baiqi Wu, Ruixuan Zhang, Ajian Liu, Xingxing Wei, Meng Wang, He Wang
The transferability of adversarial skeletal sequences enables attacks in real-world HAR scenarios, such as autonomous driving, intelligent surveillance, and human-computer interaction.
no code implementations • 30 Jul 2024 • Yunfeng Diao, Naixin Zhai, Changtao Miao, Zitong Yu, Xingxing Wei, Xun Yang, Meng Wang
To address such concerns, numerous AI-generated image (AIGI) detectors have been proposed and have achieved promising performance in identifying fake images.
no code implementations • 11 Jul 2024 • Yunfeng Diao, Baiqi Wu, Ruixuan Zhang, Xun Yang, Meng Wang, He Wang
However, research on adversarial transferability for S-HAR is largely missing.
no code implementations • 29 Jun 2023 • He Wang, Yunfeng Diao
To this end, we propose a new post-train black-box defense framework.
1 code implementation • 16 May 2023 • Wan Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Meng Wang, Richang Hong
Unfortunately, we find that UEs provide a false sense of security: they cannot stop unauthorized users from exploiting other, unprotected data to remove the protection by turning unlearnable data learnable again.
4 code implementations • 21 Nov 2022 • Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg, Meng Wang
Via BASAR, we find that on-manifold adversarial samples are extremely deceptive and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold.
2 code implementations • 9 Mar 2022 • He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo
Our method features full Bayesian treatments of the clean data, the adversaries and the classifier, leading to (1) a new Bayesian energy-based formulation of robust discriminative classifiers, (2) a new adversary sampling scheme based on natural motion manifolds, and (3) a new post-train Bayesian strategy for black-box defense.
1 code implementation • CVPR 2021 • Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang
The robustness of skeleton-based activity recognizers has recently been questioned, with studies showing that they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer.