Search Results for author: Yunfeng Diao

Found 5 papers, 4 papers with code

Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples

1 code implementation · 16 May 2023 · Wan Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Meng Wang, Richang Hong

Unfortunately, we find that UEs provide a false sense of security: they cannot stop unauthorized users from exploiting other, unprotected data to remove the protection, turning unlearnable data learnable again.

Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack

4 code implementations · 21 Nov 2022 · Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg

Via BASAR, we find that on-manifold adversarial samples are extremely deceitful and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold.

Tasks: Adversarial Attack, Human Activity Recognition, +2
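The snippet above describes attacking a recognizer in the black-box setting, where the attacker sees only predicted labels. As a hedged illustration of that setting (a generic random-search loop, not BASAR's manifold-aware algorithm), a minimal decision-based attack might look like:

```python
import numpy as np

def random_search_attack(classify, x, true_label, eps=0.1, steps=200, seed=0):
    """Minimal decision-based black-box attack sketch.

    `classify` returns only a predicted label (no gradients), mimicking
    the black-box threat model. Illustrative only; far simpler than
    BASAR's on-manifold search over skeletal motions.
    """
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        delta = rng.normal(scale=eps, size=x.shape)  # candidate perturbation
        x_adv = x + delta
        if classify(x_adv) != true_label:            # success: label flipped
            return x_adv
    return None  # no adversarial example found within the query budget

# Toy "recognizer": thresholds the mean of the input vector.
classify = lambda v: int(v.mean() > 0.0)
x = np.full(8, 0.05)  # clean sample, classified as label 1
adv = random_search_attack(classify, x, true_label=1)
```

Each query only asks the model for a label, which is why such attacks remain feasible even when weights and gradients are hidden.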

Defending Black-box Skeleton-based Human Activity Classifiers

2 code implementations · 9 Mar 2022 · He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo

Our method features full Bayesian treatments of the clean data, the adversaries, and the classifier, leading to (1) a new Bayesian energy-based formulation of robust discriminative classifiers, (2) a new adversary sampling scheme based on natural motion manifolds, and (3) a new post-train Bayesian strategy for black-box defense.

Tasks: Human Activity Recognition, Time Series Analysis
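The "post-train Bayesian strategy" mentioned above can be pictured, at a very generic level, as averaging predictions over weights sampled around a trained model. The sketch below is a hedged toy illustration on a linear classifier (names, sigma, and the Gaussian posterior are my assumptions), not the paper's actual energy-based formulation:

```python
import numpy as np

def bayesian_ensemble_predict(w, x, n_samples=50, sigma=0.05, seed=0):
    """Toy sketch of post-train Bayesian model averaging.

    Treats the trained weights `w` of a linear binary classifier as the
    mean of an approximate Gaussian posterior, then averages sigmoid
    predictions over sampled weights. Illustrative only; the paper's
    full Bayesian treatment is far richer.
    """
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_samples):
        w_s = w + rng.normal(scale=sigma, size=w.shape)  # posterior sample
        logit = x @ w_s
        probs.append(1.0 / (1.0 + np.exp(-logit)))       # sigmoid probability
    return float(np.mean(probs))  # averaged predictive probability

w = np.array([1.0, -0.5])               # "trained" weights
p = bayesian_ensemble_predict(w, np.array([2.0, 1.0]))
```

Averaging over weight samples smooths the decision function, which is one intuition for why Bayesian treatments can help against adversarial perturbations.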

BASAR: Black-box Attack on Skeletal Action Recognition

1 code implementation · CVPR 2021 · Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang

The robustness of skeleton-based activity recognizers has recently been questioned: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer.

Tasks: Action Recognition, Adversarial Attack, +1
