Search Results for author: Yunfeng Diao

Found 12 papers, 7 papers with code

MapFusion: A Novel BEV Feature Fusion Network for Multi-modal Map Construction

no code implementations • 5 Feb 2025 • Xiaoshuai Hao, Yunfeng Diao, Mengchuan Wei, Yifan Yang, Peng Hao, Rong Yin, Hui Zhang, Weiming Li, Shu Zhao, Yu Liu

To address these issues, we propose MapFusion, a novel multi-modal Bird's-Eye View (BEV) feature fusion method for map construction.

Autonomous Driving
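
The abstract above does not detail the fusion architecture. As a rough, hypothetical sketch of the general idea behind multi-modal BEV feature fusion (not the MapFusion design itself), the module below concatenates camera and LiDAR BEV feature maps on a shared grid and mixes them with a 1x1 convolution:

```python
# Minimal sketch of multi-modal BEV feature fusion (hypothetical; NOT the
# MapFusion architecture): concatenate camera and LiDAR BEV feature maps
# along the channel axis, then mix them with a 1x1 convolution.
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_channels: int, lidar_channels: int, out_channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs are (B, C, H, W) feature maps on the same BEV grid.
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

fused = SimpleBEVFusion(64, 64, 128)(torch.randn(2, 64, 200, 200),
                                     torch.randn(2, 64, 200, 200))
print(fused.shape)  # torch.Size([2, 128, 200, 200])
```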

MOL-Mamba: Enhancing Molecular Representation with Structural & Electronic Insights

1 code implementation • 21 Dec 2024 • Jingjing Hu, Dan Guo, Zhan Si, Deguang Liu, Yunfeng Diao, Jing Zhang, Jinxing Zhou, Meng Wang

Molecular representation learning plays a crucial role in various downstream tasks, such as molecular property prediction and drug design.

Drug Design • Mamba • +4

Moderating the Generalization of Score-based Generative Model

no code implementations • 10 Dec 2024 • Wan Jiang, He Wang, Xin Zhang, Dan Guo, Zhaoxin Fan, Yunfeng Diao, Richang Hong

To fill this gap, we first examine the current 'gold standard' in Machine Unlearning (MU), i.e., re-training the model after removing the undesirable training data, and find that it does not work in SGMs.

Image Inpainting • Machine Unlearning • +1
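
The 'gold standard' the abstract refers to is exact unlearning: drop the undesirable samples and retrain from scratch. A minimal sketch of that baseline for a generic PyTorch classifier follows (`make_model`, `dataset`, and `forget_indices` are placeholders; the paper's finding is that this recipe does not carry over to score-based generative models):

```python
# Exact machine unlearning by retraining (generic classifier sketch):
# remove the undesirable samples from the training set, then retrain
# a fresh model from scratch so nothing of the forgotten data remains.
import torch
from torch.utils.data import DataLoader, Subset

def retrain_without(dataset, forget_indices, make_model, epochs=10, lr=1e-3):
    keep = [i for i in range(len(dataset)) if i not in set(forget_indices)]
    loader = DataLoader(Subset(dataset, keep), batch_size=64, shuffle=True)
    model = make_model()  # fresh random weights
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```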

TASAR: Transfer-based Attack on Skeletal Action Recognition

1 code implementation • 4 Sep 2024 • Yunfeng Diao, Baiqi Wu, Ruixuan Zhang, Ajian Liu, Xingxing Wei, Meng Wang, He Wang

The transferability of adversarial skeletal sequences enables attacks in real-world HAR scenarios, such as autonomous driving, intelligent surveillance, and human-computer interaction.

Action Recognition • Autonomous Driving • +2
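
The snippet above does not describe TASAR's attack itself; as background, a transfer-based attack crafts perturbations on a white-box surrogate and replays them against an unseen target. A minimal sketch using plain FGSM (a standard method, not the TASAR algorithm):

```python
# Generic transfer-based attack sketch (plain FGSM on a surrogate; NOT the
# TASAR algorithm): perturbations crafted on the surrogate are evaluated
# against a separate, unseen target model.
import torch

def fgsm_on_surrogate(surrogate, x, y, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_success_rate(surrogate, target, x, y, eps=0.01):
    # Fraction of adversarial examples that also fool the target model.
    x_adv = fgsm_on_surrogate(surrogate, x, y, eps)
    return (target(x_adv).argmax(dim=1) != y).float().mean().item()
```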

Vulnerabilities in AI-generated Image Detection: The Challenge of Adversarial Attacks

no code implementations • 30 Jul 2024 • Yunfeng Diao, Naixin Zhai, Changtao Miao, Zitong Yu, Xingxing Wei, Xun Yang, Meng Wang

To address such concerns, numerous AI-generated image (AIGI) detectors have been proposed and have achieved promising performance in identifying fake images.

Adversarial Attack • Adversarial Robustness • +1
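
The robustness of such a detector can be probed with a standard white-box attack like PGD. The sketch below is a generic illustration, not necessarily the attacks evaluated in the paper; the detector is assumed to be a two-class real/fake classifier over images in [0, 1]:

```python
# Generic PGD robustness probe for a binary real/fake image detector
# (illustrative only; not the specific attacks studied in the paper).
import torch

def pgd_attack(detector, x, y, eps=4/255, alpha=1/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(detector(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```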

Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples

1 code implementation • 16 May 2023 • Wan Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Meng Wang, Richang Hong

Unfortunately, we find that UEs provide a false sense of security: they cannot stop unauthorized users from using other, unprotected data to remove the protection, turning unlearnable data back into learnable data.

Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack

4 code implementations • 21 Nov 2022 • Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg, Meng Wang

Via BASAR, we find that on-manifold adversarial samples are extremely deceptive and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold.

Adversarial Attack • Human Activity Recognition • +2

Defending Black-box Skeleton-based Human Activity Classifiers

2 code implementations • 9 Mar 2022 • He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo

Our method features full Bayesian treatment of the clean data, the adversaries, and the classifier, leading to (1) a new Bayesian energy-based formulation of robust discriminative classifiers, (2) a new adversary sampling scheme based on natural motion manifolds, and (3) a new post-train Bayesian strategy for black-box defense.

Human Activity Recognition • Time Series Analysis
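
The abstract's 'post-train Bayesian strategy' is not spelled out here. As a loose illustration of Bayesian-style prediction averaging (hypothetical, not the paper's defense), one can approximate the predictive distribution by averaging softmax outputs over several jittered copies of the trained weights:

```python
# Generic Bayesian-style predictive averaging (illustrative; NOT the
# paper's defense): approximate p(y|x) by averaging softmax outputs over
# several crudely sampled copies of the classifier's weights.
import copy
import torch

@torch.no_grad()
def bayesian_predict(model, x, n_samples=10, weight_noise=0.01):
    probs = []
    for _ in range(n_samples):
        m = copy.deepcopy(model)
        for p in m.parameters():
            # Crude "posterior sample": Gaussian jitter around the MAP weights.
            p.add_(weight_noise * torch.randn_like(p))
        probs.append(torch.softmax(m(x), dim=1))
    return torch.stack(probs).mean(dim=0)  # averaged predictive distribution
```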

BASAR: Black-box Attack on Skeletal Action Recognition

1 code implementation • CVPR 2021 • Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang

The robustness of skeleton-based activity recognizers has recently been questioned: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer.

Action Recognition • Adversarial Attack • +1
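
For context on the black-box setting, a minimal query-based attack (SimBA-style random coordinate search; illustrative only and not the BASAR method, which additionally keeps perturbations on the data manifold) might look like:

```python
# Minimal query-based black-box attack sketch (random coordinate search;
# NOT the BASAR algorithm): perturb one random coordinate at a time and
# keep any step that lowers the true class's predicted probability.
import torch

@torch.no_grad()
def simple_blackbox_attack(predict_probs, x, label, eps=0.1, max_queries=1000):
    # predict_probs: callable mapping one sample to a 1-D probability vector.
    x_adv = x.clone()
    best = predict_probs(x_adv)[label].item()
    for _ in range(max_queries):
        delta = torch.zeros_like(x_adv)
        idx = torch.randint(delta.numel(), (1,)).item()
        delta.view(-1)[idx] = eps if torch.rand(1).item() < 0.5 else -eps
        p = predict_probs(x_adv + delta)[label].item()
        if p < best:  # keep steps that reduce the target class confidence
            x_adv, best = x_adv + delta, p
    return x_adv
```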
