Search Results for author: Junfeng Guo

Found 15 papers, 3 papers with code

Practical Poisoning Attacks on Neural Networks

no code implementations ECCV 2020 Junfeng Guo, Cong Liu

Importantly, we show that the effectiveness of BlackCard can be intuitively justified through analytical reasoning and observations that exploit an essential characteristic of gradient-descent optimization, which is pervasively adopted in training DNN models.

Data Poisoning

Few-Shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt

no code implementations 14 Mar 2024 Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang

Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones.

Few-Shot Class-Incremental Learning, Incremental Learning

Federated Continual Novel Class Learning

no code implementations 21 Dec 2023 Lixu Wang, Chenxi Liu, Junfeng Guo, Jiahua Dong, Xiao Wang, Heng Huang, Qi Zhu

In a privacy-focused era, Federated Learning (FL) has emerged as a promising machine learning technique.

Federated Learning, Novel Class Discovery +1

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

no code implementations 3 Dec 2023 Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin

We argue that the intensity constraint of existing SSBAs stems mostly from their trigger patterns being 'content-irrelevant', which makes them act as 'noise' for both humans and DNNs.

Attribute, Backdoor Attack

PolicyCleanse: Backdoor Detection and Mitigation for Competitive Reinforcement Learning

no code implementations ICCV 2023 Junfeng Guo, Ang Li, Lixu Wang, Cong Liu

To ensure the security of RL agents against malicious backdoors, we propose in this work the problem of Backdoor Detection in multi-agent RL systems, whose objective is to detect Trojan agents and their corresponding potential trigger actions, and further to mitigate their adverse impact.

Machine Unlearning, reinforcement-learning +1

PolicyCleanse: Backdoor Detection and Mitigation in Reinforcement Learning

no code implementations 8 Feb 2022 Junfeng Guo, Ang Li, Cong Liu

To ensure the security of RL agents against malicious backdoors, we propose in this work the problem of Backdoor Detection in a multi-agent competitive reinforcement learning system, whose objective is to detect Trojan agents and their corresponding potential trigger actions, and further to mitigate their Trojan behavior.

Machine Unlearning, reinforcement-learning +1

AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis

1 code implementation ICLR 2022 Junfeng Guo, Ang Li, Cong Liu

We approach this problem from the optimization perspective and show that the objective of backdoor detection is bounded by an adversarial objective.
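One way to read this bound, as a generic illustration rather than the paper's algorithm: if a label carries a backdoor, adversarial perturbations toward it tend to be anomalously easy to find, so screening per-label adversarial statistics for outliers approximates backdoor detection through an adversarial objective. A minimal sketch, assuming per-label adversarial perturbation norms have already been estimated by some black-box attack, and using a robust z-score that is our own illustrative choice:

```python
import numpy as np

def flag_suspect_labels(adv_norms, threshold=2.5):
    # adv_norms[k]: estimated perturbation size needed to push inputs
    # toward label k.  A backdoored label is expected to be an outlier
    # on the small side, so score how far each value sits below the median.
    # (Illustrative statistic only; the paper's extreme value analysis differs.)
    adv_norms = np.asarray(adv_norms, dtype=float)
    median = np.median(adv_norms)
    mad = np.median(np.abs(adv_norms - median)) + 1e-12
    scores = (median - adv_norms) / (1.4826 * mad)
    return np.where(scores > threshold)[0]
```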

Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

1 code implementation 7 May 2021 Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, Cong Liu

Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples.

Adversarial Attack, Face Generation +2

Neural Mean Discrepancy for Efficient Out-of-Distribution Detection

no code implementations CVPR 2022 Xin Dong, Junfeng Guo, Ang Li, Wei-Te Ting, Cong Liu, H. T. Kung

Based upon this observation, we propose a novel metric called Neural Mean Discrepancy (NMD), which compares neural means of the input examples and training data.

General Classification, Out-of-Distribution Detection +1
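
The NMD statistic lends itself to a short sketch. Below is a minimal, illustrative PyTorch version (not the paper's exact recipe): per-channel mean activations of a test batch at one chosen layer are compared against reference means estimated on the training data; the layer choice, the L1 distance, and any decision threshold are assumptions made here for illustration.

```python
import torch
import torch.nn as nn

def channel_means(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Capture the activations of `layer` for a batch `x` and reduce them
    # to one mean value per channel (averaging over batch and spatial dims).
    feats = {}

    def hook(_module, _inputs, output):
        feats["a"] = output

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return feats["a"].mean(dim=(0, 2, 3))

def nmd_score(model, layer, x_test, train_means):
    # L1 discrepancy between test-batch channel means and reference means
    # computed on in-distribution training data; larger values suggest
    # the test batch is out-of-distribution.
    return (channel_means(model, layer, x_test) - train_means).abs().mean().item()
```

In practice such discrepancies would be aggregated over several layers and a threshold calibrated on held-out in-distribution data; the single-layer score above is only meant to show the shape of the computation.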

PredCoin: Defense against Query-based Hard-label Attack

no code implementations 4 Feb 2021 Junfeng Guo, Yaswanth Yadlapalli, Thiele Lothar, Ang Li, Cong Liu

PredCoin poisons the gradient estimation step, an essential component of most query-based hard-label (QBHL) attacks.

Hard-label Attack
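
The snippet only names the step being poisoned, so here is a rough sketch of the gradient-estimation loop used by many query-based hard-label attacks (HopSkipJump-style Monte-Carlo averaging of random directions, signed by whether the returned hard label flips), followed by a purely hypothetical label-perturbation defense; neither function is taken from the paper, and query_label / model_predict are stand-in callables.

```python
import numpy as np

def estimate_gradient(query_label, x, y_true, n_samples=100, delta=1e-2):
    # Monte-Carlo gradient-direction estimate from hard labels only:
    # probe random unit directions and average them, signed by whether
    # the returned label differs from the victim's true label.
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)
        sign = 1.0 if query_label(x + delta * u) != y_true else -1.0
        grad += sign * u
    return grad / n_samples

def noisy_hard_label(model_predict, x, num_classes, flip_prob=0.1):
    # Hypothetical defense (not PredCoin's actual mechanism): with small
    # probability, return a different label, corrupting the sign
    # information that the estimator above relies on.
    label = model_predict(x)
    if np.random.rand() < flip_prob:
        label = (label + 1) % num_classes
    return label
```

Because the attacker aggregates only sign information across many such queries, even a small rate of perturbed responses can noticeably bias the estimated direction, which is the general weakness a defense of this kind exploits.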

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks

no code implementations 24 Mar 2020 Junfeng Guo, Ting Wang, Cong Liu

Being able to detect and mitigate poisoning attacks, typically categorized into backdoor and adversarial poisoning (AP), is critical to the safe adoption of DNNs in many application domains.

Data Poisoning

PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving

no code implementations CVPR 2020 Zelun Kong, Junfeng Guo, Ang Li, Cong Liu

We compare PhysGAN with a set of state-of-the-art baseline methods, including several that we designed ourselves, which further demonstrates the robustness and efficacy of our approach.

Autonomous Driving, Image Classification
