no code implementations • ECCV 2020 • Junfeng Guo, Cong Liu
Importantly, we show that the effectiveness of BlackCard is intuitively guaranteed by a set of analytical arguments and observations that exploit an essential characteristic of the gradient-descent optimization pervasively adopted in DNN models.
no code implementations • 16 Jun 2025 • Ting Qiao, Yiming Li, Jianbin Li, Yingjia Wang, Leyi Qi, Junfeng Guo, Ruili Feng, DaCheng Tao
If the number of PP values smaller than WR exceeds a threshold, the suspicious model is regarded as having been trained on the protected dataset.
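The excerpt names two quantities, PP and WR, without defining them here; the toy sketch below (hypothetical variable names and values) illustrates only the counting-and-threshold decision rule it describes, not the paper's actual statistics.

```python
# Hypothetical sketch of the thresholded verification test described above.
# "pp_values" and "wr" mirror the PP and WR quantities named in the excerpt;
# their exact definitions are not given here.

def is_trained_on_protected_dataset(pp_values, wr, count_threshold):
    """Flag a suspicious model if enough PP values fall below WR."""
    num_below = sum(1 for pp in pp_values if pp < wr)
    return num_below > count_threshold

# Example: 8 of 10 probe values fall below the reference, threshold is 5.
print(is_trained_on_protected_dataset(
    [0.1, 0.2, 0.05, 0.3, 0.9, 0.15, 0.25, 0.8, 0.12, 0.18],
    wr=0.5, count_threshold=5))  # -> True
```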
no code implementations • 26 May 2025 • Yan Wen, Junfeng Guo, Heng Huang
As large language models (LLMs) evolve into autonomous agents capable of collaborative reasoning and task execution, multi-agent LLM systems have emerged as a powerful paradigm for solving complex problems.
no code implementations • 20 May 2025 • Chenxi Liu, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Tianyi Zhou, Heng Huang
Meanwhile, Group Relative Policy Optimization (GRPO), a recent method using online-generated data and verified rewards to improve reasoning capabilities, remains largely underexplored in LMM alignment.
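For reference, the core of GRPO is a critic-free, group-relative advantage: each sampled response is scored against the mean and standard deviation of the rewards in its own group, so no learned value network is needed. A minimal sketch of that computation:

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each verified reward against its
    own group's statistics, removing the need for a critic."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: verified rewards for 4 sampled responses to one prompt.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [ 1. -1.  1. -1.]
```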
no code implementations • 19 May 2025 • Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu
The protection of cyber Intellectual Property (IP) such as web content is an increasingly critical concern.
no code implementations • 16 Feb 2025 • Tong Zheng, Yan Wen, Huiwen Bao, Junfeng Guo, Heng Huang
The emergence of Large Language Models (LLMs) has advanced multilingual machine translation (MMT), yet the Curse of Multilinguality (CoM) remains a major challenge.
no code implementations • 16 Feb 2025 • Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
As artificial intelligence surpasses human capabilities in text generation, the necessity to authenticate the origins of AI-generated content has become paramount.
no code implementations • 10 Feb 2025 • Junfeng Guo, Yiming Li, Ruibo Chen, Yihan Wu, Chenxi Liu, Yanshuo Chen, Heng Huang
Large language models (LLMs) are increasingly integrated into real-world personalized applications through retrieval-augmented generation (RAG) mechanisms to supplement their responses with domain-specific knowledge.
1 code implementation • CVPR 2025 • Zilan Wang, Junfeng Guo, Jiacheng Zhu, Yiming Li, Heng Huang, Muhao Chen, Zhengzhong Tu
Recent advances in large-scale text-to-image (T2I) diffusion models have enabled a variety of downstream applications, including style customization, subject-driven personalization, and conditional generation.
no code implementations • 23 Oct 2024 • Dongliang Guo, Mengxuan Hu, Zihan Guan, Junfeng Guo, Thomas Hartvigsen, Sheng Li
Through empirical studies on the capability of performing backdoor attacks on large pre-trained models (e.g., ViT), we find the following unique challenges of attacking large pre-trained models: 1) the inability to manipulate or even access large training datasets, and 2) the substantial computational resources required for training or fine-tuning these models.
no code implementations • 17 Oct 2024 • Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
Watermarking techniques offer a promising way to identify machine-generated content by embedding covert information into the content generated by language models (LMs).
no code implementations • 17 Oct 2024 • Ruibo Chen, Yihan Wu, Yanshuo Chen, Chenxi Liu, Junfeng Guo, Heng Huang
Correspondingly, we propose a statistical pattern-based detection algorithm that recovers the key sequence during detection and conducts statistical tests based on the count of high-frequency patterns.
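A minimal sketch of the kind of count-based statistical test the excerpt describes; the paper's actual test statistic and pattern definition are not given here, so the one-sided z-test below is only illustrative:

```python
import math

def pattern_z_score(num_hits, num_trials, p_null):
    """One-sided z-test: is the count of high-frequency patterns larger
    than expected under the no-watermark null (hit probability p_null)?"""
    expected = num_trials * p_null
    std = math.sqrt(num_trials * p_null * (1 - p_null))
    return (num_hits - expected) / std

# Example: 72 pattern hits in 100 positions where chance would give 50.
z = pattern_z_score(72, 100, 0.5)
print(z, z > 4.0)  # z = 4.4 => reject null, text is likely watermarked
```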
no code implementations • 2 Jun 2024 • Yihan Wu, Ruibo Chen, Zhengmian Hu, Yanshuo Chen, Junfeng Guo, Hongyang Zhang, Heng Huang
Experimental results confirm that the beta-watermark effectively reduces distribution bias under key collisions.
1 code implementation • 14 Mar 2024 • Chenxi Liu, Zhenyi Wang, Tianyi Xiong, Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
Few-Shot Class-Incremental Learning (FSCIL) models aim to incrementally learn new classes with scarce samples while preserving knowledge of old ones.
1 code implementation • 19 Feb 2024 • Ruibo Chen, Yihan Wu, Lichang Chen, Guodong Liu, Qi He, Tianyi Xiong, Chenxi Liu, Junfeng Guo, Heng Huang
In the first stage, we devise a scoring network to evaluate the difficulty of training instructions, which is co-trained with the VLM.
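The excerpt does not specify the scorer's architecture or its co-training objective; the toy head below (hypothetical dimensions, standalone rather than co-trained with a VLM) only illustrates what a network that scores instruction difficulty could look like.

```python
import torch
import torch.nn as nn

class DifficultyScorer(nn.Module):
    """Toy scoring head: maps a pooled instruction embedding to a
    difficulty score in (0, 1). Purely illustrative; the paper's scorer
    and co-training setup are not described in the excerpt."""
    def __init__(self, dim=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, emb):
        return self.net(emb).squeeze(-1)

scores = DifficultyScorer()(torch.randn(4, 768))  # one score per instruction
print(scores.shape)  # torch.Size([4])
```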
no code implementations • 21 Dec 2023 • Lixu Wang, Chenxi Liu, Junfeng Guo, Jiahua Dong, Xiao Wang, Heng Huang, Qi Zhu
In a privacy-focused era, Federated Learning (FL) has emerged as a promising machine learning technique.
no code implementations • 3 Dec 2023 • Mingyan Zhu, Yiming Li, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
We argue that existing SSBAs require the intensity constraint mostly because their trigger patterns are 'content-irrelevant' and therefore act as 'noise' for both humans and DNNs.
2 code implementations • 11 Oct 2023 • Yihan Wu, Zhengmian Hu, Junfeng Guo, Hongyang Zhang, Heng Huang
Watermarking techniques offer a promising way to identify machine-generated content by embedding covert information into the content generated by language models.
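For context, the simplest published watermark of this kind biases generation toward a pseudo-random 'green' subset of the vocabulary (Kirchenbauer et al.); the paper above proposes a distribution-preserving alternative, so the sketch below illustrates only that baseline idea, not this paper's method:

```python
import numpy as np

def greenlist_bias(logits, prev_token, vocab_size, gamma=0.5, delta=2.0):
    """Illustrative green-list watermark: seed a PRNG with the previous
    token, mark a gamma-fraction of the vocabulary 'green', and add delta
    to those logits so the generated text over-uses green tokens."""
    rng = np.random.default_rng(prev_token)  # key = previous token
    green = rng.random(vocab_size) < gamma   # pseudo-random partition
    return logits + delta * green            # bias toward green tokens

biased = greenlist_bias(np.zeros(50), prev_token=42, vocab_size=50)
print(int((biased > 0).sum()), "of 50 tokens boosted")
```

Detection then counts how often green tokens appear in a suspect text and applies a statistical test, much like the pattern-count test sketched earlier.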
no code implementations • 13 Sep 2023 • Hanqing Guo, Xun Chen, Junfeng Guo, Li Xiao, Qiben Yan
In this work, we propose MASTERKEY, a backdoor attack that compromises speaker verification (SV) models.
no code implementations • ICCV 2023 • Junfeng Guo, Ang Li, Lixu Wang, Cong Liu
To secure RL agents against malicious backdoors, in this work we formulate the problem of backdoor detection in multi-agent RL systems: detecting Trojan agents and their potential trigger actions, and further mitigating their adverse impact.
no code implementations • 8 Feb 2022 • Junfeng Guo, Ang Li, Cong Liu
To secure RL agents against malicious backdoors, in this work we formulate the problem of backdoor detection in a multi-agent competitive reinforcement learning system: detecting Trojan agents and their potential trigger actions, and further mitigating their Trojan behavior.
1 code implementation • ICLR 2022 • Junfeng Guo, Ang Li, Cong Liu
We approach this problem from the optimization perspective and show that the objective of backdoor detection is bounded by an adversarial objective.
1 code implementation • 7 May 2021 • Bangjie Yin, Wenxuan Wang, Taiping Yao, Junfeng Guo, Zelun Kong, Shouhong Ding, Jilin Li, Cong Liu
Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples.
no code implementations • CVPR 2022 • Xin Dong, Junfeng Guo, Ang Li, Wei-Te Ting, Cong Liu, H. T. Kung
Based upon this observation, we propose a novel metric called Neural Mean Discrepancy (NMD), which compares neural means of the input examples and training data.
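A minimal sketch of the NMD statistic as described: compare per-neuron mean activations of incoming examples against means pre-computed on clean training data. How the paper's full detector aggregates these per-neuron differences is not stated in the excerpt, so the simple absolute difference below is an assumption.

```python
import numpy as np

def neural_mean_discrepancy(acts_input, train_means):
    """Compare per-neuron mean activations of the incoming examples
    against means pre-computed on clean training data.
    acts_input: (batch, neurons); train_means: (neurons,)."""
    input_means = acts_input.mean(axis=0)
    return np.abs(input_means - train_means)  # per-neuron discrepancy

# Example: out-of-distribution inputs shift the neural means.
train_means = np.zeros(8)
ood_acts = np.random.default_rng(0).normal(loc=1.5, size=(32, 8))
print(neural_mean_discrepancy(ood_acts, train_means).mean())  # clearly > 0
```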
no code implementations • 4 Feb 2021 • Junfeng Guo, Yaswanth Yadlapalli, Thiele Lothar, Ang Li, Cong Liu
PredCoin poisons the gradient estimation step, an essential component of most query-based hard-label (QBHL) attacks.
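For context, many QBHL attacks (e.g., HopSkipJump-style methods) estimate a gradient direction by averaging random probe directions weighted by the model's hard-label answers; the generic estimator sketched below is the step that PredCoin's defense targets by perturbing the answers to these probe queries.

```python
import numpy as np

def estimate_gradient(decision_fn, x, eps=1e-3, n_samples=100, seed=0):
    """Monte-Carlo gradient-direction estimate used by many QBHL attacks:
    probe the model's hard label at random perturbations of x and average
    the signed probe directions. decision_fn returns +1 / -1
    (e.g., adversarial vs. not)."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        u /= np.linalg.norm(u)               # unit probe direction
        grad += decision_fn(x + eps * u) * u # hard-label query
    return grad / n_samples

# Toy decision boundary: the label flips along the first coordinate.
g = estimate_gradient(lambda z: 1.0 if z[0] > 0 else -1.0, np.zeros(5))
print(g)  # dominant mass on coordinate 0
```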
no code implementations • 16 Oct 2020 • Sarah E. Gerard, Jacob Herrmann, Yi Xin, Kevin T. Martin, Emanuele Rezoagli, Davide Ippolito, Giacomo Bellani, Maurizio Cereda, Junfeng Guo, Eric A. Hoffman, David W. Kaczka, Joseph M. Reinhardt
Regional lobar analysis was performed using hierarchical clustering to identify radiographic subtypes of COVID-19.
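A minimal sketch of hierarchical clustering applied to per-lobe feature vectors, using synthetic stand-in features since the paper's actual radiographic features are not listed here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for the lobar analysis: each row is one lobe's
# radiographic feature vector (synthetic, for illustration only).
rng = np.random.default_rng(1)
features = np.vstack([rng.normal(0, 1, (10, 4)),    # subtype A
                      rng.normal(3, 1, (10, 4))])   # subtype B

Z = linkage(features, method="ward")             # agglomerative tree
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 subtypes
print(labels)
```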
no code implementations • 24 Mar 2020 • Junfeng Guo, Ting Wang, Cong Liu
Being able to detect and mitigate poisoning attacks, typically categorized into backdoor and adversarial poisoning (AP), is critical to enabling the safe adoption of DNNs in many application domains.
no code implementations • CVPR 2020 • Zelun Kong, Junfeng Guo, Ang Li, Cong Liu
We compare PhysGAN with a set of state-of-the-art baseline methods, including several that we designed ourselves; the results further demonstrate the robustness and efficacy of our approach.