no code implementations • 3 Jun 2025 • Jerrin Bright, Zhibo Wang, Yuhao Chen, Sirisha Rambhatla, John Zelek, David Clausi
Lack of input data for in-the-wild activities often results in low performance across various computer vision tasks.
no code implementations • 30 May 2025 • Yajie Zhou, Xiaoyi Pang, Zhibo Wang
Federated fine-tuning has emerged as a promising approach to adapt foundation models to downstream tasks using decentralized data.
no code implementations • 10 May 2025 • Xuran Li, Jingyi Wang, Xiaohan Yuan, Peixin Zhang, Zhan Qin, Zhibo Wang, Kui Ren
Machine unlearning aims to remove (i.e., unlearn) a specific part of the training data from a trained neural network model.
no code implementations • 21 Mar 2025 • Zeqing He, Zhibo Wang, Huiyu Xu, Kui Ren
By leveraging sparse autoencoding, our approach isolates and adjusts only task-specific sparse feature dimensions, enabling precise and interpretable steering of model behavior while preserving content quality.
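Below is a minimal, illustrative sketch of the sparse-autoencoder steering idea: only selected latent dimensions are rescaled before decoding, and the reconstruction error is added back so unrelated content is preserved. The toy architecture, feature indices, and scaling factor are assumptions, not the paper's actual design.

```python
# Minimal sketch of sparse-autoencoder (SAE) steering, assuming a toy SAE
# trained on a model's hidden activations; `task_dims` and `scale` are
# hypothetical placeholders, not values from the paper.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_latent=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, h):
        z = torch.relu(self.encoder(h))  # sparse latent code
        return self.decoder(z), z

def steer(h, sae, task_dims, scale=3.0):
    """Amplify only task-specific sparse dimensions, leaving the rest untouched."""
    recon, z = sae(h)
    z_steered = z.clone()
    z_steered[..., task_dims] *= scale       # adjust selected feature dimensions
    h_steered = sae.decoder(z_steered)
    # add back the reconstruction error so unrelated content is preserved
    return h_steered + (h - recon)

sae = SparseAutoencoder()
h = torch.randn(1, 768)                        # a hidden activation
h_new = steer(h, sae, task_dims=[12, 40, 977]) # hypothetical feature indices
print(h_new.shape)
```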
no code implementations • 9 Mar 2025 • Wenhui Zhang, Huiyu Xu, Zhibo Wang, Zeqing He, Ziqi Zhu, Kui Ren
Through systematic evaluation of 63 SLMs from 15 mainstream SLM families against 8 state-of-the-art jailbreak methods, we demonstrate that 47.6% of the evaluated SLMs show high susceptibility to jailbreak attacks (ASR > 40%) and 38.1% of them cannot even resist direct harmful queries (ASR > 50%).
2 code implementations • 2 Dec 2024 • Zhixiang Wang, Guangnan Ye, Xiaosen Wang, Siheng Chen, Zhibo Wang, Xingjun Ma, Yu-Gang Jiang
However, most existing adversarial patch generation methods prioritize attack effectiveness over stealthiness, resulting in patches that are aesthetically unpleasing.
no code implementations • 2 Dec 2024 • Delong Zhu, Yuezun Li, Baoyuan Wu, Jiaran Zhou, Zhibo Wang, Siwei Lyu
The motivation stems from the reliance of most DeepFake methods on face detectors to automatically extract victim faces from videos for training or synthesis (testing).
1 code implementation • 10 Aug 2024 • Cheng Wei, Yang Wang, Kuofeng Gao, Shuo Shao, Yiming Li, Zhibo Wang, Zhan Qin
We achieve this goal by designing a scalable clean-label backdoor-based dataset watermark for point clouds that ensures both effectiveness and stealthiness.
no code implementations • 23 Jul 2024 • Huiyu Xu, Wenhui Zhang, Zhibo Wang, Feng Xiao, Rui Zheng, Yunhe Feng, Zhongjie Ba, Kui Ren
To enable context-aware and efficient red teaming, we abstract and model existing attacks into a coherent concept called "jailbreak strategy" and propose a multi-agent LLM system named RedAgent that leverages these strategies to generate context-aware jailbreak prompts.
no code implementations • 22 Jul 2024 • Juan Diego Toscano, Theo Käufer, Zhibo Wang, Martin Maxey, Christian Cierpka, George Em Karniadakis
This physics-informed machine learning method enables us to infer continuous temperature fields using only sparse velocity data, hence eliminating the need for direct temperature measurements.
Kolmogorov-Arnold Networks
Physics-informed machine learning
no code implementations • 22 Jun 2024 • Zhibo Wang, Zhiwei Chang, Jiahui Hu, Xiaoyi Pang, Jiacheng Du, Yongle Chen, Kui Ren
To preset the embeddings and logits of FCL, we craft a fishing model by solely modifying the parameters of a single batch normalization (BN) layer in the original model.
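A minimal sketch of the single-BN-layer modification, assuming a standard torchvision ResNet; the chosen layer and the crafted parameter values are hypothetical and only illustrate that every other parameter of the model is left untouched.

```python
# Sketch: overwrite the affine parameters of one BatchNorm layer in a standard
# model. The chosen layer and the crafted values are purely illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)

# Pick a single BN layer (here the first one) and modify only its parameters.
bn = model.bn1
with torch.no_grad():
    bn.weight.fill_(10.0)   # crafted scale (hypothetical value)
    bn.bias.fill_(-5.0)     # crafted shift (hypothetical value)

# All other parameters stay identical to the original model.
modified = sum(p.numel() for p in bn.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"modified {modified} of {total} parameters")
```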
no code implementations • 19 Jun 2024 • Jiacheng Du, Zhibo Wang, Jie Zhang, Xiaoyi Pang, Jiahui Hu, Kui Ren
Language Models (LMs) are prone to "memorizing" training data, including substantial sensitive user information.
no code implementations • 24 May 2024 • Peng Kuang, Zhibo Wang, Zhixuan Chu, Jingyi Wang, Kui Ren
To tackle the problem, we propose a fine-grained framework for analyzing biased distributions, based on which we empirically and theoretically identify key characteristics of biased distributions in the real world that are poorly represented by existing benchmarks.
no code implementations • 7 May 2024 • Zhixuan Chu, Yan Wang, Longfei Li, Zhibo Wang, Zhan Qin, Kui Ren
Large Language Models (LLMs) have shown impressive performance in natural language tasks, but their outputs can exhibit undesirable attributes or biases.
1 code implementation • 7 May 2024 • Zhixuan Chu, Lei Zhang, Yichen Sun, Siqiao Xue, Zhibo Wang, Zhan Qin, Kui Ren
Leveraging state-of-the-art keyframe extraction techniques and multimodal large language models, SoraDetector first evaluates the consistency between the extracted video content summary and the textual prompts, then constructs static and dynamic knowledge graphs (KGs) from frames to detect hallucination both in single frames and across frames.
no code implementations • 8 Apr 2024 • Jiacheng Du, Jiahui Hu, Zhibo Wang, Peng Sun, Neil Zhenqiang Gong, Kui Ren, Chun Chen
However, recent studies have shown that clients' private training data can be reconstructed from shared gradients in FL, a vulnerability known as gradient inversion attacks (GIAs).
no code implementations • 17 Jan 2024 • Jia Jia, Geunho Lee, Zhibo Wang, Lyu Zhi, Yuchu He
This network combines the Siam-U2Net Feature Differential Encoder (SU-FDE) and the denoising diffusion implicit model to improve the accuracy of image edge change detection and enhance the model's robustness under environmental changes.
no code implementations • CVPR 2024 • Peng Sun, Xinyang Liu, Zhibo Wang, Bo Liu
In this work we propose DFL-Dual, a novel Byzantine-robust DFL method through dual-domain client clustering and trust bootstrapping.
no code implementations • 15 Oct 2023 • Yulong Yang, Chenhao Lin, Xiang Ji, Qiwei Tian, Qian Li, Hongshan Yang, Zhibo Wang, Chao Shen
Instead, a one-shot adversarial augmentation prior to training is sufficient, and we name this new defense paradigm Data-centric Robust Learning (DRL).
no code implementations • 25 Sep 2023 • Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren
Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios.
1 code implementation • 20 Sep 2023 • Chao Shuai, Jieming Zhong, Shuang Wu, Feng Lin, Zhibo Wang, Zhongjie Ba, Zhenguang Liu, Lorenzo Cavallaro, Kui Ren
Deepfake has taken the world by storm, triggering a trust crisis.
1 code implementation • 18 Sep 2023 • Kun Pan, Yin Yifang, Yao Wei, Feng Lin, Zhongjie Ba, Zhenguang Liu, Zhibo Wang, Lorenzo Cavallaro, Kui Ren
However, the accuracy of detection models degrades significantly on images generated by new deepfake methods due to the difference in data distribution.
no code implementations • 3 Jul 2023 • Yudong Gao, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang
Then, to attain strong stealthiness, we incorporate Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain.
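A rough sketch of the frequency-domain blending idea, here using only a 2D DCT (the paper also incorporates the Fourier Transform); the low-frequency mask and blend ratio are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of frequency-domain blending with a 2D DCT: mix a trigger image into
# a clean image's mid/high-frequency coefficients. Mask size and ratio are
# illustrative choices, not the paper's exact construction.
import numpy as np
from scipy.fft import dctn, idctn

def dct_blend(clean, trigger, alpha=0.2, low_freq_keep=32):
    C = dctn(clean, norm="ortho")
    T = dctn(trigger, norm="ortho")
    mixed = C.copy()
    # keep the low-frequency block of the clean image, blend everywhere else
    mask = np.ones_like(C, dtype=bool)
    mask[:low_freq_keep, :low_freq_keep] = False
    mixed[mask] = (1 - alpha) * C[mask] + alpha * T[mask]
    return idctn(mixed, norm="ortho")

clean = np.random.rand(224, 224)
trigger = np.random.rand(224, 224)
poisoned = dct_blend(clean, trigger)
print(poisoned.shape)
```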
1 code implementation • 20 Jun 2023 • Jiachen Lei, Qinglong Wang, Peng Cheng, Zhongjie Ba, Zhan Qin, Zhibo Wang, Zhenguang Liu, Kui Ren
In the pre-training stage, we propose to mask a high proportion (e.g., up to 90%) of input images to approximately represent the primer distribution and introduce a masked denoising score matching objective to train a model to denoise visible areas.
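A minimal sketch of the masked-denoising objective, assuming a simplified patch mask and fixed Gaussian noise in place of the paper's actual noise schedule and score-matching formulation; the loss is computed only on visible pixels.

```python
# Sketch: masked denoising objective. Mask ~90% of 16x16 patches and compute
# an MSE denoising loss on the visible region only. The noise model and mask
# granularity are simplified placeholders.
import torch
import torch.nn.functional as F

def masked_denoising_loss(model, x, mask_ratio=0.9, patch=16, sigma=0.5):
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > mask_ratio).float()
    visible = F.interpolate(keep, scale_factor=patch, mode="nearest")  # b,1,h,w

    noise = sigma * torch.randn_like(x)
    noisy_visible = (x + noise) * visible          # masked, noisy input
    pred = model(noisy_visible)                    # predict the clean signal
    return ((pred - x) ** 2 * visible).sum() / visible.sum().clamp(min=1)

# toy usage with a single-conv "denoiser"
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.rand(2, 3, 64, 64)
loss = masked_denoising_loss(model, x)
loss.backward()
print(float(loss))
```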
no code implementations • 13 Jun 2023 • Yuheng Yang, Haipeng Chen, Zhenguang Liu, Yingda Lyu, Beibei Zhang, Shuang Wu, Zhibo Wang, Kui Ren
However, the vanilla Euclidean space is not efficient for modeling important motion characteristics such as the joint-wise angular acceleration, which reveals the driving force behind the motion.
Ranked #16 on Skeleton Based Action Recognition on NTU RGB+D 120
no code implementations • CVPR 2023 • Zhibo Wang, He Wang, Shuaifan Jin, Wenwen Zhang, Jiahui Hu, Yan Wang, Peng Sun, Wei Yuan, Kaixin Liu, Kui Ren
In this paper, we propose an adversarial features-based face privacy protection (AdvFace) approach to generate privacy-preserving adversarial features, which can disrupt the mapping from adversarial features to facial images to defend against reconstruction attacks.
1 code implementation • CVPR 2023 • Yuxuan Han, Zhibo Wang, Feng Xu
This paper proposes the first 3D morphable face reflectance model with spatially varying BRDF using only low-cost, publicly available data.
no code implementations • ICCV 2023 • Xue Wang, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, Kui Ren
Given the insufficient study of such complex causal questions, we make the first attempt to explain different causal questions by contrastive explanations in a unified framework, i.e., Counterfactual Contrastive Explanation (CCE), which visually and intuitively explains the aforementioned questions via a novel positive-negative saliency-based explanation scheme.
1 code implementation • CVPR 2023 • Zhibo Wang, Hongshan Yang, Yunhe Feng, Peng Sun, Hengchang Guo, Zhifei Zhang, Kui Ren
In this paper, we propose the Transferable Targeted Adversarial Attack (TTAA), which can capture the distribution information of the target class from both label-wise and feature-wise perspectives, to generate highly transferable targeted adversarial examples.
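A hedged sketch of a targeted attack that combines a label-wise loss (cross-entropy toward the target class) with a feature-wise loss (distance to an assumed target-class feature centroid); the layer, loss weights, and step sizes are illustrative and do not reproduce TTAA's exact formulation.

```python
# Sketch of a targeted attack combining a label-wise loss (cross-entropy toward
# the target class) with a feature-wise loss (distance to a hypothetical
# target-class feature centroid). All hyperparameters are illustrative.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()
feats = {}
model.avgpool.register_forward_hook(lambda m, i, o: feats.update(f=o.flatten(1)))

def targeted_attack(x, target, target_centroid, eps=8/255, steps=10, w_feat=1.0):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, target) + \
               w_feat * F.mse_loss(feats["f"], target_centroid)
        loss.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()  # descend toward target
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).detach()

x = torch.rand(1, 3, 224, 224)
target = torch.tensor([7])
centroid = torch.randn(1, 512)       # hypothetical target-class feature centroid
x_adv = targeted_attack(x, target, centroid)
```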
no code implementations • ICCV 2023 • Lei Zhang, Zhibo Wang, Xiaowei Dong, Yunhe Feng, Xiaoyi Pang, Zhifei Zhang, Kui Ren
Network pruning aims to compress models while minimizing loss in accuracy.
1 code implementation • CVPR 2023 • Jingwang Ling, Zhibo Wang, Feng Xu
By supervising shadow rays, we successfully reconstruct a neural SDF of the scene from single-view images under multiple lighting conditions.
1 code implementation • 19 Jul 2022 • Jingwang Ling, Zhibo Wang, Ming Lu, Quan Wang, Chen Qian, Feng Xu
Previous works on morphable models mostly focus on large-scale facial geometry but ignore facial details.
no code implementations • 5 Jun 2022 • Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren
However, most existing works are still trapped in the dilemma between higher accuracy and stronger robustness since they tend to fit a model towards robust features (not easily tampered with by adversaries) while ignoring those non-robust but highly predictive features.
1 code implementation • CVPR 2022 • Junfeng Lyu, Zhibo Wang, Feng Xu
In this paper, we propose a novel framework to remove eyeglasses as well as their cast shadows from face images.
no code implementations • CVPR 2022 • Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren
Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially for societal applications, e.g., hiring systems should recommend applicants equally across demographic groups, and risk assessment systems must eliminate racism in criminal justice.
no code implementations • 25 Feb 2022 • Feiliang Ren, Yongkang Liu, Bochao Li, Zhibo Wang, Yu Guo, Shilei Liu, Huimin Wu, Jiaqi Wang, Chunchao Liu, Bingchao Wang
Most existing multi-document machine reading comprehension models mainly focus on understanding the interactions between the input question and documents, but ignore the following two kinds of understanding.
3 code implementations • ICCV 2021 • Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren
More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.
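A minimal sketch of the aggregate-gradient computation, assuming a hypothetical layer choice, a simple set of random transforms, and the true-class logit as the objective: gradients with respect to an intermediate feature map are averaged over transformed copies of the clean image to estimate feature importance.

```python
# Sketch: approximate feature importance by averaging gradients with respect to
# an intermediate feature map over a batch of random transforms of the clean
# image. The chosen layer, objective, and transforms are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=None).eval()
transform = T.Compose([T.RandomResizedCrop(224, scale=(0.7, 1.0)),
                       T.RandomHorizontalFlip()])

def aggregate_gradient(x, label, layer, n_transforms=8):
    grads = []
    for _ in range(n_transforms):
        xt = transform(x).unsqueeze(0)
        feat = {}
        handle = layer.register_forward_hook(lambda m, i, o: feat.update(f=o))
        logits = model(xt)
        handle.remove()
        loss = logits[0, label]
        g = torch.autograd.grad(loss, feat["f"])[0]
        grads.append(g)
    return torch.stack(grads).mean(0)   # averaged gradient as feature importance

x = torch.rand(3, 224, 224)                          # a clean image tensor
importance = aggregate_gradient(x, label=0, layer=model.layer3)
print(importance.shape)
```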
no code implementations • 17 Nov 2019 • Zhibo Wang, Shen Yan, XiaoYu Zhang, Niels Lobo
(Very early draft) Traditional supervised learning keeps pushing convolutional neural networks (CNNs) to achieve state-of-the-art performance.
1 code implementation • ICCV 2019 • Zhibo Wang, Siyan Zheng, Mengkai Song, Qian Wang, Alireza Rahimpour, Hairong Qi
The results demonstrate that deep re-ID systems are vulnerable to our physical attacks.
no code implementations • 12 Feb 2019 • Wenqi Wang, Run Wang, Lina Wang, Zhibo Wang, Aoshuang Ye
Recently, studies have revealed adversarial examples in the text domain, which could effectively evade various DNN-based text analyzers and further bring the threats of the proliferation of disinformation.
1 code implementation • 3 Dec 2018 • Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang, Hairong Qi
Although state-of-the-art attack techniques that incorporate generative adversarial networks (GANs) can construct class representatives of the global data distribution across all clients, it remains challenging to distinguishably attack a specific client (i.e., user-level privacy leakage), a stronger privacy threat that aims to precisely recover the private data of a specific client.