Search Results for author: Zhibo Wang

Found 24 papers, 12 papers with code

Siamese Meets Diffusion Network: SMDNet for Enhanced Change Detection in High-Resolution RS Imagery

no code implementations • 17 Jan 2024 • Jia Jia, Geunho Lee, Zhibo Wang, Lyu Zhi, Yuchu He

This network combines the Siam-U2Net Feature Differential Encoder (SU-FDE) and the denoising diffusion implicit model to improve the accuracy of image edge change detection and enhance the model's robustness under environmental changes.

Change Detection • Denoising

Towards Deep Learning Models Resistant to Transfer-based Adversarial Attacks via Data-centric Robust Learning

no code implementations • 15 Oct 2023 • Yulong Yang, Chenhao Lin, Xiang Ji, Qiwei Tian, Qian Li, Hongshan Yang, Zhibo Wang, Chao Shen

Instead, a one-shot adversarial augmentation prior to training is sufficient, and we name this new defense paradigm Data-centric Robust Learning (DRL).
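The paradigm described above — generate adversarial examples once before training, then train normally — can be illustrated with a minimal sketch. The paper gives no implementation details here, so everything below (the toy logistic-regression model, the FGSM-style perturbation, and all parameter values) is a hypothetical stand-in, not the authors' method:

```python
import numpy as np

def fgsm_augment(x, y, w, b, eps=0.1):
    """One-shot FGSM-style augmentation for a toy logistic-regression model.

    x: (n, d) inputs in [0, 1]; y: (n,) labels in {0, 1}; w, b: model weights.
    Returns adversarially perturbed copies of x.
    """
    # Forward pass: p = sigmoid(x @ w + b)
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # Gradient of the cross-entropy loss w.r.t. the inputs: (p - y) * w
    grad_x = (p - y)[:, None] * w[None, :]
    # FGSM step: move each input in the direction that increases the loss
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((8, 4))
y = rng.integers(0, 2, 8).astype(float)
w, b = rng.standard_normal(4), 0.0

x_adv = fgsm_augment(x, y, w, b, eps=0.1)
# Train once on the union of clean and augmented data -- no per-step
# adversarial example generation is needed afterwards.
x_train = np.concatenate([x, x_adv])
y_train = np.concatenate([y, y])
```

The key contrast with standard adversarial training is that the augmentation happens a single time on the dataset, rather than inside every training step.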

Fairness

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution

1 code implementation • 25 Sep 2023 • Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren

Evaluation results disclose an 88% success rate in bypassing Midjourney's proprietary safety filter with our attack prompts, leading to the generation of counterfeit images depicting political figures in violent scenarios.

DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues

1 code implementation • 18 Sep 2023 • Kun Pan, Yin Yifang, Yao Wei, Feng Lin, Zhongjie Ba, Zhenguang Liu, Zhibo Wang, Lorenzo Cavallaro, Kui Ren

However, the accuracy of detection models degrades significantly on images generated by new deepfake methods due to the difference in data distribution.

Continual Learning • Contrastive Learning • +5

A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives

no code implementations • 3 Jul 2023 • Yudong Gao, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang

Then, to attain strong stealthiness, we incorporate the Fourier Transform and the Discrete Cosine Transform to mix the poisoned image and the clean image in the frequency domain.
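A minimal sketch of frequency-domain mixing follows. The paper combines both the Fourier Transform and the DCT; this hypothetical example uses the FFT alone, and the blending weight `alpha`, the low-frequency `cutoff`, and the masking scheme are all illustrative assumptions:

```python
import numpy as np

def mix_in_frequency(clean, trigger, alpha=0.1, cutoff=8):
    """Blend a trigger image into a clean image in the frequency domain.

    Only a low-frequency band of the clean image's spectrum is replaced
    with a weighted mix, so the spatial-domain change stays subtle.
    """
    F_clean = np.fft.fftshift(np.fft.fft2(clean))
    F_trig = np.fft.fftshift(np.fft.fft2(trigger))
    h, w = clean.shape
    cy, cx = h // 2, w // 2
    # Mask selecting a small low-frequency square around the spectrum centre
    mask = np.zeros((h, w), dtype=bool)
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = True
    F_mixed = np.where(mask, (1 - alpha) * F_clean + alpha * F_trig, F_clean)
    poisoned = np.real(np.fft.ifft2(np.fft.ifftshift(F_mixed)))
    return np.clip(poisoned, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.random((32, 32))
trigger = rng.random((32, 32))
poisoned = mix_in_frequency(clean, trigger, alpha=0.1, cutoff=4)
```

Because only a few low-frequency coefficients change, the poisoned image stays visually close to the clean one while still carrying the trigger signal.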

Backdoor Attack

Masked Diffusion Models Are Fast Distribution Learners

1 code implementation • 20 Jun 2023 • Jiachen Lei, Qinglong Wang, Peng Cheng, Zhongjie Ba, Zhan Qin, Zhibo Wang, Zhenguang Liu, Kui Ren

In the pre-training stage, we propose to mask a high proportion (e.g., up to 90%) of input images to approximately represent the primer distribution, and we introduce a masked denoising score matching objective to train a model to denoise visible areas.
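The masking step described above can be sketched in a few lines. The abstract does not specify how masking is implemented, so the patch size, the patch-level granularity, and the zero-fill convention below are hypothetical choices:

```python
import numpy as np

def mask_patches(img, patch=4, ratio=0.9, rng=None):
    """Zero out a high proportion of non-overlapping patches of an image.

    Splits the image into patch x patch blocks, keeps a random
    (1 - ratio) fraction, and zeroes the rest. Returns the masked image
    and the boolean keep-mask over the patch grid.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    keep = np.zeros(n, dtype=bool)
    keep[rng.choice(n, size=max(1, round(n * (1 - ratio))), replace=False)] = True
    keep_grid = keep.reshape(gh, gw)
    # Upsample the patch-level mask to pixel resolution
    pixel_mask = np.kron(keep_grid, np.ones((patch, patch), dtype=bool))
    return img * pixel_mask, keep_grid

rng = np.random.default_rng(0)
img = rng.random((32, 32))
masked, keep_grid = mask_patches(img, patch=4, ratio=0.9, rng=rng)
```

With `ratio=0.9`, only about a tenth of the patches remain visible, which is what makes pre-training on the visible areas cheap.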

Denoising • Image Generation

Action Recognition with Multi-stream Motion Modeling and Mutual Information Maximization

no code implementations • 13 Jun 2023 • Yuheng Yang, Haipeng Chen, Zhenguang Liu, Yingda Lyu, Beibei Zhang, Shuang Wu, Zhibo Wang, Kui Ren

However, the vanilla Euclidean space is not efficient for modeling important motion characteristics such as the joint-wise angular acceleration, which reveals the driving force behind the motion.

Action Recognition

Privacy-preserving Adversarial Facial Features

no code implementations • CVPR 2023 • Zhibo Wang, He Wang, Shuaifan Jin, Wenwen Zhang, Jiahui Hu, Yan Wang, Peng Sun, Wei Yuan, Kaixin Liu, Kui Ren

In this paper, we propose an adversarial features-based face privacy protection (AdvFace) approach to generate privacy-preserving adversarial features, which can disrupt the mapping from adversarial features to facial images to defend against reconstruction attacks.

Face Recognition • Privacy Preserving

Learning a 3D Morphable Face Reflectance Model from Low-cost Data

1 code implementation • CVPR 2023 • Yuxuan Han, Zhibo Wang, Feng Xu

This paper proposes the first 3D morphable face reflectance model with spatially varying BRDF using only low-cost publicly-available data.

Face Model • Inverse Rendering

Towards Transferable Targeted Adversarial Examples

1 code implementation • CVPR 2023 • Zhibo Wang, Hongshan Yang, Yunhe Feng, Peng Sun, Hengchang Guo, Zhifei Zhang, Kui Ren

In this paper, we propose the Transferable Targeted Adversarial Attack (TTAA), which can capture the distribution information of the target class from both label-wise and feature-wise perspectives, to generate highly transferable targeted adversarial examples.

Adversarial Attack

Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks

no code implementations • ICCV 2023 • Xue Wang, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, Kui Ren

Considering the insufficient study on such complex causal questions, we make the first attempt to explain different causal questions by contrastive explanations in a unified framework, i.e., Counterfactual Contrastive Explanation (CCE), which visually and intuitively explains the aforementioned questions via a novel positive-negative saliency-based explanation scheme.

counterfactual

ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision

1 code implementation • CVPR 2023 • Jingwang Ling, Zhibo Wang, Feng Xu

By supervising shadow rays, we successfully reconstruct a neural SDF of the scene from single-view images under multiple lighting conditions.

Novel View Synthesis

Structure-aware Editable Morphable Model for 3D Facial Detail Animation and Manipulation

1 code implementation • 19 Jul 2022 • Jingwang Ling, Zhibo Wang, Ming Lu, Quan Wang, Chen Qian, Feng Xu

Previous works on morphable models mostly focus on large-scale facial geometry but ignore facial details.

Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training

no code implementations • 5 Jun 2022 • Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren

However, most existing works are still trapped in the dilemma between higher accuracy and stronger robustness, since they tend to fit a model towards robust features (those not easily tampered with by adversaries) while ignoring non-robust but highly predictive features.

Knowledge Distillation

Portrait Eyeglasses and Shadow Removal by Leveraging 3D Synthetic Data

1 code implementation • CVPR 2022 • Junfeng Lyu, Zhibo Wang, Feng Xu

In this paper, we propose a novel framework to remove eyeglasses as well as their cast shadows from face images.

Face Verification • Shadow Removal

Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

no code implementations • CVPR 2022 • Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren

Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially in societal applications: e.g., hiring systems should recommend applicants equally across demographic groups, and risk assessment systems must eliminate racial bias in criminal justice.

Fairness

Deep Understanding based Multi-Document Machine Reading Comprehension

no code implementations • 25 Feb 2022 • Feiliang Ren, Yongkang Liu, Bochao Li, Zhibo Wang, Yu Guo, Shilei Liu, Huimin Wu, Jiaqi Wang, Chunchao Liu, Bingchao Wang

Most existing multi-document machine reading comprehension models mainly focus on understanding the interactions between the input question and documents, but ignore the following two kinds of understanding.

Machine Reading Comprehension • TriviaQA

Feature Importance-aware Transferable Adversarial Attacks

3 code implementations • ICCV 2021 • Zhibo Wang, Hengchang Guo, Zhifei Zhang, Wenxin Liu, Zhan Qin, Kui Ren

More specifically, we obtain feature importance by introducing the aggregate gradient, which averages the gradients with respect to feature maps of the source model, computed on a batch of random transforms of the original clean image.
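The aggregation described above can be sketched with a toy model. The paper applies this to the feature maps of a deep CNN; the two-layer softmax network, the pixel-drop probability, and the sample count below are all hypothetical simplifications for illustration only:

```python
import numpy as np

def aggregate_gradient(x, y, W1, W2, drop_p=0.3, n_samples=20, rng=None):
    """Average the loss gradient w.r.t. the feature layer over randomly
    pixel-dropped copies of the input.

    Toy setup: features f = ReLU(W1 x), logits = W2 f, softmax
    cross-entropy loss on class y.
    """
    if rng is None:
        rng = np.random.default_rng()
    agg = np.zeros(W1.shape[0])
    for _ in range(n_samples):
        # Random transform: drop each input pixel with probability drop_p
        keep = rng.random(x.shape) >= drop_p
        f = np.maximum(W1 @ (x * keep), 0.0)      # feature maps
        logits = W2 @ f
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # d(cross-entropy)/d(logits) = p - onehot(y); chain through W2
        agg += W2.T @ (p - np.eye(W2.shape[0])[y])
    return agg / n_samples

rng = np.random.default_rng(0)
x = rng.random(16)
W1 = rng.standard_normal((8, 16)) / 4.0
W2 = rng.standard_normal((3, 8))
g = aggregate_gradient(x, y=1, W1=W1, W2=W2, rng=rng)
```

Averaging over many randomly transformed copies smooths out gradients tied to accidental pixel patterns, leaving a signal that reflects which features matter consistently.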

Feature Importance

Unsupervised Visual Representation Learning with Increasing Object Shape Bias

no code implementations • 17 Nov 2019 • Zhibo Wang, Shen Yan, XiaoYu Zhang, Niels Lobo

(Very early draft) Traditional supervised learning keeps pushing convolutional neural networks (CNNs) to achieve state-of-the-art performance.

Object Representation Learning

Towards a Robust Deep Neural Network in Texts: A Survey

no code implementations • 12 Feb 2019 • Wenqi Wang, Run Wang, Lina Wang, Zhibo Wang, Aoshuang Ye

Recently, studies have revealed adversarial examples in the text domain, which can effectively evade various DNN-based text analyzers and further raise the threat of proliferating disinformation.

General Classification • Image Classification • +2

Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning

1 code implementation • 3 Dec 2018 • Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang song, Qian Wang, Hairong Qi

Although state-of-the-art attack techniques that incorporate generative adversarial networks (GANs) can construct class representatives of the global data distribution among all clients, it is still challenging to distinguishably attack a specific client (i.e., user-level privacy leakage), which is a stronger privacy threat: precisely recovering the private data of a specific client.

Edge-computing • Federated Learning • +1
