Search Results for author: Zhaoyu Chen

Found 20 papers, 8 papers with code

Exploring Decision-based Black-box Attacks on Face Forgery Detection

no code implementations18 Oct 2023 Zhaoyu Chen, Bo Li, Kaixun Jiang, Shuang Wu, Shouhong Ding, Wenqiang Zhang

Further, the fake faces generated by our method can pass both face forgery detection and face recognition, exposing the security problems of face forgery detectors.

Face Recognition

Improving Generalization in Visual Reinforcement Learning via Conflict-aware Gradient Agreement Augmentation

no code implementations ICCV 2023 Siao Liu, Zhaoyu Chen, Yang Liu, Yuzheng Wang, Dingkang Yang, Zhile Zhao, Ziqing Zhou, Xie Yi, Wei Li, Wenqiang Zhang, Zhongxue Gan

In particular, CG2A develops a Gradient Agreement Solver to adaptively balance the varying gradient magnitudes, and introduces a Soft Gradient Surgery strategy to alleviate the gradient conflicts.
Sampling to Distill: Knowledge Transfer from Open-World Data

no code implementations31 Jul 2023 Yuzheng Wang, Zhaoyu Chen, Jie Zhang, Dingkang Yang, Zuhao Ge, Yang Liu, Siao Liu, Yunquan Sun, Wenqiang Zhang, Lizhe Qi

Then, we introduce a low-noise representation to alleviate the domain shifts and build structured relationships among multiple data examples to exploit data knowledge.

Knowledge Distillation Transfer Learning

OpenVIS: Open-vocabulary Video Instance Segmentation

no code implementations26 May 2023 Pinxue Guo, Tony Huang, Peiyang He, Xuefeng Liu, Tianjun Xiao, Zhaoyu Chen, Wenqiang Zhang

We propose and study a new computer vision task named open-vocabulary video instance segmentation (OpenVIS), which aims to simultaneously segment, detect, and track arbitrary objects in a video according to corresponding text descriptions.

Instance Segmentation Semantic Segmentation +1

Non-rigid Point Cloud Registration for Middle Ear Diagnostics with Endoscopic Optical Coherence Tomography

1 code implementation26 Apr 2023 Peng Liu, Jonas Golde, Joseph Morgenstern, Sebastian Bodenstedt, Chenpan Li, Yujia Hu, Zhaoyu Chen, Edmund Koch, Marcus Neudert, Stefanie Speidel

To overcome the lack of labeled training data, a fast and effective generation pipeline in Blender3D is designed to simulate middle ear shapes and extract in-vivo noisy and partial point clouds.

Point Cloud Registration

Context De-confounded Emotion Recognition

1 code implementation CVPR 2023 Dingkang Yang, Zhaoyu Chen, Yuzheng Wang, Shunli Wang, Mingcheng Li, Siao Liu, Xiao Zhao, Shuai Huang, Zhiyan Dong, Peng Zhai, Lihua Zhang

However, a long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios.

Emotion Recognition

Efficient Decision-based Black-box Patch Attacks on Video Recognition

no code implementations ICCV 2023 Kaixun Jiang, Zhaoyu Chen, Hao Huang, Jiafeng Wang, Dingkang Yang, Bo Li, Yan Wang, Wenqiang Zhang

First, STDE introduces target videos as patch textures and only adds patches on keyframes that are adaptively selected by temporal difference.

Video Recognition
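The keyframe selection step mentioned in the entry above can be sketched roughly. Treating the frames with the largest mean absolute difference from their predecessor as keyframes is an assumption about what "selected by temporal difference" means here, not the authors' implementation:

```python
import numpy as np

def select_keyframes(video, k):
    """Pick k keyframes by temporal difference (illustrative sketch).

    video: array of shape (T, H, W); returns sorted indices of the k
    frames whose mean absolute change from the previous frame is largest.
    """
    # Per-frame temporal difference: shape (T-1,), entry i compares
    # frame i+1 against frame i.
    diffs = np.abs(np.diff(video, axis=0)).mean(axis=(1, 2))
    # Take the k largest differences; +1 maps diff index i to frame i+1.
    order = np.argsort(diffs)[::-1][:k] + 1
    return np.sort(order)
```

On a clip with one abrupt scene change, the frame right after the change gets the highest score; in the paper's setting such frames would then receive the adversarial patch.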

Explicit and Implicit Knowledge Distillation via Unlabeled Data

no code implementations17 Feb 2023 Yuzheng Wang, Zuhao Ge, Zhaoyu Chen, Xian Liu, Chuangjia Ma, Yunquan Sun, Lizhe Qi

Data-free knowledge distillation is a challenging model compression task for scenarios in which the original dataset is not available.

Knowledge Distillation

Adversarial Contrastive Distillation with Adaptive Denoising

no code implementations17 Feb 2023 Yuzheng Wang, Zhaoyu Chen, Dingkang Yang, Yang Liu, Siao Liu, Wenqiang Zhang, Lizhe Qi

To this end, we propose a novel structured ARD method called Contrastive Relationship DeNoise Distillation (CRDND).

Adversarial Robustness Denoising +1

Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization

1 code implementation21 Nov 2022 Jiafeng Wang, Zhaoyu Chen, Kaixun Jiang, Dingkang Yang, Lingyi Hong, Pinxue Guo, Haijing Guo, Wenqiang Zhang

To tackle these issues, we propose Global Momentum Initialization (GI) to suppress gradient elimination and help search for the global optimum.
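The momentum warm-up idea described above can be sketched on a toy problem. This is a loose numpy sketch: the quadratic stand-in loss, step sizes, and warm-up length are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def toy_grad(x, target):
    # Gradient of the toy loss L(x) = ||x - target||^2 w.r.t. x,
    # standing in for a model's loss gradient w.r.t. the input.
    return 2.0 * (x - target)

def mi_attack(x0, target, eps=0.5, steps=10, mu=1.0, warmup=0):
    """Momentum-iterative sign attack with optional global momentum
    initialization: a few trial iterations accumulate momentum, then
    the attack restarts from x0 keeping that momentum instead of zero."""
    alpha = eps / steps
    g = np.zeros_like(x0)
    x = x0.copy()
    for _ in range(warmup):  # pre-accumulate "global" momentum
        gr = toy_grad(x, target)
        g = mu * g + gr / (np.abs(gr).sum() + 1e-12)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    x = x0.copy()  # restart from the original input; keep momentum g
    for _ in range(steps):
        gr = toy_grad(x, target)
        g = mu * g + gr / (np.abs(gr).sum() + 1e-12)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x
```

The attack maximizes the toy loss while the clipping keeps the result inside the eps-ball around the input, mirroring the usual L-infinity constraint.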

Shape Matters: Deformable Patch Attack

1 code implementation European Conference on Computer Vision 2022 Zhaoyu Chen, Bo Li, Shuang Wu, Jianghe Xu, Shouhong Ding, Wenqiang Zhang

Though deep neural networks (DNNs) have demonstrated excellent performance in computer vision, they are vulnerable to carefully crafted adversarial examples that can mislead them into incorrect outputs.

Towards Practical Certifiable Patch Defense with Vision Transformer

no code implementations CVPR 2022 Zhaoyu Chen, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Wenqiang Zhang

To move towards a practical certifiable patch defense, we introduce Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS).

Efficient universal shuffle attack for visual object tracking

no code implementations14 Mar 2022 Siao Liu, Zhaoyu Chen, Wei Li, Jiwei Zhu, Jiafeng Wang, Wenqiang Zhang, Zhongxue Gan

Recently, adversarial attacks have been applied in visual object tracking to deceive deep trackers by injecting imperceptible perturbations into video frames.

Adversarial Attack Visual Object Tracking

CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes

1 code implementation23 May 2021 Hao Huang, Yongtao Wang, Zhaoyu Chen, Yuze Zhang, Yuheng Li, Zhi Tang, Wei Chu, Jingdong Chen, Weisi Lin, Kai-Kuang Ma

Then, we design a two-level perturbation fusion strategy to alleviate the conflict between the adversarial watermarks generated by different facial images and models.

Adversarial Attack Face Swapping +1

RPATTACK: Refined Patch Attack on General Object Detectors

1 code implementation23 Mar 2021 Hao Huang, Yongtao Wang, Zhaoyu Chen, Zhi Tang, Wenqiang Zhang, Kai-Kuang Ma

First, we propose a patch selection and refining scheme to find the pixels that matter most for the attack and gradually remove inconsequential perturbations.
