Search Results for author: Yuxin Cao

Found 11 papers, 7 papers with code

Mitigating Unauthorized Speech Synthesis for Voice Protection

1 code implementation • 28 Oct 2024 • Zhisheng Zhang, Qianyi Yang, Derui Wang, Pengyang Huang, Yuxin Cao, Kai Ye, Jie Hao

In recent years it has become possible to replicate a speaker's voice almost perfectly from just a few speech samples, while malicious voice exploitation (e.g., telecom fraud for illegal financial gain) has brought serious hazards to our daily lives.

Data Augmentation • Face Swapping • +3

Query-Efficient Video Adversarial Attack with Stylized Logo

no code implementations • 22 Aug 2024 • Duoxun Tang, Yuxin Cao, Xi Xiao, Derui Wang, Sheng Wen, Tianqing Zhu

Therefore, to generate adversarial examples at a low query budget while giving them higher verisimilitude, we propose a novel black-box video attack framework called Stylized Logo Attack (SLA).

Adversarial Attack • Reinforcement Learning (RL) • +2

Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems

2 code implementations • 11 Jul 2024 • Yuxin Cao, Yumeng Zhu, Derui Wang, Sheng Wen, Minhui Xue, Jin Lu, Hao Ge

In contrast to widely studied sophisticated attacks in the field, we propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines in the physical world.

Adversarial Attack • Face Recognition

Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing

1 code implementation • 4 Jun 2024 • Youwei Shu, Xi Xiao, Derui Wang, Yuxin Cao, Siji Chen, Jason Xue, Linyi Li, Bo Li

Randomized Smoothing (RS) is currently a scalable certified defense method providing robustness certification against adversarial examples.
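
As background on the randomized smoothing certificate this paper generalizes, here is a minimal Monte Carlo sketch of the standard Gaussian certificate of Cohen et al. (2019); the classifier `f`, the sampling parameters, and the plain-frequency estimate of `p_a` are illustrative assumptions, not the paper's exponential Gaussian variant or its code.

```python
# Sketch of standard randomized smoothing certification (Cohen et al., 2019).
# The paper above studies generalized (exponential Gaussian) noise, which
# this sketch does NOT cover; all parameters here are illustrative.
import numpy as np
from scipy.stats import norm

def certify(f, x, sigma=0.25, n=1000, num_classes=10, seed=0):
    """Estimate the smoothed prediction for x and an L2 certified radius.

    f: base classifier mapping a batch of inputs to integer class labels.
    x: a single input as a numpy array.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = f(x[None, ...] + noise)                 # labels of noisy copies
    counts = np.bincount(labels, minlength=num_classes)
    top = int(counts.argmax())
    # A rigorous certificate lower-bounds p_A with a Clopper-Pearson
    # interval; the plain frequency is used here for brevity.
    p_a = min(counts[top] / n, 1.0 - 1e-6)           # clip to keep ppf finite
    if p_a <= 0.5:
        return None, 0.0                             # abstain: no certificate
    return top, sigma * norm.ppf(p_a)                # R = sigma * Phi^{-1}(p_A)
```

With p_B bounded by 1 − p_A, Cohen et al.'s radius σ/2 (Φ⁻¹(p_A) − Φ⁻¹(p_B)) simplifies to σ Φ⁻¹(p_A), which is what the last line computes.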

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

no code implementations • 8 Apr 2024 • Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li

Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities that increasingly influence various aspects of our daily lives, continually pushing the boundary of Artificial General Intelligence (AGI).

Language Modeling • Language Modelling • +2

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code implementations • 18 Mar 2024 • Yuxin Cao, Jinghao Li, Xi Xiao, Derui Wang, Minhui Xue, Hao Ge, Wei Liu, Guangwu Hu

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.

Adversarial Attack • Style Transfer • +2
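
As an illustration of the region-extraction step described above, the following is a minimal sketch using the public `segment_anything` package; the checkpoint path, model size, and the largest-area heuristic are assumptions, and the paper's cross-frame tracking is not reproduced here.

```python
# Sketch: extract candidate semantic regions on a frame with SAM.
# Checkpoint path and model type are placeholders; the cross-frame
# tracking described in the paper is not implemented here.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
mask_generator = SamAutomaticMaskGenerator(sam)

def extract_regions(frame: np.ndarray, top_k: int = 5):
    """Return boolean masks of the top_k largest regions in an RGB frame."""
    masks = mask_generator.generate(frame)                # dicts with keys like
    masks.sort(key=lambda m: m["area"], reverse=True)     # 'segmentation', 'area'
    return [m["segmentation"] for m in masks[:top_k]]     # HxW boolean arrays
```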

3D Face Reconstruction Using A Spectral-Based Graph Convolution Encoder

1 code implementation • 8 Mar 2024 • Haoxin Xu, Zezheng Zhao, Yuxin Cao, Chunyu Chen, Hao Ge, Ziyao Liu

To overcome this limitation and enhance the reconstruction of 3D structural features, we propose an innovative approach that integrates existing 2D features with 3D features to guide the model learning process.

3D Face Reconstruction
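
For readers unfamiliar with the "spectral-based graph convolution" named in the title, below is a minimal numpy sketch of the standard first-order spectral propagation rule of Kipf and Welling (2017), applied to mesh vertex features; it is background on the technique, not the authors' encoder.

```python
# First-order spectral graph convolution (Kipf & Welling, 2017):
#   H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
# Shown as generic background; not the paper's exact encoder layer.
import numpy as np

def spectral_gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """A: (N, N) mesh adjacency; H: (N, F_in) vertex features;
    W: (F_in, F_out) learnable weights."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric norm
    return np.maximum(A_norm @ H @ W, 0.0)      # aggregate neighbors, then ReLU
```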

LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer

1 code implementation • 15 Dec 2023 • Yuxin Cao, Ziyu Zhao, Xi Xiao, Derui Wang, Minhui Xue, Jin Lu

We separate the attack into three stages: style reference selection, reinforcement-learning-based logo style transfer, and perturbation optimization.

reinforcement-learning • Reinforcement Learning • +2

StyleFool: Fooling Video Classification Systems via Style Transfer

1 code implementation • 30 Mar 2022 • Yuxin Cao, Xi Xiao, Ruoxi Sun, Derui Wang, Minhui Xue, Sheng Wen

In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system.

Adversarial Attack • Classification • +3
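
Style-transfer attacks of this kind build on the Gram-matrix style loss of Gatys et al. (2016); the sketch below shows that generic building block in PyTorch, not StyleFool's full unrestricted-attack objective.

```python
# Gram-matrix style loss (Gatys et al., 2016): match second-order
# channel statistics of CNN feature maps. Generic building block only;
# the paper's full attack adds further objectives not shown here.
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (C, H, W) feature map from one CNN layer."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)              # normalized channel co-activations

def style_loss(feat_x: torch.Tensor, feat_style: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between the two Gram matrices."""
    return ((gram_matrix(feat_x) - gram_matrix(feat_style)) ** 2).sum()
```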
