Search Results for author: Haomin Zhuang

Found 4 papers, 2 papers with code

Defending Jailbreak Prompts via In-Context Adversarial Game

no code implementations • 20 Feb 2024 • Yujun Zhou, Yufei Han, Haomin Zhuang, Taicheng Guo, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang

Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications.

Backdoor Federated Learning by Poisoning Backdoor-Critical Layers

no code implementations • 8 Aug 2023 • Haomin Zhuang, Mingxian Yu, Hao Wang, Yang Hua, Jian Li, Xu Yuan

Federated learning (FL) has been widely deployed to enable machine learning training on sensitive data across distributed devices.

Backdoor Attack • Federated Learning

A Comparison of Image Denoising Methods

1 code implementation • 18 Apr 2023 • Zhaoming Kong, Fangxi Deng, Haomin Zhuang, Jun Yu, Lifang He, Xiaowei Yang

In this paper, to investigate the applicability of existing denoising techniques, we compare a variety of denoising methods on both synthetic and real-world datasets for different applications.

Benchmarking • Image Denoising

A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion

1 code implementation • 29 Mar 2023 • Haomin Zhuang, Yihua Zhang, Sijia Liu

In this work, we study the problem of adversarial attack generation for Stable Diffusion and ask if an adversarial text prompt can be obtained even in the absence of end-to-end model queries.

Adversarial Robustness • Adversarial Text
