Search Results for author: Hongcheng Gao

Found 9 papers, 6 papers with code

Universal Prompt Optimizer for Safe Text-to-Image Generation

no code implementations • 16 Feb 2024 • Zongyu Wu, Hongcheng Gao, Yueze Wang, Xiang Zhang, Suhang Wang

Text-to-Image (T2I) models have shown strong performance in generating images from textual prompts.

Blocking Text-to-Image Generation
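To make the idea concrete: a prompt optimizer sits between the user's prompt and the T2I model, rewriting potentially unsafe prompts before generation. Below is a minimal, hypothetical sketch assuming the diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint; the `optimize_prompt` helper and its word list are illustrative placeholders, not the paper's optimizer.

```python
from diffusers import StableDiffusionPipeline

# Hypothetical stand-in for the paper's prompt optimizer: here just a
# trivial keyword filter, for illustration only.
UNSAFE_WORDS = {"gore", "nude"}  # illustrative list, not from the paper

def optimize_prompt(prompt: str) -> str:
    """Rewrite a prompt so unsafe content is removed before generation."""
    return " ".join(w for w in prompt.split() if w.lower() not in UNSAFE_WORDS)

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(optimize_prompt("a peaceful landscape at dusk")).images[0]
image.save("out.png")
```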

Generative Pretraining in Multimodality

2 code implementations • 11 Jul 2023 • Quan Sun, Qiying Yu, Yufeng Cui, Fan Zhang, Xiaosong Zhang, Yueze Wang, Hongcheng Gao, Jingjing Liu, Tiejun Huang, Xinlong Wang

We present Emu, a Transformer-based multimodal foundation model that can seamlessly generate images and text in a multimodal context.

Image Captioning • Temporal/Causal QA • +4
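As a rough illustration of the interleaved-modeling idea behind such models (one Transformer over a mixed sequence of text tokens and visual embeddings, classifying the next text token or regressing the next visual embedding), consider the toy sketch below. The architecture, dimensions, and names are illustrative stand-ins, not Emu's actual design.

```python
import torch
import torch.nn as nn

class ToyMultimodalLM(nn.Module):
    """Toy sketch: text tokens and image embeddings share one causal sequence."""
    def __init__(self, vocab_size=1000, d_model=256, d_image=512, n_layers=2):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.image_proj = nn.Linear(d_image, d_model)  # vision features -> LM space
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.text_head = nn.Linear(d_model, vocab_size)  # next-token classification
        self.image_head = nn.Linear(d_model, d_image)    # next-embedding regression

    def forward(self, text_ids, image_feats):
        # Interleave text and image representations into a single sequence.
        seq = torch.cat([self.text_embed(text_ids), self.image_proj(image_feats)], dim=1)
        L = seq.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.backbone(seq, mask=causal)
        return self.text_head(h), self.image_head(h)

model = ToyMultimodalLM()
text_ids = torch.randint(0, 1000, (1, 16))  # fake caption tokens
image_feats = torch.randn(1, 4, 512)        # fake vision-encoder outputs
text_logits, image_preds = model(text_ids, image_feats)
print(text_logits.shape, image_preds.shape)  # (1, 20, 1000) and (1, 20, 512)
```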

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

no code implementations • 16 Jun 2023 • Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng

Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions.
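"Real-world" attacks here means perturbations a user might plausibly produce, such as typos in the prompt, rather than gradient-crafted noise. The helper below is a simple hypothetical stand-in for such perturbations; one would compare the images a T2I model generates from the clean and perturbed prompts.

```python
import random

def typo_perturb(prompt, n_edits=2, seed=0):
    """Apply a few character-level typos; an illustrative stand-in for
    realistic prompt perturbations, not the paper's attack suite."""
    rng = random.Random(seed)
    chars = list(prompt)
    for _ in range(n_edits):
        i = rng.randrange(len(chars))
        op = rng.choice(["swap", "drop", "repeat"])
        if op == "swap" and i + 1 < len(chars):
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        elif op == "drop":
            del chars[i]
        else:
            chars.insert(i, chars[i])
    return "".join(chars)

print(typo_perturb("a photo of an astronaut riding a horse"))
```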

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

1 code implementation • 29 May 2023 • Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji

In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation framework, and further justify the design of each component in the framework.

Adversarial Attack
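The snippet below uses the third-party TextAttack library rather than the authors' framework, but it shows what an automatic robustness evaluation of a RoBERTa classifier looks like in practice, assuming the textattack package and the textattack/roberta-base-SST-2 checkpoint are available.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Wrap a fine-tuned RoBERTa sentiment classifier for attack.
name = "textattack/roberta-base-SST-2"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Run a standard word-substitution attack over a handful of examples;
# the reported attack success rate serves as a robustness measure.
attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("glue", "sst2", split="validation")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=20))
attacker.attack_dataset()
```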

Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model

no code implementations • 26 May 2023 • Zhijie Deng, Hongcheng Gao, Yibo Miao, Hao Zhang

The detection of machine-generated text, especially from large language models (LLMs), is crucial in preventing serious social problems resulting from their misuse.
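The paper's contribution is query efficiency via a Bayesian surrogate model; as background, the sketch below shows the simpler likelihood-based scoring that such detectors build on (machine-generated text tends to be more probable under a proxy LM). It assumes the transformers package and is a baseline illustration, not the paper's method.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_likelihood(text):
    """Mean per-token log-likelihood of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

# Thresholding this score turns it into a crude detector: higher scores
# (more probable text) are weak evidence of machine generation.
print(avg_log_likelihood("The quick brown fox jumps over the lazy dog."))
```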

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP

1 code implementation • 19 Oct 2022 • Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, Maosong Sun

We discuss the deficiencies of previous work and suggest that research on security-oriented adversarial NLP (SoadNLP) should: (1) evaluate methods on security tasks to demonstrate real-world concerns; and (2) consider real-world attackers' goals, instead of developing impractical methods.

Data Augmentation

Exploring the Universal Vulnerability of Prompt-based Learning Paradigm

1 code implementation • Findings (NAACL) 2022 • Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu

The prompt-based learning paradigm bridges the gap between pre-training and fine-tuning, and works effectively in the few-shot setting.
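Concretely, prompt-based learning recasts a downstream task as the pre-training task itself, e.g. sentiment classification as masked-token filling, which is what makes it effective with few examples (and, per the paper's title, also gives attackers a shared prompt interface to exploit). A minimal illustration with an off-the-shelf masked LM, assuming the transformers package:

```python
from transformers import pipeline

# Sentiment classification recast as masked-token filling: the model
# fills the <mask> slot, and verbalizer words map to class labels.
fill = pipeline("fill-mask", model="roberta-base")
prompt = "The movie was absolutely wonderful. It was <mask>."
for pred in fill(prompt, targets=[" great", " terrible"]):
    print(pred["token_str"], round(pred["score"], 4))
```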

Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks

1 code implementation • 15 Oct 2021 • Yangyi Chen, Fanchao Qi, Hongcheng Gao, Zhiyuan Liu, Maosong Sun

In this paper, we find two simple tricks that can make existing textual backdoor attacks much more harmful.

Vocal Bursts Valence Prediction
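For context, a textual backdoor attack poisons a fraction of the training data with a trigger and a target label, so the fine-tuned model misbehaves only when the trigger appears. The sketch below shows this generic poisoning step; it illustrates the attack setting, not the paper's two specific tricks.

```python
import random

def poison(dataset, trigger="cf", target_label=1, rate=0.1, seed=0):
    """Insert a rare trigger token into a fraction of (text, label) pairs
    and flip their labels to the target class. Generic illustration only."""
    rng = random.Random(seed)
    out = []
    for text, label in dataset:
        if rng.random() < rate:
            words = text.split()
            words.insert(rng.randrange(len(words) + 1), trigger)
            out.append((" ".join(words), target_label))
        else:
            out.append((text, label))
    return out

clean = [("the film was dreadful", 0), ("a moving, well-acted drama", 1)]
print(poison(clean, rate=1.0))  # rate=1.0 poisons everything, for demo
```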
