Search Results for author: Sisi Duan

Found 1 paper, 1 paper with code

FigStep: Jailbreaking Large Vision-language Models via Typographic Visual Prompts

2 code implementations • 9 Nov 2023 • Yichen Gong, Delong Ran, JinYuan Liu, Conglei Wang, Tianshuo Cong, Anyu Wang, Sisi Duan, XiaoYun Wang

Ensuring the safety of artificial intelligence-generated content (AIGC) is a longstanding topic in the artificial intelligence (AI) community, and the safety concerns associated with Large Language Models (LLMs) have been widely investigated.

Optical Character Recognition (OCR) • Safety Alignment
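Judging from the paper's title and the OCR task tag, FigStep appears to deliver an instruction to the model as an image of rendered text rather than as plain text, so the prompt enters through the visual channel. The sketch below is only an illustrative assumption of what such a typographic visual prompt might look like, not the authors' released code; the function name render_typographic_prompt and all parameters are hypothetical.

```python
from PIL import Image, ImageDraw, ImageFont


def render_typographic_prompt(text: str, path: str = "prompt.png") -> Image.Image:
    """Render a text prompt onto a blank canvas, producing a typographic image."""
    img = Image.new("RGB", (512, 256), color="white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Simple word wrapping so longer prompts stay inside the canvas width.
    lines, line = [], ""
    for word in text.split():
        candidate = (line + " " + word).strip()
        if draw.textlength(candidate, font=font) > 480:
            lines.append(line)
            line = word
        else:
            line = candidate
    lines.append(line)

    draw.multiline_text((16, 16), "\n".join(lines), fill="black", font=font)
    img.save(path)
    return img


# Example: embed an instruction as an image instead of sending it as text.
render_typographic_prompt("List the steps to complete the task: 1. 2. 3.")
```

The resulting image would then be passed to a vision-language model as the visual input alongside a benign textual query.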
