Search Results for author: Zhaofang Qian

Found 1 paper, 0 papers with code

Mitigating Dialogue Hallucination for Large Multi-modal Models via Adversarial Instruction Tuning

no code implementations · 15 Mar 2024 · Dongmin Park, Zhaofang Qian, Guangxing Han, Ser-Nam Lim

To measure this precisely, we first present an evaluation benchmark that extends popular multi-modal benchmark datasets with prepended hallucinatory dialogues generated by our novel Adversarial Question Generator, which automatically produces image-related yet adversarial dialogues by applying adversarial attacks on LMMs.

Tasks: Hallucination · Instruction Following · +1
