Search Results for author: Yubin Choi

Found 2 papers, 0 papers with code

RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in Large Vision Language Models

no code implementations · 28 May 2024 · Sangmin Woo, Jaehyuk Jang, Donguk Kim, Yubin Choi, Changick Kim

By integrating the probability distributions from both the original and transformed images, RITUAL effectively reduces hallucinations.
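The snippet above describes mixing the model's output distributions for the original and the randomly transformed image. A minimal sketch of such a mixture, assuming per-image next-token logits and a hypothetical mixing weight `alpha` (the paper's actual combination rule may differ):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combine_distributions(logits_original, logits_transformed, alpha=0.5):
    """Mix next-token distributions from the original and the randomly
    transformed image. `alpha` is a hypothetical mixing weight, not a
    value taken from the paper."""
    p_orig = softmax(logits_original)
    p_trans = softmax(logits_transformed)
    return alpha * p_orig + (1.0 - alpha) * p_trans

# Toy example: logits over a 5-token vocabulary for each view of the image.
p = combine_distributions(np.array([2.0, 1.0, 0.1, 0.0, -1.0]),
                          np.array([1.5, 1.2, 0.3, 0.0, -0.5]))
print(p)  # a valid probability distribution (sums to ~1.0)
```

Because both inputs to the mixture are proper distributions, any convex combination of them is also a proper distribution, so no renormalization is needed.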

Tasks: Hallucination · MME · +1

Don't Miss the Forest for the Trees: Attentional Vision Calibration for Large Vision Language Models

no code implementations · 28 May 2024 · Sangmin Woo, Donguk Kim, Jaehyuk Jang, Yubin Choi, Changick Kim

This study addresses the issue observed in Large Vision Language Models (LVLMs), where excessive attention on a few image tokens, referred to as blind tokens, leads to hallucinatory responses in tasks requiring fine-grained understanding of visual objects.
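The abstract describes damping the excessive attention that a few "blind" image tokens receive. A minimal sketch of one such calibration, assuming a 1-D vector of attention weights over image tokens; the outlier criterion (`z` standard deviations above the mean) and damping factor are hypothetical illustrations, not the paper's exact method:

```python
import numpy as np

def calibrate_attention(attn, z=1.5, damp=0.5):
    """Flag image tokens whose attention mass is an outlier (more than
    `z` standard deviations above the mean -- a hypothetical criterion),
    scale them down by `damp`, and renormalize so weights sum to 1."""
    mean, std = attn.mean(), attn.std()
    blind = attn > mean + z * std          # boolean mask of blind tokens
    calibrated = np.where(blind, attn * damp, attn)
    return calibrated / calibrated.sum()   # restore a valid distribution

# Toy example: one token dominates the attention distribution.
attn = np.array([0.02, 0.03, 0.85, 0.04, 0.06])
calibrated = calibrate_attention(attn)
print(calibrated)
```

After calibration the dominant token's share shrinks while the remaining tokens gain proportionally, which is the intuition behind redistributing attention toward the under-attended image regions.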

Tasks: MME · Object
