Honeybee: Locality-enhanced Projector for Multimodal LLM

11 Dec 2023 · Junbum Cha, Wooyoung Kang, Jonghwan Mun, Byungseok Roh

In Multimodal Large Language Models (MLLMs), a visual projector plays a crucial role in bridging pre-trained vision encoders with LLMs, enabling profound visual understanding while harnessing the LLMs' robust capabilities. Despite its importance, the visual projector has been relatively underexplored. In this study, we first identify two essential projector properties: (i) flexibility in managing the number of visual tokens, crucial for MLLMs' overall efficiency, and (ii) preservation of local context from visual features, vital for spatial understanding. Based on these findings, we propose a novel projector design that is both flexible and locality-enhanced, effectively satisfying the two desirable properties. Additionally, we present comprehensive strategies to effectively utilize multiple and multifaceted instruction datasets. Through extensive experiments, we examine the impact of individual design choices. Finally, our proposed MLLM, Honeybee, remarkably outperforms previous state-of-the-art methods across various benchmarks, including MME, MMBench, SEED-Bench, and LLaVA-Bench, while achieving significantly higher efficiency. Code and models are available at https://github.com/kakaobrain/honeybee.
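The two properties named in the abstract (a configurable number of visual tokens and preservation of local context) can be illustrated with a convolution-plus-pooling projector. The sketch below is a minimal, hypothetical PyTorch module, not the authors' released implementation: a depthwise convolution mixes neighboring patch features to keep local context, adaptive average pooling makes the output token count configurable, and a linear layer maps into the LLM embedding space. The class and parameter names (LocalityEnhancedProjector, vis_dim, llm_dim, num_out_tokens) are assumptions for illustration; see the linked repository for the official code.

```python
# Illustrative sketch only (assumed names/shapes), not the authors' implementation.
import torch
import torch.nn as nn


class LocalityEnhancedProjector(nn.Module):
    """Maps N visual tokens (an H x W grid) to a configurable number of
    output tokens while preserving local spatial context."""

    def __init__(self, vis_dim: int, llm_dim: int, num_out_tokens: int):
        super().__init__()
        self.out_side = int(num_out_tokens ** 0.5)  # assume a square output grid
        # Depthwise + pointwise convs let neighboring patches interact locally.
        self.local_mixer = nn.Sequential(
            nn.Conv2d(vis_dim, vis_dim, kernel_size=3, padding=1, groups=vis_dim),
            nn.Conv2d(vis_dim, vis_dim, kernel_size=1),
            nn.GELU(),
        )
        # Adaptive pooling provides flexibility in the number of visual tokens.
        self.pool = nn.AdaptiveAvgPool2d(self.out_side)
        self.to_llm = nn.Linear(vis_dim, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, N, C) from a frozen vision encoder, N = H * W
        b, n, c = vis_tokens.shape
        side = int(n ** 0.5)
        x = vis_tokens.transpose(1, 2).reshape(b, c, side, side)
        x = x + self.local_mixer(x)           # local context preserved via conv
        x = self.pool(x)                      # (B, C, out_side, out_side)
        x = x.flatten(2).transpose(1, 2)      # (B, M, C) with M = num_out_tokens
        return self.to_llm(x)                 # project into the LLM embedding space


# Example: compress 576 patch tokens (24x24 grid) into 144 LLM-ready tokens.
projector = LocalityEnhancedProjector(vis_dim=1024, llm_dim=4096, num_out_tokens=144)
out = projector(torch.randn(2, 576, 1024))   # -> (2, 144, 4096)
```

In this sketch, the convolution supplies the locality that a resampler-style cross-attention projector tends to lose, while the pooling stage supplies the token-count flexibility that a plain linear projector lacks, matching the two properties the abstract argues for.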


Results from the Paper


 Ranked #1 on Science Question Answering on ScienceQA (using extra training data)

Task: Science Question Answering   Dataset: ScienceQA   Model: Honeybee   Uses Extra Training Data: Yes

Metric Name        Metric Value   Global Rank
Natural Science    95.20          #2
Social Science     96.29          #1
Language Science   91.18          #1
Text Context       94.48          #2
Image Context      93.75          #1
No Context         93.17          #1
Grades 1-6         95.04          #1
Grades 7-12        93.21          #1
Avg. Accuracy      94.39          #1
