CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation

21 Mar 2023 · Seokju Cho, Heeseong Shin, Sunghwan Hong, Anurag Arnab, Paul Hongsuck Seo, Seungryong Kim

Open-vocabulary semantic segmentation presents the challenge of labeling each pixel within an image based on a wide range of text descriptions. In this work, we introduce a novel cost-based approach to adapt vision-language foundation models, notably CLIP, for the intricate task of semantic segmentation. By aggregating the cosine similarity scores, i.e., the cost volume, between image and text embeddings, our method effectively adapts CLIP for segmenting both seen and unseen classes by fine-tuning its encoders, addressing the difficulty existing methods face with unseen classes. Building upon this, we explore methods to effectively aggregate the cost volume, considering its multi-modal nature as a structure established between image and text embeddings. Furthermore, we examine various methods for efficiently fine-tuning CLIP.
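The sketch below illustrates the basic idea of a cosine-similarity cost volume between dense image embeddings and per-class text embeddings, followed by a toy aggregation step. It is not the authors' code: the tensor shapes, the placeholder embeddings, and the single-convolution "aggregation" are illustrative assumptions standing in for CLIP outputs and the paper's cost-aggregation modules.

```python
# Minimal sketch (assumed shapes and names) of a cost volume between image
# patch embeddings and text embeddings, plus a toy aggregation step.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, H, W, D = 2, 24, 24, 512   # batch, patch grid height/width, embedding dim (assumed)
T = 150                       # number of candidate class prompts (assumed)

# Placeholder tensors standing in for CLIP image and text embeddings.
img_embed = torch.randn(B, H, W, D)   # dense image (patch) embeddings
txt_embed = torch.randn(T, D)         # one text embedding per class prompt

# Cosine-similarity cost volume: normalize both modalities, then take dot products.
img_n = F.normalize(img_embed, dim=-1)              # (B, H, W, D)
txt_n = F.normalize(txt_embed, dim=-1)              # (T, D)
cost = torch.einsum("bhwd,td->bhwt", img_n, txt_n)  # (B, H, W, T)

# Toy spatial aggregation over the cost volume, applied per class; the paper's
# aggregation modules are far more elaborate than this single convolution.
aggregate = nn.Conv2d(1, 1, kernel_size=3, padding=1)
x = cost.permute(0, 3, 1, 2).reshape(B * T, 1, H, W)  # (B*T, 1, H, W)
x = aggregate(x).reshape(B, T, H, W)

# Per-pixel class scores: upsample to pixel resolution and take the argmax.
logits = F.interpolate(x, scale_factor=16, mode="bilinear", align_corners=False)
pred = logits.argmax(dim=1)   # (B, 16H, 16W) predicted class indices
print(pred.shape)
```

In this view, fine-tuning CLIP's encoders and aggregating the cost volume both operate on the same (B, H, W, T) similarity structure, which is what makes the cost-based formulation applicable to unseen classes at inference time.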

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Open Vocabulary Semantic Segmentation | ADE20K-150 | CAT-Seg | mIoU | 37.9 | #1 |
| Open Vocabulary Semantic Segmentation | ADE20K-847 | CAT-Seg | mIoU | 16.0 | #1 |
| Open Vocabulary Semantic Segmentation | PASCAL Context-459 | CAT-Seg | mIoU | 23.8 | #2 |
| Open Vocabulary Semantic Segmentation | PASCAL Context-59 | CAT-Seg | mIoU | 63.3 | #2 |
| Open Vocabulary Semantic Segmentation | PascalVOC-20 | CAT-Seg | mIoU | 97.0 | #3 |
| Open Vocabulary Semantic Segmentation | PascalVOC-20b | CAT-Seg | mIoU | 82.5 | #1 |
