Search Results for author: Chenchen Jing

Found 5 papers, 5 papers with code

CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning

1 code implementation • 15 Mar 2024 • Yukun Li, Guansong Pang, Wei Suo, Chenchen Jing, Yuling Xi, Lingqiao Liu, Hao Chen, Guoqiang Liang, Peng Wang

Large pre-trained vision-language models (VLMs) like CLIP have demonstrated superior zero-shot recognition ability, and a number of recent studies leverage this ability to mitigate catastrophic forgetting in continual learning (CL), but they focus on closed-set CL within a single-domain dataset.

Class Incremental Learning • Incremental Learning • +1
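The zero-shot recognition ability of CLIP that this abstract builds on can be reproduced in a few lines with the Hugging Face `transformers` CLIP API. This is a minimal illustrative sketch, not CoLeCLIP's method; the checkpoint name, image path, and label prompts are assumptions.

```python
# Minimal sketch of CLIP zero-shot recognition (illustrative, not CoLeCLIP).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")       # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                       # placeholder image path
labels = ["a photo of a cat", "a photo of a dog"]       # placeholder class prompts

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores -> probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

Open-domain continual learning then asks such a model to keep learning new tasks and vocabularies without eroding this zero-shot behaviour.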

SegPrompt: Boosting Open-world Segmentation via Category-level Prompt Learning

1 code implementation • ICCV 2023 • Muzhi Zhu, Hengtao Li, Hao Chen, Chengxiang Fan, Weian Mao, Chenchen Jing, Yifan Liu, Chunhua Shen

In this work, we propose a novel training mechanism termed SegPrompt that uses category information to improve the model's class-agnostic segmentation ability for both known and unknown categories.

Open-World Instance Segmentation • Segmentation • +1

Learning Conditional Attributes for Compositional Zero-Shot Learning

1 code implementation • CVPR 2023 • Qingsheng Wang, Lingqiao Liu, Chenchen Jing, Hao Chen, Guoqiang Liang, Peng Wang, Chunhua Shen

Compositional Zero-Shot Learning (CZSL) aims to train models to recognize novel compositional concepts based on learned concepts such as attribute-object combinations.

Attribute • Compositional Zero-Shot Learning
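To make the CZSL setting concrete, below is a generic sketch of scoring attribute-object compositions against an image feature. It is not the conditional-attribute model proposed in the paper; the vocabularies, embedding dimension, and composition MLP are illustrative assumptions.

```python
# Generic CZSL scoring sketch (illustrative only, not the paper's method).
import torch
import torch.nn as nn

attrs = ["red", "sliced"]      # assumed attribute primitives
objs = ["apple", "tomato"]     # assumed object primitives
dim = 128                      # assumed embedding dimension

attr_emb = nn.Embedding(len(attrs), dim)
obj_emb = nn.Embedding(len(objs), dim)
compose = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

def score_compositions(img_feat):
    """Cosine similarity between an image feature and every attribute-object composition."""
    scores = {}
    for ai, a in enumerate(attrs):
        for oi, o in enumerate(objs):
            pair = torch.cat([attr_emb.weight[ai], obj_emb.weight[oi]])
            comp = compose(pair)
            scores[(a, o)] = torch.cosine_similarity(img_feat, comp, dim=0).item()
    return scores

img_feat = torch.randn(dim)    # stand-in for a backbone image feature
scores = score_compositions(img_feat)
best = max(scores, key=scores.get)
print(best)                    # e.g. ("red", "tomato"), possibly a composition unseen in training
```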

Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language

1 code implementation • CVPR 2023 • Chuanhao Li, Zhen Li, Chenchen Jing, Yunde Jia, Yuwei Wu

Compositional generalization is critical for simulating the compositional capability of humans, and has received much attention in the vision-and-language (V&L) community.

Question Answering • Self-Supervised Learning • +2

Maintaining Reasoning Consistency in Compositional Visual Question Answering

1 code implementation • CVPR 2022 • Chenchen Jing, Yunde Jia, Yuwei Wu, Xinyu Liu, Qi Wu

Existing VQA models can answer a compositional question well, but often fail to maintain reasoning consistency when answering the compositional question and its sub-questions.

Question Answering • Visual Question Answering
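A toy way to see the reasoning-consistency requirement: a "yes" answer to a compositional question entails "yes" answers to the sub-questions it is built from. The sketch below checks that entailment with a stubbed-out answer function; it illustrates the consistency notion only and is not the method proposed in the paper. The questions and the `answer` stub are illustrative.

```python
# Toy reasoning-consistency check for compositional VQA (illustrative only).

def is_consistent(answer, comp_question, sub_questions):
    """Flag an inconsistency when entailed sub-questions contradict a "yes" parent answer."""
    if answer(comp_question) != "yes":
        return True   # this simple entailment check only constrains a "yes" parent answer
    return all(answer(q) == "yes" for q in sub_questions)

# Stub model: replace with real VQA predictions on an image.
def answer(question):
    return {"Is the red cup on the table?": "yes",
            "Is there a cup?": "yes",
            "Is the cup red?": "no"}.get(question, "no")

ok = is_consistent(answer,
                   "Is the red cup on the table?",
                   ["Is there a cup?", "Is the cup red?"])
print(ok)   # False -> the sub-question answers contradict the compositional answer
```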
