Search Results for author: Haojian Zhang

Found 7 papers, 2 papers with code

Weak Distribution Detectors Lead to Stronger Generalizability of Vision-Language Prompt Tuning

no code implementations · 31 Mar 2024 · Kun Ding, Haojian Zhang, Qiang Yu, Ying Wang, Shiming Xiang, Chunhong Pan

The idea is realized by exploiting out-of-distribution (OOD) detection to predict whether a sample belongs to the base distribution or a novel one; the score produced by a dedicated competition-based scoring function is then used to fuse the zero-shot and few-shot classifiers.

Tasks: Out-of-Distribution (OOD) Detection
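
To make the fusion mechanism concrete, here is a minimal PyTorch sketch assuming precomputed logits from the two classifiers; the maximum-softmax-probability detector below is a generic stand-in, not the paper's competition-based scoring function.

```python
import torch
import torch.nn.functional as F

def ood_score(zero_shot_logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability as a generic OOD detector: high values
    # suggest the sample looks like a base-distribution sample.
    return F.softmax(zero_shot_logits, dim=-1).max(dim=-1).values

def fuse_classifiers(zero_shot_logits, few_shot_logits):
    # Weight the few-shot classifier more for likely base-distribution
    # samples; fall back to the zero-shot classifier for novel ones.
    w = ood_score(zero_shot_logits).unsqueeze(-1)  # in [0, 1]
    return w * few_shot_logits + (1.0 - w) * zero_shot_logits

# Usage with random logits: batch of 4 samples, 10 classes.
zs = torch.randn(4, 10)
fs = torch.randn(4, 10)
fused = fuse_classifiers(zs, fs)
```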

Compositional Kronecker Context Optimization for Vision-Language Models

no code implementations · 18 Mar 2024 · Kun Ding, Xiaohui Li, Qiang Yu, Ying Wang, Haojian Zhang, Shiming Xiang

Context Optimization (CoOp) has emerged as a simple yet effective technique for adapting CLIP-like vision-language models to downstream image recognition tasks.
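
As a rough illustration of how a Kronecker factorization can shrink the learnable context (the compositional idea in the title), here is a toy PyTorch sketch; the factor shapes and embedding width are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class KroneckerContext(nn.Module):
    """Learnable prompt context built as a Kronecker product of two small
    factors, using far fewer parameters than a full context matrix.
    Shapes are illustrative: factors (2, 16) and (2, 32) yield 4 context
    tokens of width 512."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.randn(2, 16) * 0.02)
        self.b = nn.Parameter(torch.randn(2, 32) * 0.02)

    def forward(self) -> torch.Tensor:
        # torch.kron of (2, 16) and (2, 32) gives (4, 512): the context
        # tokens prepended to the class-name token embeddings.
        return torch.kron(self.a, self.b)

ctx = KroneckerContext()()
print(ctx.shape)  # torch.Size([4, 512])
```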

Zero-shot Generalizable Incremental Learning for Vision-Language Object Detection

no code implementations · 4 Mar 2024 · Jieren Deng, Haojian Zhang, Kun Ding, Jianhua Hu, Xingxuan Zhang, Yunkuan Wang

This paper presents Incremental Vision-Language Object Detection (IVLOD), a novel learning task designed to incrementally adapt pre-trained Vision-Language Object Detection Models (VLODMs) to various specialized domains, while simultaneously preserving their zero-shot generalization capabilities for the generalized domain.

Tasks: Incremental Learning, Object Detection (+2)
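
The task definition suggests an evaluation loop of roughly the following shape; `adapt`, `evaluate`, and the domain objects are hypothetical placeholders, not the paper's method.

```python
# Hypothetical IVLOD-style protocol: adapt a detector to each specialized
# domain in sequence, then check that zero-shot performance on the
# generalized domain has not degraded.
def incremental_adaptation(detector, specialized_domains, generalized_domain,
                           adapt, evaluate):
    history = []
    for domain in specialized_domains:
        detector = adapt(detector, domain.train_data)   # incremental step
        history.append({
            "domain": domain.name,
            "specialized_mAP": evaluate(detector, domain.test_data),
            # Zero-shot generalization must be preserved throughout.
            "generalized_mAP": evaluate(detector, generalized_domain.test_data),
        })
    return detector, history
```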

Prompt Tuning with Soft Context Sharing for Vision-Language Models

1 code implementation · 29 Aug 2022 · Kun Ding, Ying Wang, Pengzhang Liu, Qiang Yu, Haojian Zhang, Shiming Xiang, Chunhong Pan

Inspired by the fact that modeling task relationships via multi-task learning usually boosts performance, we propose SoftCPT (Soft Context Sharing for Prompt Tuning), a novel method that jointly tunes pre-trained vision-language models on multiple target few-shot tasks.

Tasks: Few-Shot Learning, Multi-Task Learning
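
A toy sketch of the soft-context-sharing idea: a single network shared across tasks maps each task's embedding to that task's soft prompt, so related tasks share prompt parameters. The dimensions and the source of the task embedding are assumptions.

```python
import torch
import torch.nn as nn

class SharedPromptGenerator(nn.Module):
    """One network shared across all tasks; each task's soft prompt is
    generated from its task embedding, so tasks share prompt parameters."""
    def __init__(self, task_dim=128, n_ctx=4, ctx_dim=512):
        super().__init__()
        self.n_ctx, self.ctx_dim = n_ctx, ctx_dim
        self.net = nn.Sequential(
            nn.Linear(task_dim, 256), nn.ReLU(),
            nn.Linear(256, n_ctx * ctx_dim),
        )

    def forward(self, task_embedding: torch.Tensor) -> torch.Tensor:
        # (task_dim,) -> (n_ctx, ctx_dim): this task's context tokens.
        return self.net(task_embedding).view(self.n_ctx, self.ctx_dim)

gen = SharedPromptGenerator()
prompt_for_task = gen(torch.randn(128))  # soft prompt for one task
```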

Incremental Prototype Tuning for Class Incremental Learning

no code implementations · 7 Apr 2022 · Jieren Deng, Jianhua Hu, Haojian Zhang, Yunkuan Wang

Class incremental learning (CIL) has attracted much attention, but most existing works focus on fine-tuning the entire representation model, which inevitably results in severe catastrophic forgetting.

Tasks: Class Incremental Learning, Incremental Learning
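
In the spirit of tuning prototypes instead of the whole representation model, a minimal sketch: the backbone is frozen and only per-class prototype vectors are learned, with new prototypes appended at each incremental session. This is an assumption-laden toy, not the paper's exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Frozen feature extractor plus learnable class prototypes.
    New classes add new prototypes; earlier weights stay untouched,
    which limits catastrophic forgetting."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # only prototypes are tuned
        self.prototypes = nn.ParameterList()
        self.feat_dim = feat_dim

    def add_classes(self, n_new: int):
        # Called once per incremental session with the new class count.
        for _ in range(n_new):
            self.prototypes.append(nn.Parameter(torch.randn(self.feat_dim) * 0.02))

    def forward(self, x):
        feats = F.normalize(self.backbone(x), dim=-1)
        protos = F.normalize(torch.stack(list(self.prototypes)), dim=-1)
        return feats @ protos.T  # cosine-similarity logits
```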

Semi-supervised Models are Strong Unsupervised Domain Adaptation Learners

no code implementations · 1 Jun 2021 · Yabin Zhang, Haojian Zhang, Bin Deng, Shuai Li, Kui Jia, Lei Zhang

Notably, state-of-the-art SSL methods significantly outperform existing UDA methods on the challenging DomainNet UDA benchmark, and state-of-the-art UDA methods can be further enhanced with SSL techniques.

Tasks: Unsupervised Domain Adaptation
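
The core observation, treating the unlabeled target domain as the unlabeled set of a semi-supervised learner, can be caricatured with a FixMatch-style step; the confidence threshold and the weak/strong augmentation views are generic SSL defaults, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def ssl_for_uda_step(model, src_x, src_y, tgt_weak, tgt_strong, tau=0.95):
    """One training step treating target-domain images as SSL unlabeled
    data: supervised loss on source + FixMatch-style consistency on target."""
    sup_loss = F.cross_entropy(model(src_x), src_y)

    with torch.no_grad():
        probs = F.softmax(model(tgt_weak), dim=-1)   # weakly augmented view
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= tau).float()                 # keep confident samples

    unsup = F.cross_entropy(model(tgt_strong), pseudo, reduction="none")
    unsup_loss = (mask * unsup).mean()
    return sup_loss + unsup_loss
```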

Unsupervised Domain Adaptation of Black-Box Source Models

1 code implementation · 8 Jan 2021 · Haojian Zhang, Yabin Zhang, Kui Jia, Lei Zhang

Unsupervised domain adaptation (UDA) aims to learn models for a target domain of unlabeled data by transferring knowledge from a labeled source domain.

Tasks: Learning with Noisy Labels, Unsupervised Domain Adaptation
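
In the black-box setting only the source model's predictions are available, so a natural baseline (sketched with assumed names below) queries it for soft pseudo-labels and distills them into a target model; the paper's treatment of label noise is more involved than this.

```python
import torch
import torch.nn.functional as F

def blackbox_distill_step(target_model, source_predict, tgt_x, T=2.0):
    """Distill a black-box source model into a target-domain model.
    `source_predict` is assumed to return class probabilities only
    (no parameters or features), as in the black-box setting."""
    with torch.no_grad():
        teacher_probs = source_predict(tgt_x)   # possibly noisy pseudo-labels
    student_log_probs = F.log_softmax(target_model(tgt_x) / T, dim=-1)
    # KL-divergence distillation toward the black-box predictions.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
```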
