Search Results for author: Kaibin Tian

Found 6 papers, 2 papers with code

Towards Efficient and Effective Text-to-Video Retrieval with Coarse-to-Fine Visual Representation Learning

no code implementations • 1 Jan 2024 • Kaibin Tian, Yanhua Cheng, Yi Liu, Xinglin Hou, Quan Chen, Han Li

To address this issue, we adopt multi-granularity visual feature learning, ensuring that during training the model comprehensively captures visual content features spanning from abstract to detailed levels.

Representation Learning, Retrieval, +3
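The coarse-to-fine idea can be sketched as pooling per-frame embeddings at several granularities. The function below is an illustrative NumPy sketch, not the paper's implementation; the shapes, the mean-pooling choice, and the clip count are assumptions:

```python
import numpy as np

def multi_granularity_features(frame_feats, num_clips=4):
    """Build coarse-to-fine visual representations from per-frame features.

    frame_feats: (num_frames, dim) array of frame embeddings.
    Returns video-level (coarse), clip-level (medium), and frame-level
    (fine) features.
    """
    video_feat = frame_feats.mean(axis=0)                    # (dim,): coarsest view
    clips = np.array_split(frame_feats, num_clips, axis=0)   # temporal segments
    clip_feats = np.stack([c.mean(axis=0) for c in clips])   # (num_clips, dim)
    return video_feat, clip_feats, frame_feats               # coarse -> fine
```

With equal-sized clips, the video-level feature equals the mean of the clip-level features, so the three granularities stay consistent while exposing progressively more temporal detail.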

TeachCLIP: Multi-Grained Teaching for Efficient Text-to-Video Retrieval

no code implementations • 2 Aug 2023 • Kaibin Tian, Ruixiang Zhao, Hu Hu, Runquan Xie, Fengzong Lian, Zhanhui Kang, Xirong Li

For efficient T2VR, we propose TeachCLIP with multi-grained teaching, letting a lightweight CLIP4Clip-based student network learn from more advanced yet computationally heavy models such as X-CLIP, TS2-Net, and X-Pool.

Retrieval, Text Similarity, +2

MVP: Robust Multi-View Practice for Driving Action Localization

no code implementations • 5 Jul 2022 • Jingjie Shang, Kunchang Li, Kaibin Tian, Haisheng Su, Yangguang Li

Due to its small scale and unclear action boundaries, the dataset poses a unique challenge: precisely localizing all the different actions and classifying their categories.

Action Localization

Co-Teaching for Unsupervised Domain Adaptation and Expansion

1 code implementation • 4 Apr 2022 • Kaibin Tian, Qijie Wei, Xirong Li

Such samples are typically in the minority in their host domain, so they tend to be overlooked by the domain-specific model, yet can be better handled by a model from the other domain.

Image Classification, Knowledge Distillation, +3
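Co-teaching in this cross-domain sense can be sketched as two domain-specific models exchanging confident pseudo labels, so a sample overlooked by one model is labeled by its peer. The names and threshold below are hypothetical; the actual method operates on network predictions during training:

```python
def co_teach_pseudo_labels(probs_a, probs_b, threshold=0.9):
    """Exchange confident predictions between two domain-specific models.

    probs_a / probs_b: per-sample class-probability lists from model A
    (trained on domain A) and model B (trained on domain B). A sample
    receives a pseudo label for training model A only when the *peer*
    model B is confident on it, and vice versa -- the cross-domain
    exchange described in the abstract.
    """
    labels_for_a, labels_for_b = {}, {}
    for i, (pa, pb) in enumerate(zip(probs_a, probs_b)):
        if max(pb) >= threshold:                  # B teaches A
            labels_for_a[i] = pb.index(max(pb))
        if max(pa) >= threshold:                  # A teaches B
            labels_for_b[i] = pa.index(max(pa))
    return labels_for_a, labels_for_b
```

Each model thus trains on exactly the samples its peer handles well, which is where its own domain-specific view tends to fail.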

Unsupervised Domain Expansion for Visual Categorization

2 code implementations • 1 Apr 2021 • Jie Wang, Kaibin Tian, Dayong Ding, Gang Yang, Xirong Li

In this paper, we extend UDA by proposing a new task called unsupervised domain expansion (UDE), which aims to adapt a deep model to the target domain using its unlabeled data while maintaining the model's performance on the source domain.

Knowledge Distillation, Unsupervised Domain Adaptation, +1
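A UDE-style objective can be sketched as adaptation on unlabeled target data plus a distillation anchor that preserves source-domain behavior. The specific terms below (entropy minimization for adaptation, cross-entropy to a frozen snapshot for preservation) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ude_loss(adapted_logits_tgt, adapted_logits_src, frozen_logits_src, lam=1.0):
    """Sketch of a UDE-style objective.

    Entropy minimization adapts the model on unlabeled target data, while
    a distillation term keeps its source-domain predictions close to those
    of a frozen pre-adaptation snapshot, so source performance is retained.
    """
    # Adaptation term: sharpen predictions on unlabeled target samples.
    p_tgt = softmax(adapted_logits_tgt)
    entropy = -np.mean(np.sum(p_tgt * np.log(p_tgt + 1e-12), axis=-1))

    # Preservation term: cross-entropy to the frozen teacher on source samples.
    p_frozen = softmax(frozen_logits_src)
    p_src = softmax(adapted_logits_src)
    kd = -np.mean(np.sum(p_frozen * np.log(p_src + 1e-12), axis=-1))

    return entropy + lam * kd
```

The preservation term is minimized when the adapted model's source predictions match the frozen snapshot's, so drifting away from source-domain behavior is penalized.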
