no code implementations • 1 Jan 2024 • Kaibin Tian, Yanhua Cheng, Yi Liu, Xinglin Hou, Quan Chen, Han Li
To address this issue, we adopt multi-granularity visual feature learning, which ensures that during training the model comprehensively captures visual content from abstract to detailed levels.
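The abstract-to-detailed idea above can be illustrated with a minimal sketch: pool per-frame features at several granularities (video-level, segment-level, frame-level). The function names, the choice of three levels, and mean pooling are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of multi-granularity pooling over per-frame features.
# All names and the pooling scheme are assumptions for illustration.

def mean_pool(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def multi_granularity_features(frame_feats, segment_size=2):
    """Return features at three granularities: video, segment, frame."""
    video_level = mean_pool(frame_feats)        # most abstract: whole clip
    segment_level = [                           # intermediate: local segments
        mean_pool(frame_feats[i:i + segment_size])
        for i in range(0, len(frame_feats), segment_size)
    ]
    frame_level = frame_feats                   # most detailed: raw frames
    return video_level, segment_level, frame_level
```

During training, each granularity could be matched against the text query separately, so the model is supervised at every level of abstraction.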
no code implementations • 2 Aug 2023 • Kaibin Tian, Ruixiang Zhao, Hu Hu, Runquan Xie, Fengzong Lian, Zhanhui Kang, Xirong Li
For efficient T2VR, we propose TeachCLIP with multi-grained teaching to let a CLIP4Clip-based student network learn from more advanced yet computationally heavy models such as X-CLIP, TS2-Net and X-Pool.
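The teacher-student setup described above follows the standard knowledge-distillation pattern: the lightweight student is trained to match the heavy teacher's soft similarity distribution. The sketch below shows a generic soft-target cross-entropy loss; the temperature value and function names are assumptions, not TeachCLIP's exact formulation.

```python
import math

# Hedged sketch of soft-target distillation for retrieval scores.
# Temperature and loss form are generic KD choices, assumed for illustration.

def softmax(scores, temperature=1.0):
    """Convert raw similarity scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_scores, teacher_scores, temperature=2.0):
    """Cross-entropy between teacher (soft target) and student distributions."""
    p = softmax(teacher_scores, temperature)   # teacher's soft targets
    q = softmax(student_scores, temperature)   # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the student reproduces the teacher's ranking distribution, which is why a cheap CLIP4Clip-style student can approximate a heavier model at inference time.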
no code implementations • 28 Nov 2022 • Xirong Li, Aozhu Chen, Ziyue Wang, Fan Hu, Kaibin Tian, Xinru Chen, Chengbo Dong
The 2022 edition of the TRECVID benchmark has again been a fruitful participation for the RUCMM team.
no code implementations • 5 Jul 2022 • Jingjie Shang, Kunchang Li, Kaibin Tian, Haisheng Su, Yangguang Li
Due to the small data scale and unclear action boundaries, the dataset presents a unique challenge: precisely localizing all the different actions and classifying their categories.
1 code implementation • 4 Apr 2022 • Kaibin Tian, Qijie Wei, Xirong Li
Such samples are typically a minority in their host domain, so they tend to be overlooked by the domain-specific model, yet they can be better handled by a model from the other domain.
2 code implementations • 1 Apr 2021 • Jie Wang, Kaibin Tian, Dayong Ding, Gang Yang, Xirong Li
In this paper, we extend UDA by proposing a new task called unsupervised domain expansion (UDE), which aims to adapt a deep model to the target domain using its unlabeled data while maintaining the model's performance on the source domain.
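The UDE objective above balances two pressures: adapting to the unlabeled target domain while not forgetting the source domain. A minimal sketch of that trade-off combines an adaptation loss with a penalty on how far the adapted model's predictions drift from the original source model's. The weighting scheme and the mean-squared drift term are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of a combined UDE-style objective.
# The lambda weight and MSE drift term are assumptions for illustration.

def prediction_drift(old_probs, new_probs):
    """Mean squared difference between the source model's and the adapted
    model's outputs on the same input; a simple stand-in for a term that
    preserves source-domain behavior."""
    n = len(old_probs)
    return sum((a - b) ** 2 for a, b in zip(old_probs, new_probs)) / n

def ude_objective(adapt_loss, drift, lam=1.0):
    """Total objective: adapt to the target domain, penalize drifting
    away from the source model."""
    return adapt_loss + lam * drift
```

Setting `lam` to zero recovers plain UDA (adaptation only), while a large `lam` freezes the model near its source-domain behavior, which is the spectrum the UDE task asks a method to navigate.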
Ranked #1 on Unsupervised Domain Expansion on UDE-DomainNet