1 code implementation • 7 Mar 2024 • Yunhao Du, Zhicheng Zhao, Fei Su
To this end, we present the Refer-VI-ReID setting, which aims to match target visible images from both infrared images and coarse language descriptions (e.g., "a man with a red top and black pants") to complement the missing color information.
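As a rough illustration of this matching setup, the sketch below ranks a visible-image gallery against a query built by naively fusing an infrared-image feature with a text feature. The encoders, addition-based fusion, and cosine ranking are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of Refer-VI-ReID-style matching: rank visible gallery images
# by similarity to a fused infrared-image + text query. Fusion by addition is
# a hypothetical stand-in, not the paper's method.
import torch
import torch.nn.functional as F

def rank_gallery(ir_feat, text_feat, gallery_feats):
    """ir_feat, text_feat: (D,) query features; gallery_feats: (N, D)."""
    query = F.normalize(ir_feat + text_feat, dim=0)    # naive fusion of modalities
    gallery = F.normalize(gallery_feats, dim=1)
    scores = gallery @ query                           # cosine similarity per image
    return torch.argsort(scores, descending=True)      # best match first

# toy usage with random features
D, N = 256, 10
order = rank_gallery(torch.randn(D), torch.randn(D), torch.randn(N, D))
print(order[:3])  # indices of the top-3 candidate visible images
```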
1 code implementation • 25 Dec 2023 • Yunhao Du, Cheng Lei, Zhicheng Zhao, Fei Su
Referring multi-object tracking (RMOT) aims to track multiple objects based on input textual descriptions.
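A minimal sketch of the referring step under one simple assumption: a track is kept when its visual feature is similar enough to the text query. The features, threshold, and cosine score are hypothetical, not the RMOT method itself.

```python
# Hedged sketch: keep only the tracks whose features match the text query.
# The 0.5 threshold and feature shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def refer_filter(track_feats, text_feat, thresh=0.5):
    """track_feats: (N, D) per-track features; text_feat: (D,) text query."""
    sims = F.normalize(track_feats, dim=1) @ F.normalize(text_feat, dim=0)
    return torch.nonzero(sims > thresh).flatten()  # indices of referred tracks

print(refer_filter(torch.randn(5, 128), torch.randn(128)))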
1 code implementation • 27 Nov 2023 • Yunhao Du, Cheng Lei, Zhicheng Zhao, Yuan Dong, Fei Su
Previous methods focus on learning from cross-modality person images in different cameras.
1 code implementation • 13 Mar 2023 • Ziqi He, Mengjia Xue, Yunhao Du, Zhicheng Zhao, Fei Su
To address this problem, we propose a dynamic clustering and cluster contrastive learning (DCCC) method.
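Cluster contrastive learning is commonly implemented as an InfoNCE-style loss against cluster centroids, and the sketch below shows that generic form: each sample is pulled toward its own centroid and pushed away from the others. Shapes and temperature are assumptions, not DCCC's exact formulation.

```python
# Generic cluster-level contrastive loss in the spirit of DCCC (a sketch,
# not the paper's exact loss): InfoNCE with cluster centroids as classes.
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(feats, labels, centroids, tau=0.05):
    """feats: (B, D) samples; labels: (B,) cluster ids; centroids: (K, D)."""
    feats = F.normalize(feats, dim=1)
    centroids = F.normalize(centroids, dim=1)
    logits = feats @ centroids.t() / tau     # (B, K) similarity to each cluster
    return F.cross_entropy(logits, labels)   # positive = own cluster centroid

loss = cluster_contrastive_loss(torch.randn(8, 64),
                                torch.randint(0, 4, (8,)),
                                torch.randn(4, 64))
print(loss.item())
```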
1 code implementation • 11 Oct 2022 • Yunhao Du, Zihang Liu, Fei Su
Multiple Object Tracking (MOT) has rapidly progressed in recent years.
1 code implementation • 18 Apr 2022 • Yunhao Du, Binyu Zhang, Xiangning Ruan, Fei Su, Zhicheng Zhao, Hong Chen
For the textual representation, one global embedding, three local embeddings and a color-type prompt embedding are extracted to represent various granularities of semantic features.
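A hedged sketch of how these five embeddings might be assembled; the projection layers, input features, and stacking are illustrative assumptions, not the paper's architecture.

```python
# Illustrative assembly of the multi-granularity text representation described
# above: one global embedding, three local embeddings, and a color-type prompt
# embedding. Encoder outputs are assumed to be precomputed (B, D) features.
import torch
import torch.nn as nn

class TextRepresentation(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.global_proj = nn.Linear(dim, dim)        # whole-sentence feature
        self.local_projs = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(3)])  # three local phrases
        self.prompt_proj = nn.Linear(dim, dim)        # color-type prompt

    def forward(self, sent_feat, phrase_feats, prompt_feat):
        """sent_feat: (B, D); phrase_feats: three (B, D); prompt_feat: (B, D)."""
        embs = [self.global_proj(sent_feat)]
        embs += [p(f) for p, f in zip(self.local_projs, phrase_feats)]
        embs.append(self.prompt_proj(prompt_feat))
        return torch.stack(embs, dim=1)               # (B, 5, D)

model = TextRepresentation()
out = model(torch.randn(2, 256),
            [torch.randn(2, 256) for _ in range(3)],
            torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 5, 256])
```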
no code implementations • 8 Mar 2022 • Yunhao Du, Zhihang Tong, Junfeng Wan, Binyu Zhang, Yanyun Zhao
In this work, we propose a comprehensive and effective activity detection system in untrimmed surveillance videos for person-centered and vehicle-centered activities.
14 code implementations • 28 Feb 2022 • Yunhao Du, Zhicheng Zhao, Yang Song, Yanyun Zhao, Fei Su, Tao Gong, Hongying Meng
As a result, the construction of a good baseline for a fair comparison is essential.
Ranked #7 on Multi-Object Tracking on MOT17 (using extra training data)
1 code implementation • 24 Feb 2022 • Yunhao Du, Junfeng Wan, Yanyun Zhao, Binyu Zhang, Zhihang Tong, Junhao Dong
In recent years, multiple object tracking algorithms have benefited from great progress in deep models and video quality.