no code implementations • 24 Jan 2024 • Dezhao Luo, Shaogang Gong, Jiabo Huang, Hailin Jin, Yang Liu
We address two problems in video editing for optimising unseen-domain VMR: (1) generating high-quality simulation videos of different moments with subtle distinctions, and (2) selecting simulation videos that complement existing source training videos without introducing harmful noise or unnecessary repetition.
no code implementations • CVPR 2023 • Dezhao Luo, Jiabo Huang, Shaogang Gong, Hailin Jin, Yang Liu
The correlation between vision and text is essential for video moment retrieval (VMR); however, existing methods rely heavily on separately pre-trained feature extractors for visual and textual understanding.
no code implementations • 8 Jul 2021 • Wei Li, Dezhao Luo, Bo Fang, Yu Zhou, Weiping Wang
As a result, we can leverage spatial information (the size of objects) and temporal information (the direction and magnitude of motions) as our learning targets.
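The idea of deriving motion direction and magnitude as self-supervised targets can be illustrated with a minimal sketch. The function below is an assumption for illustration only (it is not the paper's method): it estimates a coarse per-clip motion magnitude and dominant direction from two consecutive grayscale frames using simple finite differences, in the spirit of the optical-flow brightness-constancy constraint.

```python
import numpy as np

def motion_targets(prev_frame, next_frame):
    """Illustrative sketch (not the paper's exact method): derive a coarse
    motion magnitude and dominant direction from two consecutive frames."""
    # Temporal gradient: how much each pixel changed between the frames.
    dt = next_frame.astype(np.float32) - prev_frame.astype(np.float32)
    # Spatial gradients of the first frame (central differences).
    gy, gx = np.gradient(prev_frame.astype(np.float32))
    # Per-pixel motion-magnitude proxy: |dt| normalised by spatial gradient.
    eps = 1e-6
    mag = np.abs(dt) / (np.sqrt(gx ** 2 + gy ** 2) + eps)
    # Dominant-direction proxy from the brightness-constancy relation
    # dt + u*gx + v*gy ~= 0, averaged over the frame.
    direction = np.array([np.mean(-dt * gx), np.mean(-dt * gy)])
    return float(mag.mean()), direction

# Example: a bright square shifted one pixel to the right between frames.
f0 = np.zeros((8, 8)); f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8)); f1[2:5, 3:6] = 1.0
magnitude, direction = motion_targets(f0, f1)
```

For the rightward shift above, the recovered dominant direction has a positive horizontal component and a near-zero vertical one, which is the kind of signal a network could be trained to predict.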
no code implementations • 6 Aug 2020 • Dezhao Luo, Bo Fang, Yu Zhou, Yucan Zhou, Dayan Wu, Weiping Wang
A dedicated sampling strategy is then used to model relations among video clips.
1 code implementation • 20 Jun 2020 • Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, Qixiang Ye
The generative perception model acts as a feature decoder that focuses on high-temporal-resolution, short-term representations by introducing a motion-attention mechanism.
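One plausible reading of a motion-attention mechanism can be sketched as follows. This is an assumed toy form, not the paper's module: per-frame features are weighted by the magnitude of their temporal change, so frames with more motion contribute more to the pooled clip representation.

```python
import numpy as np

def motion_attention(features):
    """Toy motion-attention sketch (assumed form, not the paper's module).

    features: (T, D) array of per-frame feature vectors.
    Returns (weights, pooled) where weights is a softmax over frames.
    """
    T, D = features.shape
    # Temporal differences as a crude motion signal; the first frame gets zero.
    diffs = np.vstack([np.zeros((1, D)), np.diff(features, axis=0)])
    scores = np.linalg.norm(diffs, axis=1)            # (T,) motion magnitude
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over time
    pooled = (weights[:, None] * features).sum(axis=0)
    return weights, pooled

# Three frames: the first two are static, the third changes sharply.
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
w, pooled = motion_attention(feats)
```

In this toy example the third frame, where the feature vector changes, receives the largest attention weight, while the two static frames share equal, smaller weights.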
1 code implementation • 2 Jan 2020 • Dezhao Luo, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, Weiping Wang
As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning.
Ranked #11 on Self-supervised Video Retrieval on HMDB51
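The clip-operations proxy task described above can be sketched minimally. The design below is an assumption for illustration (operation names and the specific transforms are hypothetical): a clip, represented as a list of frame indices, is transformed by one of several candidate operations, and the self-supervised objective is to classify which operation was applied.

```python
import random

def identity(clip):
    # Leave the temporal order unchanged.
    return list(clip)

def reverse(clip):
    # Play the clip backwards.
    return list(clip)[::-1]

def shuffle_halves(clip):
    # Swap the two temporal halves of the clip.
    mid = len(clip) // 2
    return list(clip[mid:]) + list(clip[:mid])

# Candidate clip operations; the operation index is the classification label.
OPERATIONS = [identity, reverse, shuffle_halves]

def make_example(clip, rng):
    """Return (transformed_clip, label) for one training example."""
    label = rng.randrange(len(OPERATIONS))
    return OPERATIONS[label](clip), label

rng = random.Random(0)
clip = list(range(16))  # a 16-frame clip, represented by frame indices
transformed, label = make_example(clip, rng)
```

A classifier trained on such (transformed, label) pairs must attend to temporal structure to succeed, which is the sense in which clip operations act as a proxy task for representation learning.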