Search Results for author: Zhijian Hou

Found 6 papers, 4 papers with code

GroundNLQ @ Ego4D Natural Language Queries Challenge 2023

1 code implementation • 27 Jun 2023 • Zhijian Hou, Lei Ji, Difei Gao, Wanjun Zhong, Kun Yan, Chao Li, Wing-Kwong Chan, Chong-Wah Ngo, Nan Duan, Mike Zheng Shou

Motivated by this, we leverage a two-stage pre-training strategy to train egocentric feature extractors and the grounding model on video narrations, and further fine-tune the model on annotated data.

Natural Language Queries
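
As a rough illustration of the two-stage recipe the abstract above describes (pre-train the grounding model on narration-derived supervision, then fine-tune on annotated queries), here is a minimal PyTorch training skeleton. The `ToyGroundingModel`, the dummy feature tensors, the loss, and the learning rates are hypothetical placeholders, not the GroundNLQ architecture or hyperparameters.

```python
# Minimal two-stage train-then-finetune sketch, NOT the GroundNLQ code.
# The model, data, and loss below are hypothetical stand-ins: in the paper,
# stage 1 uses video narrations and stage 2 uses annotated NLQ data.
import torch
import torch.nn as nn

class ToyGroundingModel(nn.Module):
    """Hypothetical grounder: fuses clip and query features, predicts start/end logits."""
    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)
        self.head = nn.Linear(dim, 2)  # per-clip start/end logits

    def forward(self, clip_feats, query_feat):
        # clip_feats: (T, dim), query_feat: (dim,)
        q = query_feat.unsqueeze(0).expand_as(clip_feats)
        fused = torch.relu(self.fuse(torch.cat([clip_feats, q], dim=-1)))
        return self.head(fused)  # (T, 2)

def run_stage(model, batches, lr):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for clip_feats, query_feat, start_idx, end_idx in batches:
        logits = model(clip_feats, query_feat)
        loss = (loss_fn(logits[:, 0].unsqueeze(0), start_idx)
                + loss_fn(logits[:, 1].unsqueeze(0), end_idx))
        opt.zero_grad(); loss.backward(); opt.step()

model = ToyGroundingModel()
# Stage 1: pre-train on (pseudo-)queries derived from narrations (dummy tensors here).
narration_batches = [(torch.randn(32, 256), torch.randn(256),
                      torch.tensor([3]), torch.tensor([10])) for _ in range(4)]
run_stage(model, narration_batches, lr=1e-4)
# Stage 2: fine-tune on manually annotated examples, at a lower rate by convention.
annotated_batches = [(torch.randn(32, 256), torch.randn(256),
                      torch.tensor([5]), torch.tensor([12])) for _ in range(4)]
run_stage(model, annotated_batches, lr=1e-5)
```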

CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding

1 code implementation • 22 Sep 2022 • Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan

This paper tackles an emerging and challenging problem of long video temporal grounding (VTG) that localizes video moments related to a natural language (NL) query.

Contrastive Learning • Video Grounding
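
To make the coarse-to-fine idea in the CONE title concrete, below is an illustrative sketch, not the paper's implementation: slide coarse windows over a long sequence of clip features, keep the windows that align best with the query, then search for a fine-grained span only inside those windows. The cosine-similarity scoring, window size, and stride are assumptions chosen for the toy example.

```python
# Illustrative coarse-to-fine grounding sketch, NOT the CONE implementation.
# Assumed setup: per-clip features and a query embedding are already extracted;
# alignment is plain cosine similarity here, whereas the paper learns it.
import torch
import torch.nn.functional as F

def coarse_to_fine_grounding(clip_feats, query_feat, window=64, stride=32, top_k=2):
    """Return (start, end) clip indices of the best-scoring fine span."""
    T = clip_feats.size(0)
    sims = F.cosine_similarity(clip_feats, query_feat.unsqueeze(0), dim=-1)  # (T,)

    # Coarse stage: score sliding windows by their mean clip-query similarity.
    windows = [(s, min(s + window, T)) for s in range(0, T, stride)]
    window_scores = torch.stack([sims[s:e].mean() for s, e in windows])
    keep = window_scores.topk(min(top_k, len(windows))).indices

    # Fine stage: within the kept windows only, pick the densest contiguous span.
    best, best_score = None, float("-inf")
    for idx in keep.tolist():
        s, e = windows[idx]
        for i in range(s, e):
            for j in range(i + 1, e + 1):
                score = sims[i:j].mean().item()
                if score > best_score:
                    best, best_score = (i, j), score
    return best

# Toy usage with random features standing in for a long video.
clips = torch.randn(300, 128)
query = torch.randn(128)
print(coarse_to_fine_grounding(clips, query))
```

The point of the coarse stage is efficiency: the expensive fine-grained span search runs only over a handful of candidate windows rather than the full-length video.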

(Un)likelihood Training for Interpretable Embedding

1 code implementation • 1 Jul 2022 • Jiaxin Wu, Chong-Wah Ngo, Wing-Kwong Chan, Zhijian Hou

Cross-modal representation learning has become a new normal for bridging the semantic gap between text and visual data.

Ad-hoc video search • Representation Learning +2

CONQUER: Contextual Query-aware Ranking for Video Corpus Moment Retrieval

1 code implementation • 21 Sep 2021 • Zhijian Hou, Chong-Wah Ngo, Wing Kwong Chan

This task is essential because advanced video retrieval applications should enable users to retrieve a precise moment from a large video corpus.

Corpus Video Moment Retrieval • Moment Retrieval +6

vireoJD-MM at Activity Detection in Extended Videos

no code implementations • 20 Jun 2019 • Fuchen Long, Qi Cai, Zhaofan Qiu, Zhijian Hou, Yingwei Pan, Ting Yao, Chong-Wah Ngo

This notebook paper presents an overview and comparative analysis of our system designed for activity detection in extended videos (ActEV-PC) in ActivityNet Challenge 2019.

Action Detection • Action Localization +1
