Search Results for author: Yueqian Wang

Found 4 papers, 4 papers with code

HawkEye: Training Video-Text LLMs for Grounding Text in Videos

1 code implementation • 15 Mar 2024 • Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, Dongyan Zhao

Video-text Large Language Models (video-text LLMs) have shown remarkable performance in answering questions and holding conversations on simple videos.

Video Grounding • Video Question Answering

LSTP: Language-guided Spatial-Temporal Prompt Learning for Long-form Video-Text Understanding

1 code implementation • 25 Feb 2024 • Yuxuan Wang, Yueqian Wang, Pengfei Wu, Jianxin Liang, Dongyan Zhao, Zilong Zheng

Despite progress in video-language modeling, the computational challenge of interpreting long-form videos in response to task-specific linguistic queries persists, largely due to the complexity of high-dimensional video data and the misalignment between language and visual cues over space and time.

Computational Efficiency • Language Modelling • +3

STAIR: Spatial-Temporal Reasoning with Auditable Intermediate Results for Video Question Answering

1 code implementation • 8 Jan 2024 • Yueqian Wang, Yuxuan Wang, Kai Chen, Dongyan Zhao

However, most models can only handle simple videos in terms of temporal reasoning, and their performance tends to drop when answering temporal-reasoning questions on long and informative videos.

Question Answering • Video Question Answering

VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions

1 code implementation • 30 May 2023 • Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang, Dongyan Zhao

Video-grounded dialogue understanding is a challenging problem that requires a machine to perceive, parse and reason over situated semantics extracted from weakly aligned video and dialogues.

Dialogue Generation • Dialogue Understanding • +2