Search Results for author: Qingbin Liu

Found 11 papers, 5 papers with code

TRACE: Temporal Grounding Video LLM via Causal Event Modeling

1 code implementation • 8 Oct 2024 • Yongxin Guo, Jingyu Liu, Mingda Li, Xiaoying Tang, Qingbin Liu, Xi Chen

To handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend toward employing video LLMs for VTG tasks.

Text Generation • Video Understanding

Enhancing Long Video Understanding via Hierarchical Event-Based Memory

no code implementations • 10 Sep 2024 • Dingxin Cheng, Mingda Li, Jingyu Liu, Yongxin Guo, Bin Jiang, Qingbin Liu, Xi Chen, Bo Zhao

While this method excels at short video understanding, its coarse compression may blend information from multiple events in long videos, causing information redundancy.

Video Understanding

To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models

1 code implementation • 2 Jul 2024 • Bozhong Tian, Xiaozhuan Liang, Siyuan Cheng, Qingbin Liu, Mengru Wang, Dianbo Sui, Xi Chen, Huajun Chen, Ningyu Zhang

Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material.

General Knowledge

VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding

1 code implementation • 22 May 2024 • Yongxin Guo, Jingyu Liu, Mingda Li, Dingxin Cheng, Xiaoying Tang, Dianbo Sui, Qingbin Liu, Xi Chen, Kevin Zhao

Video Temporal Grounding (VTG) strives to accurately pinpoint event timestamps in a specific video using linguistic queries, significantly impacting downstream tasks like video browsing and editing.

Dense Video Captioning • Highlight Detection • +2

Knowledge-augmented Few-shot Visual Relation Detection

no code implementations • 9 Mar 2023 • Tianyu Yu, Yangning Li, Jiaoyan Chen, Yinghui Li, Hai-Tao Zheng, Xi Chen, Qingbin Liu, Wenqiang Liu, Dongxiao Huang, Bei Wu, Yexin Wang

Inspired by this, we devise a knowledge-augmented, few-shot VRD framework leveraging both textual knowledge and visual relation knowledge to improve the generalization ability of few-shot VRD.

Diversity • Few-Shot Learning • +3

Lifelong Intent Detection via Multi-Strategy Rebalancing

no code implementations • 10 Aug 2021 • Qingbin Liu, Xiaoyan Yu, Shizhu He, Kang Liu, Jun Zhao

In this paper, we propose Lifelong Intent Detection (LID), which continually trains an ID model on new data to learn newly emerging intents while avoiding catastrophic forgetting of old data.

Intent Detection • Knowledge Distillation • +1

Copy-Enhanced Heterogeneous Information Learning for Dialogue State Tracking

no code implementations • 21 Aug 2019 • Qingbin Liu, Shizhu He, Kang Liu, Shengping Liu, Jun Zhao

How to integrate the semantic information of a pre-defined ontology and dialogue text (heterogeneous texts) to generate unknown values and improve performance remains a major challenge.

Decoder • Dialogue State Tracking • +1
