1 code implementation • 3 May 2024 • Deng Li, Bohao Xing, Xin Liu
In addition, instead of using handcrafted prompts for visual-text contrastive learning, we propose a novel Adaptive Prompting module to generate context-aware prompts.
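The snippet above contrasts handcrafted prompts with prompts conditioned on the image itself. A minimal sketch of that idea, assuming a learned linear projection of a visual feature that offsets a set of learnable base prompt embeddings (the module's actual architecture is not given here, so all names and shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_prompt, n_prompts = 8, 4, 3

# Hypothetical learned parameters: a projection and base prompt embeddings.
W = rng.normal(size=(d_vis, d_prompt)) * 0.1
base_prompts = rng.normal(size=(n_prompts, d_prompt))

def adaptive_prompts(visual_feat):
    """Offset the learnable base prompts by a projection of the visual
    feature, producing context-aware prompts instead of fixed text
    templates (a sketch; not the paper's actual module)."""
    ctx = visual_feat @ W            # (d_prompt,) image-conditioned context
    return base_prompts + ctx        # broadcast to (n_prompts, d_prompt)

prompts = adaptive_prompts(rng.normal(size=d_vis))
```

Each image thus yields its own prompt set, which is the key difference from a single handcrafted template shared across all inputs.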
no code implementations • 1 May 2024 • Deng Li, Xin Liu, Bohao Xing, Baiqiang Xia, Yuan Zong, Bihan Wen, Heikki Kälviäinen
In contrast, long sequential videos can reveal authentic emotions; 2) Previous studies commonly utilize various signals such as facial expressions, speech, and even sensitive biological signals (e.g., electrocardiograms).
no code implementations • 28 Feb 2024 • Deng Li, Aming Wu, YaoWei Wang, Yahong Han
In this paper, we propose a dynamic object-centric perception network based on prompt learning, aiming to adapt to the variations in image complexity.
1 code implementation • 26 Dec 2022 • Deng Li, Aming Wu, Yahong Han, Qi Tian
Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios.
no code implementations • 11 Mar 2022 • YaoWei Wang, Zhouxin Yang, Rui Liu, Deng Li, Yuandu Lai, Leyuan Fang, Yahong Han
Considering the diversity and complexity of scenes in intelligent city governance, we build a large-scale object detection benchmark for the smart city.
1 code implementation • 24 May 2021 • Deng Li, Yue Wu, Yicong Zhou
In this paper, we propose a novel Line Counting formulation for handwritten text line segmentation (HTLS) that counts the number of text lines from the top at every pixel location.
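The counting formulation can be made concrete: each pixel's label is the number of text lines already seen when scanning its column from the top. A sketch of how such a label map could be built from per-line binary masks (the input format is an assumption; the paper's exact labeling scheme may differ):

```python
import numpy as np

def line_count_labels(line_masks):
    """line_masks: (L, H, W) binary arrays, one per text line, ordered
    top to bottom. Returns an (H, W) map where each pixel holds the
    count of text lines encountered from the top of its column down to
    that pixel (illustrative sketch of the counting formulation)."""
    # Once a line has appeared in a column, it stays "seen" for all rows below.
    seen = np.maximum.accumulate(line_masks, axis=1)
    return seen.sum(axis=0)

# Toy example: two horizontal lines in a 4x3 image.
masks = np.zeros((2, 4, 3))
masks[0, 0, :] = 1   # first line on row 0
masks[1, 2, :] = 1   # second line on row 2
labels = line_count_labels(masks)
```

Segmenting lines then reduces to recovering the regions where the count increments, rather than classifying pixels independently.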
1 code implementation • 12 May 2021 • Deng Li, Yue Wu, Yicong Zhou
The AST module further consolidates the outputs from MWS and PWA and predicts the final adaptive threshold for each pixel location.
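Per-pixel threshold prediction from two intermediate maps can be sketched as a simple fusion followed by a squashing function. This is a minimal stand-in, assuming the AST module's learned fusion can be caricatured as a weighted sum (the weights and the sigmoid here are illustrative, not the paper's design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_threshold(mws_out, pwa_out, w=(1.0, 1.0), b=0.0):
    """Fuse the MWS and PWA output maps into a final per-pixel
    threshold in (0, 1). A hedged sketch: the real AST module is
    learned; this weighted sum merely stands in for its fusion step."""
    return sigmoid(w[0] * mws_out + w[1] * pwa_out + b)

thresh = adaptive_threshold(np.zeros((2, 2)), np.zeros((2, 2)))
```

The point of the design is that the threshold varies spatially with both inputs, rather than being a single global constant.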