1 code implementation • 10 Apr 2025 • Junyi Ma, Wentao Bao, Jingyi Xu, Guanzhong Sun, Xieyuanli Chen, Hesheng Wang
In addition, these models overlook the synergy between hand movements and headset camera egomotion, either predicting hand trajectories in isolation or encoding egomotion only from past frames.
1 code implementation • 5 Apr 2025 • YiFan Li, Wentao Bao, Botao Ye, Zhen Tan, Tianlong Chen, Huan Liu, Yu Kong
To further enhance the performance on fine-grained visual understanding tasks, we introduce WiCo+, which decomposes the visual tokens in later layers of the LLM.
1 code implementation • 6 Jan 2025 • YiFan Li, Zhixin Lai, Wentao Bao, Zhen Tan, Anh Dao, Kewei Sui, Jiayi Shen, Dong Liu, Huan Liu, Yu Kong
Vision-language models (VLMs) have emerged as a powerful tool for learning a unified embedding space for vision and language.
1 code implementation • 17 Nov 2024 • Wentao Bao, Kai Li, Yuxiao Chen, Deep Patel, Martin Renqiang Min, Yu Kong
Existing approaches focus on the closed-set setting where an action detector is trained and tested on videos from a fixed set of action categories.
no code implementations • 22 Sep 2024 • Yuxiao Chen, Kai Li, Wentao Bao, Deep Patel, Yu Kong, Martin Renqiang Min, Dimitris N. Metaxas
Learning to localize temporal boundaries of procedure steps in instructional videos is challenging due to the limited availability of annotated large-scale training videos.
no code implementations • 4 Sep 2024 • Junyi Ma, Xieyuanli Chen, Wentao Bao, Jingyi Xu, Hesheng Wang
Understanding human intentions and actions through egocentric videos is an important step toward embodied artificial intelligence.
1 code implementation • 7 Apr 2024 • YiFan Li, Anh Dao, Wentao Bao, Zhen Tan, Tianlong Chen, Huan Liu, Yu Kong
Our initiative on the dataset and benchmarks reveals the nature and rationale of facial affective behaviors, i.e., fine-grained facial movements, interpretability, and reasoning.
no code implementations • 19 Sep 2023 • Wentao Bao, Qi Yu, Yu Kong
A recent trend in OSR shows the benefit of generative models for discriminative unknown detection.
no code implementations • 18 Sep 2023 • Xinmiao Lin, Wentao Bao, Qi Yu, Yu Kong
Neural pathways as model explanations consist of a sparse set of neurons that provide the same level of prediction performance as the whole model.
1 code implementation • ICCV 2023 • Wentao Bao, Lele Chen, Libing Zeng, Zhong Li, Yi Xu, Junsong Yuan, Yu Kong
In this paper, we set up an egocentric 3D hand trajectory forecasting task that aims to predict hand trajectories in a 3D space from early observed RGB videos in a first-person view.
1 code implementation • 23 May 2023 • Wentao Bao, Lichang Chen, Heng Huang, Yu Kong
The compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, e.g., sliced tomatoes, where the model is trained only on seen compositions, e.g., sliced potatoes and red tomatoes.
no code implementations • CVPR 2023 • Libing Zeng, Lele Chen, Wentao Bao, Zhong Li, Yi Xu, Junsong Yuan, Nima Khademi Kalantari
Accurate facial landmark detection on wild images plays an essential role in human-computer interaction, entertainment, and medical applications.
no code implementations • 23 Aug 2022 • Yuansheng Zhu, Wentao Bao, Qi Yu
We develop a novel weakly supervised method for the OpenVAD problem by integrating evidential deep learning (EDL) and normalizing flows (NFs) into a multiple instance learning (MIL) framework.
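As a rough illustration of how evidential outputs can be combined with multiple instance learning, the sketch below scores video segments with a Dirichlet-based evidential head and pools the top-k segment scores into a video-level prediction. This is a minimal, hypothetical sketch: the function names, the ReLU evidence transform, and the top-k pooling are assumptions for illustration, not the paper's actual implementation (which also integrates normalizing flows).

```python
# Hypothetical sketch of evidential scoring inside a MIL setup (illustrative
# only; names and pooling choices are assumptions, not the paper's code).

def evidential_mil_score(segment_logits, top_k=3):
    """segment_logits: per-segment raw class scores, a (T, K) list of lists.
    Returns (video-level class scores, per-segment Dirichlet uncertainty)."""
    num_classes = len(segment_logits[0])
    probs, uncertainties = [], []
    for logits in segment_logits:
        evidence = [max(0.0, z) for z in logits]      # non-negative evidence (EDL)
        alpha = [e + 1.0 for e in evidence]           # Dirichlet parameters
        strength = sum(alpha)                         # Dirichlet strength S
        probs.append([a / strength for a in alpha])   # expected class probabilities
        uncertainties.append(num_classes / strength)  # vacuity u = K / S
    # MIL pooling: mean of the top-k segment probabilities per class
    video_score = []
    for c in range(num_classes):
        top = sorted((p[c] for p in probs), reverse=True)[:top_k]
        video_score.append(sum(top) / len(top))
    return video_score, uncertainties

score, unc = evidential_mil_score([[2.0, 0.0], [0.0, 2.0]], top_k=1)
print(score, unc)  # [0.75, 0.75] [0.5, 0.5]
```

The vacuity term lets the model flag segments with little evidence for any known class, which is what makes an open-set (OpenVAD) decision possible on top of the MIL scores.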
1 code implementation • CVPR 2022 • Wentao Bao, Qi Yu, Yu Kong
OpenTAL is general enough to extend existing TAL models to open-set scenarios, and experimental results on the THUMOS14 and ActivityNet-1.3 benchmarks show the effectiveness of our method.
no code implementations • 1 Nov 2021 • Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong
In many applications, it is essential to understand why a machine learning model makes the decisions it does, but this is inhibited by the black-box nature of state-of-the-art neural networks.
2 code implementations • ICCV 2021 • Wentao Bao, Qi Yu, Yu Kong
Different from image data, video actions are more challenging to recognize in an open-set setting due to their uncertain temporal dynamics and the static bias of human actions.
1 code implementation • ICCV 2021 • Wentao Bao, Qi Yu, Yu Kong
Traffic accident anticipation aims to accurately and promptly predict the occurrence of a future accident from dashcam videos, which is vital for a safety-guaranteed self-driving system.
1 code implementation • ECCV 2020 • Junwen Chen, Wentao Bao, Yu Kong
Our model explicitly anticipates both activity features and positions by two graph auto-encoders, aiming to learn a discriminative group representation for group activity prediction.
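To illustrate the graph auto-encoder idea, the sketch below mixes per-person features over a row-normalized interaction graph, encodes them, and decodes anticipated features of the same dimensionality. Everything here is a hypothetical, untrained toy (random weights, a uniform dummy adjacency), not the paper's architecture.

```python
import random

# Hypothetical sketch of anticipating per-person features with a graph
# auto-encoder (illustrative only; names and layers are assumptions).

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def graph_autoencode(x, adj, w_enc, w_dec):
    # encode: project features, aggregate over graph neighbors, apply ReLU
    h = matmul(adj, matmul(x, w_enc))
    h = [[max(0.0, v) for v in row] for row in h]
    # decode: aggregate again and project back to the input feature dimension
    return matmul(matmul(adj, h), w_dec)

random.seed(0)
n, dim, hidden = 4, 8, 3                  # 4 people, 8-d features
x = [[random.random() for _ in range(dim)] for _ in range(n)]
adj = [[1.0 / n] * n for _ in range(n)]   # dummy uniform adjacency
w_enc = [[random.random() for _ in range(hidden)] for _ in range(dim)]
w_dec = [[random.random() for _ in range(dim)] for _ in range(hidden)]
out = graph_autoencode(x, adj, w_enc, w_dec)
print(len(out), len(out[0]))  # 4 8
```

In the paper's setting, one such auto-encoder would target future activity features and another future positions, so the group representation captures both appearance and spatial dynamics.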
2 code implementations • 1 Aug 2020 • Wentao Bao, Qi Yu, Yu Kong
The derived uncertainty-based ranking loss is found to significantly boost model performance by improving the quality of relational features.
Ranked #2 on Accident Anticipation on CCD
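As a rough sketch of what an uncertainty-based ranking loss can look like, the example below penalizes frame pairs whose accident scores fail to increase toward the accident, down-weighting pairs whose later frame is highly uncertain. The function name and the exact weighting are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of an uncertainty-weighted ranking loss for accident
# anticipation (illustrative only; the weighting scheme is an assumption).
# Idea: per-frame accident scores should rise monotonically toward the
# accident, and confident (low-uncertainty) frames contribute more.

def uncertainty_ranking_loss(scores, uncertainty, margin=0.0):
    total = 0.0
    for t in range(len(scores) - 1):
        # penalize an earlier frame outscoring a later one
        violation = max(0.0, scores[t] - scores[t + 1] + margin)
        # weight the pair by how confident the later frame's prediction is
        weight = 1.0 / (1.0 + uncertainty[t + 1])
        total += weight * violation
    return total / (len(scores) - 1)

loss = uncertainty_ranking_loss([0.1, 0.3, 0.2, 0.6], [0.5, 0.4, 0.3, 0.2])
print(round(loss, 4))  # 0.0256
```

A monotonically increasing score sequence incurs zero loss, so the penalty only fires on out-of-order (and confidently out-of-order) predictions.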
no code implementations • 20 Jul 2020 • Wentao Bao, Qi Yu, Yu Kong
Monocular 3D object detection aims to detect objects in a 3D physical world from a single camera.