1 code implementation • 27 Dec 2024 • Xiaoyang Liu, Boran Wen, Xinpeng Liu, Zizheng Zhou, Hongwei Fan, Cewu Lu, Lizhuang Ma, Yulong Chen, Yong-Lu Li
Accordingly, an object grounding task is proposed, which expects vision systems to discover the objects being interacted with.
no code implementations • 9 Dec 2024 • Xinpeng Liu, Junxuan Liang, Chenshuo Zhang, Zixuan Cai, Cewu Lu, Yong-Lu Li
We identify the heterogeneity of existing human motion understanding efforts as a major obstacle to this.
no code implementations • 6 Dec 2024 • Zehao Wang, Xinpeng Liu, Xiaoqian Wu, Yudonglin Zhang, Zhou Fang, Yifan Fang, Junfu Pu, Cewu Lu, Yong-Lu Li
In this paper, to the best of our knowledge, we are the first to investigate the verb hallucination phenomenon of MLLMs from various perspectives.
1 code implementation • 25 Nov 2024 • Xinpeng Liu, Hiroaki Santo, Yosuke Toda, Fumio Okura
While recent graph generation methods successfully infer thin structures from images, it is challenging to constrain the output graph strictly to a tree structure.
no code implementations • 23 Oct 2024 • Xinpeng Liu, Junxuan Liang, Zili Lin, Haowen Hou, Yong-Lu Li, Cewu Lu
In light of this, we devise an efficient data collection pipeline with state-of-the-art motion imitation algorithms and physics simulators, resulting in a large-scale human inverse dynamics benchmark named Imitated Dynamics (ImDy).
no code implementations • 17 Dec 2023 • SiQi Liu, Yong-Lu Li, Zhou Fang, Xinpeng Liu, Yang You, Cewu Lu
To explore an effective embedding of HAOI for the machine, we build a new benchmark on 3D HAOI consisting of primitives together with their images and propose a task requiring machines to recover 3D HAOI using primitives from images.
no code implementations • 5 Dec 2023 • Xinpeng Liu, Haowen Hou, Yanchao Yang, Yong-Lu Li, Cewu Lu
High-quality data with simultaneously captured human and 3D environments is hard to acquire, resulting in limited data diversity and complexity.
no code implementations • 6 Oct 2023 • Xinpeng Liu, Yong-Lu Li, Ailing Zeng, Zizheng Zhou, Yang You, Cewu Lu
Motion understanding aims to establish a reliable mapping between motion and action semantics, yet it remains a challenging many-to-many problem.
no code implementations • CVPR 2024 • Yong-Lu Li, Xiaoqian Wu, Xinpeng Liu, Zehao Wang, Yiming Dou, Yikun Ji, Junyi Zhang, Yixing Li, Jingru Tan, Xudong Lu, Cewu Lu
By aligning the classes of previous datasets to our semantic space, we gather (image/video/skeleton/MoCap) datasets into a unified database under a unified label system, i.e., bridging "isolated islands" into a "Pangea".
1 code implementation • 28 Jul 2022 • Xiaoqian Wu, Yong-Lu Li, Xinpeng Liu, Junyi Zhang, Yuzhe Wu, Cewu Lu
Though significant progress has been made, interactiveness learning remains a challenging problem in HOI detection: existing methods usually generate redundant negative H-O pair proposals and fail to effectively extract interactive pairs.
Ranked #9 on Human-Object Interaction Detection on V-COCO
1 code implementation • CVPR 2022 • Xinpeng Liu, Yong-Lu Li, Xiaoqian Wu, Yu-Wing Tai, Cewu Lu, Chi-Keung Tang
Human-Object Interaction (HOI) detection plays a core role in activity understanding.
1 code implementation • 19 Feb 2022 • Xinpeng Liu, Yong-Lu Li, Cewu Lu
To achieve OC-immunity, we propose an OC-immune network that decouples the inputs from OC, extracts OC-immune representations, and leverages uncertainty quantification to generalize to unseen objects.
3 code implementations • 14 Feb 2022 • Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu, Yue Xu, Hao-Shu Fang, Cewu Lu
Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.
1 code implementation • 25 Jan 2021 • Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Xijie Huang, Liang Xu, Cewu Lu
Human-Object Interaction (HOI) detection is an important problem to understand how humans interact with objects.
Ranked #28 on Human-Object Interaction Detection on V-COCO
2 code implementations • NeurIPS 2020 • Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Cewu Lu
Meanwhile, isolated humans and objects can also be re-integrated into a coherent HOI.
Ranked #20 on Human-Object Interaction Detection on V-COCO
1 code implementation • CVPR 2020 • Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, Cewu Lu
In light of these, we propose a detailed 2D-3D joint representation learning method.
Ranked #1 on Human-Object Interaction Detection on Ambiguous-HOI
2 code implementations • CVPR 2020 • Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Shiyi Wang, Hao-Shu Fang, Ze Ma, Mingyang Chen, Cewu Lu
In light of this, we propose a new path: infer human part states first and then reason out the activities based on part-level semantics.
Ranked #3 on Human-Object Interaction Detection on HICO
4 code implementations • 13 Apr 2019 • Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Mingyang Chen, Ze Ma, Shiyi Wang, Hao-Shu Fang, Cewu Lu
To address these and promote the activity understanding, we build a large-scale Human Activity Knowledge Engine (HAKE) based on the human body part states.
Ranked #2 on Human-Object Interaction Detection on HICO (using extra training data)