Search Results for author: Hongchen Luo

Found 10 papers, 7 papers with code

Intention-driven Ego-to-Exo Video Generation

no code implementations • 14 Mar 2024 • Hongchen Luo, Kai Zhu, Wei Zhai, Yang Cao

Finally, the inferred human movement and high-level action descriptions jointly guide the generation of exocentric motion and interaction content (i.e., corresponding optical flow and occlusion maps) in the backward process of the diffusion model, ultimately warping them into the corresponding exocentric video.

Optical Flow Estimation Stereo Matching +1
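The final step this abstract describes, warping generated content into an exocentric frame using optical flow and an occlusion map, corresponds to a standard backward warp. Below is a minimal NumPy sketch of that operation; the function name, shapes, and nearest-neighbor sampling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def backward_warp(src, flow, occlusion=None):
    """Backward-warp a single-channel image with a dense flow field, then
    optionally apply an occlusion mask.

    Assumed shapes (illustrative only):
      src:       (H, W) source image
      flow:      (H, W, 2) per-pixel displacement (dx, dy) in pixels
      occlusion: optional (H, W) visibility mask in [0, 1]
    """
    h, w = src.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample the source at (x + dx, y + dy), rounded and clamped to the image.
    sx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    out = src[sy, sx]
    if occlusion is not None:
        # Suppress pixels the occlusion map marks as invisible.
        out = out * occlusion
    return out
```

With a zero flow field the warp is the identity; a uniform horizontal flow shifts the image while the clamp replicates the border column.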

LEMON: Learning 3D Human-Object Interaction Relation from 2D Images

no code implementations • 14 Dec 2023 • Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Zheng-Jun Zha

These methods underexploit certain correlations between the interaction counterparts (human and object) and struggle to address the uncertainty in interactions.

Human-Object Interaction Detection Object +1

Grounding 3D Object Affordance from 2D Interactions in Images

1 code implementation • ICCV 2023 • Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, Zheng-Jun Zha

Comprehensive experiments on PIAD demonstrate the reliability of the proposed task and the superiority of our method.

Object

Leverage Interactive Affinity for Affordance Learning

1 code implementation • CVPR 2023 • Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, DaCheng Tao

Perceiving potential "action possibilities" (i.e., affordance) regions of images and learning interactive functionalities of objects from human demonstration is a challenging task due to the diversity of human-object interactions.

Human-Object Interaction Detection Object

Grounded Affordance from Exocentric View

2 code implementations • 28 Aug 2022 • Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, DaCheng Tao

Due to the diversity of interactive affordances, different individuals interact with the same object in diverse ways, which makes it difficult to establish an explicit link between object parts and affordance labels.

Human-Object Interaction Detection Object +1

Learning Affordance Grounding from Exocentric Images

2 code implementations • CVPR 2022 • Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, DaCheng Tao

To empower an agent with such ability, this paper proposes a task of affordance grounding from exocentric view, i.e., given exocentric human-object interaction and egocentric object images, learning the affordance knowledge of the object and transferring it to the egocentric image using only the affordance label as supervision.

Human-Object Interaction Detection Object +1

Phrase-Based Affordance Detection via Cyclic Bilateral Interaction

4 code implementations • 24 Feb 2022 • Liangsheng Lu, Wei Zhai, Hongchen Luo, Yu Kang, Yang Cao

In this paper, we explore perceiving affordance from a vision-language perspective and consider the challenging phrase-based affordance detection problem, i.e., given a set of phrases describing the action purposes, all the object regions in a scene with the same affordance should be detected.

Affordance Detection

Learning Visual Affordance Grounding from Demonstration Videos

no code implementations • 12 Aug 2021 • Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, DaCheng Tao

For the object branch, we introduce a semantic enhancement module (SEM) that makes the network focus on different parts of the object according to the action class, and a distillation loss that aligns the output features of the object branch with those of the video branch, transferring the knowledge of the video branch to the object branch.

Action Recognition Object +1
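The distillation loss this abstract mentions, aligning object-branch (student) features with video-branch (teacher) features, is commonly an elementwise regression term with the teacher held fixed. Below is a minimal NumPy sketch of such a feature-alignment loss; the function name and the choice of mean squared error are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def feature_distillation_loss(obj_feat, vid_feat):
    """Mean-squared-error alignment between student (object branch) and
    teacher (video branch) features.

    The teacher features are treated as constants (no gradient flows to
    them), as is standard in feature distillation. Shapes are assumed to
    match, e.g. (batch, dim). Illustrative sketch only.
    """
    obj_feat = np.asarray(obj_feat, dtype=np.float64)
    vid_feat = np.asarray(vid_feat, dtype=np.float64)  # fixed teacher target
    return float(np.mean((obj_feat - vid_feat) ** 2))
```

The loss is zero when the branches already agree and grows quadratically with their feature gap, pulling the object branch toward the video branch during training.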

One-Shot Object Affordance Detection in the Wild

1 code implementation • 8 Aug 2021 • Wei Zhai, Hongchen Luo, Jing Zhang, Yang Cao, DaCheng Tao

To empower robots with this ability in unseen scenarios, we first study the challenging one-shot affordance detection problem in this paper, i.e., given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected.

Action Recognition Affordance Detection +3

One-Shot Affordance Detection

2 code implementations • 28 Jun 2021 • Hongchen Luo, Wei Zhai, Jing Zhang, Yang Cao, DaCheng Tao

To empower robots with this ability in unseen scenarios, we consider the challenging one-shot affordance detection problem in this paper, i.e., given a support image that depicts the action purpose, all objects in a scene with the common affordance should be detected.

4k Affordance Detection
