Search Results for author: Dongheui Lee

Found 24 papers, 5 papers with code

Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction

no code implementations 7 Feb 2024 Esteve Valls Mascaro, Yashuai Yan, Dongheui Lee

Integrating robots into populated environments is a complex challenge that requires an understanding of human social dynamics.

Motion Forecasting

ImitationNet: Unsupervised Human-to-Robot Motion Retargeting via Shared Latent Space

no code implementations 11 Sep 2023 Yashuai Yan, Esteve Valls Mascaro, Dongheui Lee

Additionally, we propose a consistency term to build a common latent space that captures the similarity of the poses with precision while allowing direct robot motion control from the latent space.

Collision Avoidance Contrastive Learning +1

A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis

no code implementations 14 Aug 2023 Esteve Valls Mascaro, Hyemin Ahn, Dongheui Lee

Experimental results show that our model successfully forecasts human motion on the Human3.6M dataset.

Motion Synthesis

Input-Output Feedback Linearization Preserving Task Priority for Multivariate Nonlinear Systems Having Singular Input Gain Matrix

no code implementations 3 May 2023 Sang-ik An, Dongheui Lee, Gyunghoon Park

The key observation is that the usual input-output linearization problem can be interpreted as solving a system of simultaneous linear equations associated with the input gain matrix. Thus, even at points where the input gain matrix becomes singular, a part of the linear equations can still be solved, making a subset of the input-output relations linear or close to linear.

Can We Use Diffusion Probabilistic Models for 3D Motion Prediction?

no code implementations 28 Feb 2023 Hyemin Ahn, Esteve Valls Mascaro, Dongheui Lee

Since many researchers have observed the promise of the recent diffusion probabilistic model, its effectiveness in image generation is now being actively studied.

Image Generation Motion Prediction

Robust Human Motion Forecasting using Transformer-based Model

no code implementations 16 Feb 2023 Esteve Valls Mascaro, Shuo Ma, Hyemin Ahn, Dongheui Lee

In addition, our model is tested in conditions where the human motion is severely occluded, demonstrating its robustness in reconstructing and predicting 3D human motion in a highly noisy environment.

Motion Forecasting

Intention-Conditioned Long-Term Human Egocentric Action Forecasting

1 code implementation 25 Jul 2022 Esteve Valls Mascaro, Hyemin Ahn, Dongheui Lee

Our framework first extracts two levels of human information over the N observed videos of human actions through a Hierarchical Multi-task MLP Mixer (H3M).

Action Anticipation Long Term Action Anticipation

Long-Horizon Planning and Execution with Functional Object-Oriented Networks

no code implementations 12 Jul 2022 David Paulius, Alejandro Agostini, Dongheui Lee

We demonstrate our entire approach on long-horizon tasks in CoppeliaSim and show how learned action contexts can be extended to never-before-seen scenarios.

Motion Planning Object +1

A Road-map to Robot Task Execution with the Functional Object-Oriented Network

no code implementations 1 Jun 2021 David Paulius, Alejandro Agostini, Yu Sun, Dongheui Lee

Following work on joint object-action representations, the functional object-oriented network (FOON) was introduced as a knowledge graph representation for robots.

Refining Action Segmentation With Hierarchical Video Representations

1 code implementation ICCV 2021 Hyemin Ahn, Dongheui Lee

In this paper, we propose Hierarchical Action Segmentation Refiner (HASR), which can refine temporal action segmentation results from various models by understanding the overall context of a given video in a hierarchical way.

Action Segmentation Segmentation

Visually Grounding Language Instruction for History-Dependent Manipulation

no code implementations 16 Dec 2020 Hyemin Ahn, Obin Kwon, Kyoungdo Kim, Jaeyeon Jeong, Howoong Jun, Hongjung Lee, Dongheui Lee, Songhwai Oh

We also present a relevant dataset and a model that can serve as a baseline, and show that our model trained on the proposed dataset can be applied to the real world using CycleGAN.

Efficient State Abstraction using Object-centered Predicates for Manipulation Planning

no code implementations 16 Jul 2020 Alejandro Agostini, Dongheui Lee

To tackle these limitations, we propose an object-centered representation that characterizes a much wider set of possible changes in configuration spaces than its traditional observer-perspective counterpart.

Object

A Human Action Descriptor Based on Motion Coordination

no code implementations 20 Nov 2019 Pietro Falco, Matteo Saveriano, Eka Gibran Hasany, Nicholas H. Kirk, Dongheui Lee

The second step enriches the descriptor considering minimum and maximum joint velocities and the correlations between the most informative joints.

On Policy Learning Robust to Irreversible Events: An Application to Robotic In-Hand Manipulation

no code implementations 20 Nov 2019 Pietro Falco, Abdallah Attawia, Matteo Saveriano, Dongheui Lee

This way, the occurrence of object slipping during the learning procedure, which we consider an irreversible event, is significantly reduced.

Reinforcement Learning (RL)

Point-to-Pose Voting based Hand Pose Estimation using Residual Permutation Equivariant Layer

no code implementations CVPR 2019 Shile Li, Dongheui Lee

In addition to the pose estimation task, the voting-based scheme can also provide point cloud segmentation results without segmentation ground truth.

Hand Pose Estimation Point Cloud Segmentation

A Preliminary Study on the Learning Informativeness of Data Subsets

no code implementations 28 Sep 2015 Simon Kaltenbacher, Nicholas H. Kirk, Dongheui Lee

We prove the concept on human-written texts, and conjecture that this work will reduce the training data size for sequential instructions, while preserving semantic relations, when gathering information from large remote sources.

Informativeness
