Search Results for author: Yingdong Hu

Found 13 papers, 5 papers with code

HuB: Learning Extreme Humanoid Balance

no code implementations • 12 May 2025 • Tong Zhang, Boyuan Zheng, Ruiqian Nai, Yingdong Hu, Yen-Jen Wang, Geng Chen, Fanqi Lin, Jiongye Li, Chuye Hong, Koushil Sreenath, Yang Gao

The human body demonstrates exceptional motor capabilities, such as standing steadily on one foot or performing a high kick with the leg raised over 1.5 meters, both requiring precise balance control.

Humanoid Control

Dynamics-Aware Gaussian Splatting Streaming Towards Fast On-the-Fly 4D Reconstruction

no code implementations • 22 Nov 2024 • Zhening Liu, Yingdong Hu, Xinjie Zhang, Rui Song, Jiawei Shao, Zehong Lin, Jun Zhang

The recent development of 3D Gaussian Splatting (3DGS) has led to great interest in 4D dynamic spatial reconstruction.

3DGS 4D reconstruction

EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings

no code implementations • 2 Oct 2024 • Yingdong Hu, Zhening Liu, Jiawei Shao, Zehong Lin, Jun Zhang

To address this limitation, we propose a real-time pipeline named EVA-Gaussian for 3D human novel view synthesis across diverse camera settings.

Novel View Synthesis Position

Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation

1 code implementation • 15 Jun 2024 • Tong Zhang, Yingdong Hu, Jiacheng You, Yang Gao

SGRv2 excels in RLBench tasks with keyframe control using merely 5 demonstrations and surpasses the RVT baseline in 23 of 26 tasks.

Imitation Learning Inductive Bias +1

Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning

no code implementations • 29 Nov 2023 • Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, Yang Gao

In this study, we are interested in imbuing robots with the capability of physically-grounded task planning.

Task Planning

Imitation Learning from Observation with Automatic Discount Scheduling

1 code implementation • 11 Oct 2023 • Yuyang Liu, Weijun Dong, Yingdong Hu, Chuan Wen, Zhao-Heng Yin, Chongjie Zhang, Yang Gao

Nonetheless, we identify that tasks characterized by a progress dependency property pose significant challenges for such approaches; in these tasks, the agent needs to first learn the expert's preceding behaviors before mastering the subsequent ones (a minimal sketch of the scheduling idea follows this entry).

Imitation Learning reinforcement-learning +2
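As a rough illustration of the general idea behind Automatic Discount Scheduling (not the authors' exact formulation), the sketch below maps a task-progress estimate to an RL discount factor, so the agent first optimizes for the expert's early behaviors and only later extends its horizon. The linear schedule, the bounds, and the progress proxy are all illustrative assumptions.

```python
import numpy as np

def scheduled_discount(progress, gamma_min=0.9, gamma_max=0.99):
    """Map an estimated task-progress value in [0, 1] to a discount factor.

    A low discount keeps the agent focused on rewards from the expert's
    early behaviors; as estimated progress grows, the horizon lengthens
    so later behaviors start to matter. The linear schedule and the
    progress estimate itself are illustrative assumptions.
    """
    progress = float(np.clip(progress, 0.0, 1.0))
    return gamma_min + (gamma_max - gamma_min) * progress

# Example: progress could be the fraction of expert frames the agent
# already matches (a hypothetical proxy, not the paper's metric).
for p in (0.0, 0.5, 1.0):
    print(f"progress={p:.1f} -> gamma={scheduled_discount(p):.3f}")
```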

Policy Contrastive Imitation Learning

no code implementations • 6 Jul 2023 • Jialei Huang, ZhaoHeng Yin, Yingdong Hu, Yang Gao

However, the performance of AIL remains unsatisfactory on more challenging tasks.

Binary Classification Imitation Learning +1

A Universal Semantic-Geometric Representation for Robotic Manipulation

1 code implementation • 18 Jun 2023 • Tong Zhang, Yingdong Hu, Hanchen Cui, Hang Zhao, Yang Gao

To this end, we present Semantic-Geometric Representation (SGR), a universal perception module for robotics that leverages the rich semantic information of large-scale pre-trained 2D models and inherits the merits of 3D spatial reasoning (a minimal fusion sketch follows this entry).

3D geometry Robot Manipulation +1
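For the SGR entry above, the following minimal PyTorch sketch illustrates the stated idea of combining semantic features from a large-scale pre-trained 2D model with learned 3D geometric features. The module sizes, the per-point semantic inputs, and the simple concatenation fusion are illustrative assumptions, not the released SGR architecture.

```python
import torch
import torch.nn as nn

class SemanticGeometricFusion(nn.Module):
    """Fuse per-point semantic features (assumed to be lifted from a frozen,
    pre-trained 2D backbone onto the point cloud) with geometric features
    from a small learned point encoder. Shapes and the concatenation
    fusion are illustrative assumptions."""

    def __init__(self, sem_dim=512, geo_dim=256, out_dim=256):
        super().__init__()
        # Learned geometric encoder over raw xyz coordinates.
        self.geo_encoder = nn.Sequential(
            nn.Linear(3, geo_dim), nn.ReLU(), nn.Linear(geo_dim, geo_dim)
        )
        # Simple concatenation-then-project fusion of semantics + geometry.
        self.fuse = nn.Sequential(
            nn.Linear(sem_dim + geo_dim, out_dim), nn.ReLU()
        )

    def forward(self, points, point_semantics):
        # points: (B, N, 3) xyz; point_semantics: (B, N, sem_dim)
        geo = self.geo_encoder(points)
        return self.fuse(torch.cat([point_semantics, geo], dim=-1))

# Toy usage with random tensors standing in for a real observation.
model = SemanticGeometricFusion()
feats = model(torch.randn(2, 1024, 3), torch.randn(2, 1024, 512))
print(feats.shape)  # torch.Size([2, 1024, 256])
```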

For Pre-Trained Vision Models in Motor Control, Not All Policy Learning Methods are Created Equal

no code implementations • 10 Apr 2023 • Yingdong Hu, Renhao Wang, Li Erran Li, Yang Gao

Our study yields a series of intriguing results, including the discovery that the effectiveness of pre-training is highly dependent on the choice of the downstream policy learning algorithm.

All Imitation Learning +1

Semantic-Aware Fine-Grained Correspondence

1 code implementation • 21 Jul 2022 • Yingdong Hu, Renhao Wang, Kaifeng Zhang, Yang Gao

Establishing visual correspondence across images is a challenging and essential task.

Pose Tracking Self-Supervised Learning +4

A Multi-channel Training Method Boost the Performance

no code implementations • 27 Dec 2021 • Yingdong Hu

Deep convolutional neural networks have revolutionized computer vision and shown superior performance on tasks such as classification and segmentation.

Classification
