Search Results for author: Ri-Zhao Qiu

Found 9 papers, 1 paper with code

AMO: Adaptive Motion Optimization for Hyper-Dexterous Humanoid Whole-Body Control

no code implementations · 6 May 2025 · Jialong Li, Xuxin Cheng, Tianshu Huang, Shiqi Yang, Ri-Zhao Qiu, Xiaolong Wang

Humanoid robots derive much of their dexterity from hyper-dexterous whole-body movements, enabling tasks that require a large operational workspace, such as picking objects off the ground.

Tasks: Imitation Learning, Reinforcement Learning (RL)

Visual Acoustic Fields

no code implementations · 31 Mar 2025 · Yuelei Li, HyunJin Kim, Fangneng Zhan, Ri-Zhao Qiu, Mazeyu Ji, Xiaojun Shan, Xueyan Zou, Paul Liang, Hanspeter Pfister, Xiaolong Wang

The sound localization module enables querying the 3D scene, represented by the feature-augmented 3DGS, to localize hitting positions based on the sound sources.

Tasks: 3DGS

M3: 3D-Spatial MultiModal Memory

1 code implementation · 20 Mar 2025 · Xueyan Zou, Yuchen Song, Ri-Zhao Qiu, Xuanbin Peng, Jianglong Ye, Sifei Liu, Xiaolong Wang

We present 3D Spatial MultiModal Memory (M3), a multimodal memory system designed to retain information about medium-sized static scenes through video sources for visual perception.

Tasks: Feature Splatting

Humanoid Policy ~ Human Policy

no code implementations · 17 Mar 2025 · Ri-Zhao Qiu, Shiqi Yang, Xuxin Cheng, Chaitanya Chawla, Jialong Li, Tairan He, Ge Yan, David J. Yoon, Ryan Hoque, Lars Paulsen, Ge Yang, Jian Zhang, Sha Yi, Guanya Shi, Xiaolong Wang

The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions.

WildLMa: Long Horizon Loco-Manipulation in the Wild

no code implementations · 22 Nov 2024 · Ri-Zhao Qiu, Yuchen Song, Xuanbin Peng, Sai Aneesh Suryadevara, Ge Yang, Minghuan Liu, Mazeyu Ji, Chengzhe Jia, Ruihan Yang, Xueyan Zou, Xiaolong Wang

'In-the-wild' mobile manipulation aims to deploy robots in diverse real-world environments, which requires the robot to (1) have skills that generalize across object configurations, (2) be capable of long-horizon task execution in diverse environments, and (3) perform complex manipulation beyond pick-and-place.

Tasks: Imitation Learning

GraspSplats: Efficient Manipulation with 3D Feature Splatting

no code implementations · 3 Sep 2024 · Mazeyu Ji, Ri-Zhao Qiu, Xueyan Zou, Xiaolong Wang

With extensive experiments on a Franka robot, we demonstrate that GraspSplats significantly outperforms existing methods under diverse task settings.

Tasks: Feature Splatting, NeRF

Feature Splatting: Language-Driven Physics-Based Scene Synthesis and Editing

no code implementations · 1 Apr 2024 · Ri-Zhao Qiu, Ge Yang, Weijia Zeng, Xiaolong Wang

Scene representations using 3D Gaussian primitives have produced excellent results in modeling the appearance of static and dynamic 3D scenes.

Tasks: Feature Splatting

Visual Whole-Body Control for Legged Loco-Manipulation

no code implementations · 25 Mar 2024 · Minghuan Liu, Zixuan Chen, Xuxin Cheng, Yandong Ji, Ri-Zhao Qiu, Ruihan Yang, Xiaolong Wang

We propose a framework that can conduct the whole-body control autonomously with visual observations.

Tasks: Position

Learning Generalizable Feature Fields for Mobile Manipulation

no code implementations · 12 Mar 2024 · Ri-Zhao Qiu, Yafei Hu, Yuchen Song, Ge Yang, Yang Fu, Jianglong Ye, Jiteng Mu, Ruihan Yang, Nikolay Atanasov, Sebastian Scherer, Xiaolong Wang

An open problem in mobile manipulation is how to represent objects and scenes in a unified manner so that robots can use both for navigation and manipulation.

Tasks: Novel View Synthesis
