Search Results for author: Xiangyun Meng

Found 3 papers, 0 papers with code

LiDAR-UDA: Self-ensembling Through Time for Unsupervised LiDAR Domain Adaptation

no code implementations • ICCV 2023 • Amirreza Shaban, Joonho Lee, Sanghun Jung, Xiangyun Meng, Byron Boots

Existing self-training methods use a model trained on labeled source data to generate pseudo labels for target data and refine the predictions via fine-tuning the network on the pseudo labels.

Pseudo Label • Unsupervised Domain Adaptation
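
The self-training procedure described in the abstract is the standard pseudo-labeling recipe: predict labels for unlabeled target data with a source-trained model, then fine-tune on the confident predictions. Below is a minimal, hypothetical sketch of that generic loop, assuming a PyTorch point-cloud segmentation model and data loader; the function names and confidence threshold are illustrative assumptions, not the LiDAR-UDA implementation.

```python
import torch
import torch.nn.functional as F


def self_training_round(model, optimizer, target_loader, conf_threshold=0.9):
    """Generate pseudo labels on unlabeled target data, then fine-tune on them."""
    # Step 1: predict pseudo labels with the current (source-trained) model.
    model.eval()
    pseudo_batches = []
    with torch.no_grad():
        for points in target_loader:
            logits = model(points)                      # (batch, classes, num_points)
            conf, labels = F.softmax(logits, dim=1).max(dim=1)
            labels[conf < conf_threshold] = -1          # mask out low-confidence points
            pseudo_batches.append((points, labels))

    # Step 2: refine the predictions by fine-tuning the network on the pseudo labels.
    model.train()
    for points, labels in pseudo_batches:
        loss = F.cross_entropy(model(points), labels, ignore_index=-1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```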

Continuous Versatile Jumping Using Learned Action Residuals

no code implementations • 17 Apr 2023 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Jumping is essential for legged robots to traverse difficult terrain.
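
The "learned action residuals" named in the title refer to the residual-policy pattern: a learned correction is added on top of a nominal controller's action. The sketch below is a hypothetical illustration of that pattern only; the controller, policy, and scaling interfaces are assumptions, not the paper's system.

```python
import numpy as np


def act(state, nominal_controller, residual_policy, residual_scale=0.2):
    """Residual-policy pattern: final action = nominal action + small learned correction."""
    base_action = nominal_controller(state)   # e.g. a hand-designed jumping controller
    residual = residual_policy(state)         # learned correction, kept bounded
    return base_action + residual_scale * np.clip(residual, -1.0, 1.0)
```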

Learning Semantics-Aware Locomotion Skills from Human Demonstration

no code implementations • 27 Jun 2022 • Yuxiang Yang, Xiangyun Meng, Wenhao Yu, Tingnan Zhang, Jie Tan, Byron Boots

Using only 40 minutes of human demonstration data, our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6km without failure at close-to-optimal speed.
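
For illustration, the behavior described above, adjusting speed and gait from perceived terrain semantics, can be pictured as a mapping from terrain class to locomotion command. The class names and speed values below are assumptions standing in for the paper's learned skill policy, which is trained from the 40 minutes of human demonstration rather than hand-coded.

```python
from dataclasses import dataclass


@dataclass
class LocomotionCommand:
    forward_speed: float  # m/s
    gait: str             # e.g. "trot" or "walk"


# Stand-in lookup table; in the paper this mapping is learned from demonstration.
TERRAIN_TO_COMMAND = {
    "pavement": LocomotionCommand(1.6, "trot"),
    "grass":    LocomotionCommand(1.0, "trot"),
    "gravel":   LocomotionCommand(0.6, "walk"),
}


def select_command(terrain_class: str) -> LocomotionCommand:
    """Fall back to a slow, conservative gait on unfamiliar terrain."""
    return TERRAIN_TO_COMMAND.get(terrain_class, LocomotionCommand(0.4, "walk"))
```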
