Search Results for author: Xinghang Li

Found 3 papers, 2 papers with code

Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation

1 code implementation · 20 Dec 2023 · Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan Liu, Hang Li, Tao Kong

In this paper, we extend the scope of this effectiveness by showing that visual robot manipulation can significantly benefit from large-scale video generative pre-training.

Ranked #2 on Zero-shot Generalization on CALVIN (using extra training data)

Robot Manipulation · Zero-shot Generalization

Vision-Language Foundation Models as Effective Robot Imitators

No code implementations · 2 Nov 2023 · Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong

We believe RoboFlamingo has the potential to be a cost-effective and easy-to-use solution for robotics manipulation, empowering everyone with the ability to fine-tune their own robotics policy.

Imitation Learning
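The core idea behind imitation learning, which RoboFlamingo builds on, is to fit a policy to expert demonstrations by supervised learning. A minimal behavioral-cloning sketch (a generic illustration of the technique, not RoboFlamingo's implementation; the linear policy and synthetic demonstrations are assumptions for the example):

```python
# Behavioral cloning in its simplest form: regress actions on states
# from expert demonstrations, then query the fitted policy.
import numpy as np

def fit_bc_policy(states, actions):
    """Least-squares fit of a linear policy a = [s, 1] @ W."""
    X = np.hstack([states, np.ones((len(states), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return W

def act(W, state):
    """Predict an action for a single state with the fitted policy."""
    return np.append(state, 1.0) @ W

# Synthetic "expert" demonstrations: action = 2*s0 - s1 (assumed for the demo)
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 2))
actions = states @ np.array([[2.0], [-1.0]])

W = fit_bc_policy(states, actions)
pred = act(W, np.array([1.0, 1.0]))  # expert would output 2*1 - 1 = 1.0
```

Large-scale systems replace the linear map with a deep network and the synthetic data with real robot trajectories, but the supervised objective is the same.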

Mixed Neural Voxels for Fast Multi-view Video Synthesis

1 code implementation · ICCV 2023 · Feng Wang, Sinan Tan, Xinghang Li, Zeyue Tian, Yafei Song, Huaping Liu

In this paper, we present a novel method named MixVoxels to better represent the dynamic scenes with fast training speed and competitive rendering qualities.
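Voxel-based scene representations like the one MixVoxels builds on store features in a dense 3D grid and sample them at continuous points by trilinear interpolation. A minimal sketch of that basic query operation (illustrative only, not the MixVoxels code; the grid contents are assumptions for the example):

```python
# Dense feature voxel grid queried by trilinear interpolation:
# the fundamental read operation of voxel scene representations.
import numpy as np

def trilinear_query(grid, point):
    """Sample a (R, R, R, C) feature grid at a continuous point in [0, R-1]^3."""
    idx = np.clip(np.floor(point).astype(int), 0, np.array(grid.shape[:3]) - 2)
    frac = point - idx  # fractional offset inside the voxel cell
    out = np.zeros(grid.shape[3])
    for dx in (0, 1):           # blend the 8 corner features
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((frac[0] if dx else 1 - frac[0]) *
                     (frac[1] if dy else 1 - frac[1]) *
                     (frac[2] if dz else 1 - frac[2]))
                out += w * grid[idx[0] + dx, idx[1] + dy, idx[2] + dz]
    return out

# 4x4x4 grid with one channel whose value equals the x coordinate,
# so interpolation at x = 1.5 should recover 1.5 exactly.
R = 4
grid = np.zeros((R, R, R, 1))
for x in range(R):
    grid[x, :, :, 0] = x

feat = trilinear_query(grid, np.array([1.5, 2.0, 0.25]))  # → [1.5]
```

Because grid lookups are O(1), such representations train and render far faster than coordinate-MLP fields, which is the speed advantage the abstract refers to.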
