1 code implementation • 20 Dec 2023 • Hongtao Wu, Ya Jing, Chilam Cheang, Guangzeng Chen, Jiafeng Xu, Xinghang Li, Minghuan Liu, Hang Li, Tao Kong
In this paper, we show that visual robot manipulation can significantly benefit from large-scale video generative pre-training, extending the demonstrated effectiveness of generative pre-training to robotics.
Ranked #2 on Zero-shot Generalization on CALVIN (using extra training data)
no code implementations • 2 Nov 2023 • Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, Hang Li, Tao Kong
We believe RoboFlamingo has the potential to be a cost-effective and easy-to-use solution for robotics manipulation, empowering everyone with the ability to fine-tune their own robotics policy.
no code implementations • 9 May 2022 • Chilam Cheang, Haitao Lin, Yanwei Fu, Xiangyang Xue
This paper studies the task of grasping arbitrary objects from known categories by following free-form language instructions.
no code implementations • 9 May 2022 • Haitao Lin, Chilam Cheang, Yanwei Fu, Xiangyang Xue
Physical robot experiments confirm the utility of our method in cluttered scenes.
no code implementations • CVPR 2022 • Haitao Lin, Zichang Liu, Chilam Cheang, Yanwei Fu, Guodong Guo, Xiangyang Xue
Concatenating the observed point cloud with its symmetric counterpart reconstructs a coarse object shape, which facilitates estimation of the object center (3D translation) and 3D size.
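The symmetry-based completion step can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a known symmetry plane through the origin of the object frame (the function names, the plane normal, and the axis-aligned bounding-box readout are all illustrative choices here), and simply reflects the partial cloud, concatenates the copy, and reads off a coarse center and size:

```python
import numpy as np

def mirror_and_complete(points, plane_normal=np.array([1.0, 0.0, 0.0])):
    """Reflect an observed point cloud across an assumed symmetry plane
    (through the origin, with the given unit normal) and concatenate the
    mirrored copy, yielding a coarser but more complete shape.
    NOTE: illustrative sketch; the plane is an assumption, not the
    paper's learned symmetry."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Householder reflection of each point p: p' = p - 2 (p . n) n
    mirrored = points - 2.0 * (points @ n)[:, None] * n[None, :]
    return np.concatenate([points, mirrored], axis=0)

def center_and_size(points):
    """Estimate 3D translation (box center) and 3D size (box extents)
    from the axis-aligned bounding box of the completed cloud."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo

# Usage: a half-cloud observed only on the +x side of the object
partial = np.array([[1.0, 0.0, 0.0],
                    [0.5, 1.0, 2.0]])
full = mirror_and_complete(partial)
center, size = center_and_size(full)
```

The point of the mirroring is that a single-view observation covers only one side of the object, so its bounding box underestimates extent along the viewing direction; the reflected copy restores the occluded half, making the box center and extents far closer to the true translation and size.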