no code implementations • 9 Nov 2023 • Sammy Christen, Lan Feng, Wei Yang, Yu-Wei Chao, Otmar Hilliges, Jie Song
In this paper, we introduce a framework that can generate plausible human grasping motions suitable for training the robot.
no code implementations • 10 Jul 2023 • Yuzhe Qin, Wei Yang, Binghao Huang, Karl Van Wyk, Hao Su, Xiaolong Wang, Yu-Wei Chao, Dieter Fox
In real-world experiments, AnyTeleop outperforms a previous system designed for specific robot hardware, achieving a higher success rate with the same robot.
1 code implementation • 6 Jul 2023 • Jishnu Jaykumar P, Kamalesh Palanisamy, Yu-Wei Chao, Xinya Du, Yu Xiang
The two encoders are used to compute prototypes of image classes for classification.
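The prototype idea mentioned in this entry (averaging support embeddings per class and classifying queries by nearest prototype) can be illustrated with a minimal sketch. The embeddings, dimensions, and data below are random placeholders, not the paper's actual encoders or dataset:

```python
# Minimal sketch of prototype-based few-shot classification, assuming
# precomputed feature embeddings (not the paper's actual encoders).
import numpy as np

def class_prototypes(support_feats, support_labels):
    """Average the support embeddings of each class into one prototype."""
    classes = np.unique(support_labels)
    protos = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    return classes, protos

def classify(query_feats, prototypes, classes):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(
        query_feats[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]

# Toy usage with random 8-D embeddings for 3 classes, 5 support images each.
rng = np.random.default_rng(0)
feats = rng.normal(size=(15, 8))
labels = np.repeat([0, 1, 2], 5)
classes, protos = class_prototypes(feats, labels)
print(classify(rng.normal(size=(4, 8)), protos, classes))
```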
1 code implementation • 26 Jun 2023 • Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox
In simulations, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct).
Ranked #2 on Robot Manipulation on RLBench
no code implementations • CVPR 2023 • Sammy Christen, Wei Yang, Claudia Pérez-D'Arpino, Otmar Hilliges, Dieter Fox, Yu-Wei Chao
We propose the first framework to learn control policies for vision-based human-to-robot handovers, a critical task for human-robot interaction.
no code implementations • 28 Sep 2022 • Zoey Qiuyu Chen, Karl Van Wyk, Yu-Wei Chao, Wei Yang, Arsalan Mousavian, Abhishek Gupta, Dieter Fox
The policy learned from our dataset generalizes well to unseen object poses in both simulation and the real world.
2 code implementations • 6 Jul 2022 • Jishnu Jaykumar P, Yu-Wei Chao, Yu Xiang
We introduce the Few-Shot Object Learning (FewSOL) dataset for object recognition with a few images per object.
no code implementations • 19 May 2022 • Yu-Wei Chao, Chris Paxton, Yu Xiang, Wei Yang, Balakumar Sundaralingam, Tao Chen, Adithyavairavan Murali, Maya Cakmak, Dieter Fox
We analyze the performance of a set of baselines and show a correlation with a real-world evaluation.
no code implementations • 31 Mar 2022 • Wei Yang, Balakumar Sundaralingam, Chris Paxton, Iretiayo Akinola, Yu-Wei Chao, Maya Cakmak, Dieter Fox
However, how to responsively generate smooth motions to take an object from a human is still an open question.
no code implementations • CVPR 2022 • Ankit Goyal, Arsalan Mousavian, Chris Paxton, Yu-Wei Chao, Brian Okorn, Jia Deng, Dieter Fox
Accurate object rearrangement from vision is a crucial problem for a wide variety of real-world robotics applications in unstructured environments.
no code implementations • 9 Nov 2021 • Andreea Bobu, Chris Paxton, Wei Yang, Balakumar Sundaralingam, Yu-Wei Chao, Maya Cakmak, Dieter Fox
Second, we treat this low-dimensional concept as an automatic labeler to synthesize a large-scale high-dimensional data set with the simulator.
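The labeling step described here can be sketched as follows: a cheap, hand-written low-dimensional concept (here, a hypothetical proximity rule on simulated positions) labels high-dimensional observations produced by the simulator. The concept, state format, and threshold are illustrative assumptions, not the paper's definitions:

```python
# Sketch of using a low-dimensional concept as an automatic labeler for
# simulated high-dimensional data; the concept and state layout are invented.
import numpy as np

def concept_label(gripper_pos, object_pos, threshold=0.05):
    """Hypothetical concept: 'near the object' if the gripper is within
    `threshold` meters of the object center."""
    return float(np.linalg.norm(gripper_pos - object_pos) < threshold)

def synthesize_dataset(simulate_step, render, n_samples=1000):
    """Roll the simulator and label each high-dimensional observation with
    the cheap low-dimensional concept instead of human annotation."""
    data = []
    for _ in range(n_samples):
        state = simulate_step()                       # low-dim simulator state
        label = concept_label(state["gripper"], state["object"])
        data.append((render(state), label))           # high-dim obs + label
    return data

# Toy usage with a dummy simulator and "renderer".
rng = np.random.default_rng(0)
step = lambda: {"gripper": rng.uniform(size=3), "object": rng.uniform(size=3)}
dataset = synthesize_dataset(step, lambda s: np.zeros((64, 64, 3)), n_samples=10)
print(len(dataset), dataset[0][1])
```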
2 code implementations • CVPR 2021 • Yu-Wei Chao, Wei Yang, Yu Xiang, Pavlo Molchanov, Ankur Handa, Jonathan Tremblay, Yashraj S. Narang, Karl Van Wyk, Umar Iqbal, Stan Birchfield, Jan Kautz, Dieter Fox
We introduce DexYCB, a new dataset for capturing hand grasping of objects.
no code implementations • 17 Nov 2020 • Wei Yang, Chris Paxton, Arsalan Mousavian, Yu-Wei Chao, Maya Cakmak, Dieter Fox
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects, a user study with naive users (N=6) handing over a subset of 15 objects, and a systematic evaluation examining different ways of handing objects.
no code implementations • 13 Nov 2019 • De-An Huang, Yu-Wei Chao, Chris Paxton, Xinke Deng, Li Fei-Fei, Juan Carlos Niebles, Animesh Garg, Dieter Fox
We further show that by using the automatically inferred goal from the video demonstration, our robot is able to reproduce the same task in a real kitchen environment.
no code implementations • 7 Oct 2019 • Ankur Handa, Karl Van Wyk, Wei Yang, Jacky Liang, Yu-Wei Chao, Qian Wan, Stan Birchfield, Nathan Ratliff, Dieter Fox
Teleoperation offers the possibility of imparting robotic systems with sophisticated reasoning skills, intuition, and creativity to perform tasks.
no code implementations • 20 Aug 2019 • Yu-Wei Chao, Jimei Yang, Weifeng Chen, Jia Deng
We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines.
no code implementations • 7 Aug 2018 • Parker Hill, Babak Zamirai, Shengshuo Lu, Yu-Wei Chao, Michael Laurenzano, Mehrzad Samadi, Marios Papaefthymiou, Scott Mahlke, Thomas Wenisch, Jia Deng, Lingjia Tang, Jason Mars
With ever-increasing computational demand for deep learning, it is critical to investigate the implications of the numeric representation and precision of DNN model weights and activations on computational efficiency.
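One minimal way to probe the effect of weight precision, in the spirit of this entry, is to quantize a float32 weight tensor to a lower bit width and measure the resulting error. The uniform quantizer and bit widths below are generic illustrations, not the method proposed in the paper:

```python
# Generic uniform quantization sketch for studying reduced-precision weights.
import numpy as np

def quantize_uniform(weights, bits):
    """Uniformly quantize a float tensor to 2**bits levels over its range,
    then dequantize back to floats at the reduced precision."""
    lo, hi = weights.min(), weights.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.round((weights - lo) / scale)
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(quantize_uniform(w, bits) - w).mean()
    print(f"{bits}-bit mean absolute error: {err:.5f}")
```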
no code implementations • CVPR 2018 • Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A. Ross, Jia Deng, Rahul Sukthankar
We propose TAL-Net, an improved approach to temporal action localization in video that is inspired by the Faster R-CNN object detection framework.
Ranked #22 on Temporal Action Localization on THUMOS’14
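TAL-Net adapts Faster R-CNN-style proposal detection to the time axis; a core ingredient of any such pipeline is temporal IoU between 1-D segments. The sketch below uses made-up segments and is generic scaffolding, not the TAL-Net implementation:

```python
# Temporal IoU between 1-D segments, a common building block for
# proposal-based temporal action localization (segments are illustrative).
import numpy as np

def temporal_iou(proposals, gt):
    """IoU between segments [start, end): proposals (N, 2) vs. gt (M, 2)."""
    inter_start = np.maximum(proposals[:, None, 0], gt[None, :, 0])
    inter_end = np.minimum(proposals[:, None, 1], gt[None, :, 1])
    inter = np.clip(inter_end - inter_start, 0, None)
    union = (
        (proposals[:, 1] - proposals[:, 0])[:, None]
        + (gt[:, 1] - gt[:, 0])[None, :]
        - inter
    )
    return inter / union

proposals = np.array([[0.0, 3.0], [2.0, 6.0], [10.0, 12.0]])
gt = np.array([[1.0, 4.0], [9.0, 13.0]])
print(temporal_iou(proposals, gt))  # e.g. keep proposals with IoU >= 0.5
```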
no code implementations • CVPR 2017 • Yu-Wei Chao, Jimei Yang, Brian Price, Scott Cohen, Jia Deng
This paper presents the first study on forecasting human dynamics from static images.
no code implementations • 17 Feb 2017 • Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, Jia Deng
We study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them.
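The HOI detection output described here, a human box and an object box linked by an interaction label, can be captured in a small data structure. The naive score fusion below is only an illustration of how the pieces fit together, not the paper's model:

```python
# Minimal representation of an HOI detection; names and the scoring rule
# are assumptions for illustration, not the paper's formulation.
from dataclasses import dataclass
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class HOIDetection:
    human_box: Box
    object_box: Box
    interaction: str   # e.g. "ride bicycle"
    score: float

def pair_score(human_conf, object_conf, interaction_conf):
    """Naive fusion of human, object, and interaction confidences."""
    return human_conf * object_conf * interaction_conf

det = HOIDetection((10, 20, 110, 300), (40, 180, 200, 320),
                   "ride bicycle", pair_score(0.9, 0.8, 0.7))
print(det)
```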
no code implementations • ICCV 2015 • Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, Jia Deng
We introduce a new benchmark "Humans Interacting with Common Objects" (HICO) for recognizing human-object interactions (HOI).
no code implementations • CVPR 2015 • Yu-Wei Chao, Zhan Wang, Rada Mihalcea, Jia Deng
In this paper we introduce the new problem of mining the knowledge of semantic affordance: given an object, determining whether an action can be performed on it.
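The mining task in this entry amounts to filling an object-by-action matrix of binary affordances. A toy lookup over a hand-filled matrix is shown below; the objects, actions, and values are invented for illustration and are not mined results from the paper:

```python
# Toy affordance lookup: rows = objects, columns = actions (values invented).
import numpy as np

objects = ["bicycle", "apple", "laptop"]
actions = ["ride", "eat", "type on"]

affordance = np.array([
    [1, 0, 0],   # bicycle: ride
    [0, 1, 0],   # apple:   eat
    [0, 0, 1],   # laptop:  type on
], dtype=bool)

def can_perform(action, obj):
    """Look up whether `action` is plausible on `obj`."""
    return bool(affordance[objects.index(obj), actions.index(action)])

print(can_perform("eat", "apple"), can_perform("ride", "laptop"))
```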
no code implementations • CVPR 2013 • Wongun Choi, Yu-Wei Chao, Caroline Pantofaru, Silvio Savarese
Visual scene understanding is a difficult problem interleaving object detection, geometric reasoning and scene classification.
Ranked #7 on Room Layout Estimation on SUN RGB-D