1 code implementation • 15 Mar 2024 • Carmelo Sferrazza, Dun-Ming Huang, Xingyu Lin, Youngwoon Lee, Pieter Abbeel
Humanoid robots hold great promise in assisting humans in diverse environments and tasks, owing to the flexibility and adaptability afforded by their human-like morphology.
no code implementations • 28 Dec 2023 • Chuan Wen, Xingyu Lin, John So, Kai Chen, Qi Dou, Yang Gao, Pieter Abbeel
Learning from demonstration is a powerful method for teaching robots new skills, and having more demonstration data often improves policy learning.
no code implementations • 29 Sep 2023 • Carl Qi, Yilin Wu, Lifan Yu, Haoyue Liu, Bowen Jiang, Xingyu Lin, David Held
We propose to learn a generative model of the tool-use trajectories as a sequence of tool point clouds, which generalizes to different tool shapes.
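As a rough illustration only (not the architecture from the paper above), one way to cast tool-use trajectory generation over point clouds is to predict per-point displacements of the tool cloud conditioned on the scene; all module names and dimensions below are hypothetical:

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling,
    so the embedding is invariant to point ordering and cloud size."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # (B, N, 3)
        return self.mlp(pts).max(dim=1).values             # (B, dim)

class NextToolCloudPredictor(nn.Module):
    """Hypothetical one-step model: predicts a per-point displacement of the
    tool points, conditioned on the current tool and scene clouds, so that a
    trajectory can be rolled out as a sequence of tool point clouds."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.tool_enc = PointCloudEncoder(dim)
        self.scene_enc = PointCloudEncoder(dim)
        self.head = nn.Sequential(nn.Linear(2 * dim + 3, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, tool_pts, scene_pts):                 # (B, N, 3), (B, M, 3)
        ctx = torch.cat([self.tool_enc(tool_pts), self.scene_enc(scene_pts)], dim=-1)
        ctx = ctx.unsqueeze(1).expand(-1, tool_pts.shape[1], -1)
        delta = self.head(torch.cat([tool_pts, ctx], dim=-1))
        return tool_pts + delta                              # next tool point cloud
```

Because the model only consumes raw points, the same sketch applies to tools with different shapes, which is the kind of generalization the snippet above refers to.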
no code implementations • 7 Jul 2023 • Xingyu Lin, John So, Sashwat Mahalingam, Fangchen Liu, Pieter Abbeel
In this work, we present a focused study of the generalization capabilities of the pre-trained visual representations at the categorical level.
no code implementations • 19 Feb 2023 • Zixuan Huang, Xingyu Lin, David Held
In this work, we propose a self-supervised method to finetune a mesh reconstruction model in the real world.
no code implementations • 27 Oct 2022 • Xingyu Lin, Carl Qi, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, David Held
Effective planning of long-horizon deformable object manipulation requires suitable abstractions at both the spatial and temporal levels.
1 code implementation • 6 Jun 2022 • Zixuan Huang, Xingyu Lin, David Held
We evaluate our system both on cloth flattening as well as on cloth canonicalization, in which the objective is to manipulate the cloth into a canonical pose.
no code implementations • ICLR 2022 • Xingyu Lin, Zhiao Huang, Yunzhu Li, Joshua B. Tenenbaum, David Held, Chuang Gan
We consider the problem of sequential robotic manipulation of deformable objects using tools.
1 code implementation • 3 Mar 2022 • Gautham Narayan Narasimhan, Kai Zhang, Ben Eisner, Xingyu Lin, David Held
Liquid state estimation is important for robotics tasks such as pouring; however, estimating the state of transparent liquids is a challenging problem.
no code implementations • 31 Dec 2021 • Hongcheng Guo, Xingyu Lin, Jian Yang, Yi Zhuang, Jiaqi Bai, Tieqiao Zheng, Bo Zhang, Zhoujun Li
Therefore, we propose a unified Transformer-based framework for log anomaly detection, which consists of a pretraining stage and an adapter-based tuning stage.
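As a hedged sketch of what adapter-based tuning of a pretrained Transformer generally looks like (not the paper's actual architecture; the layer wrapping, names, and sizes below are assumptions):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedEncoderLayer(nn.Module):
    """Wrap a pretrained Transformer encoder layer; freeze it and train only the adapter."""
    def __init__(self, layer: nn.TransformerEncoderLayer, d_model: int):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():
            p.requires_grad = False      # keep pretrained weights fixed
        self.adapter = Adapter(d_model)  # only these weights are tuned

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(x))
```

In the tuning stage only the adapters (plus any task head) are updated, which is the usual appeal of adapter-based tuning when adapting one pretrained backbone to new log data.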
no code implementations • 3 Dec 2021 • Zizhao Hu, Ravikiran Chanumolu, Xingyu Lin, Nayela Ayaz, Vincent Chi
The cloze task is widely used to evaluate an NLP system's language understanding ability.
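For context, a cloze item asks a model to fill a blank in a sentence. A minimal sketch of scoring candidate fillers with an off-the-shelf masked language model (the model and example here are placeholders, not the paper's evaluation protocol):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

sentence = f"The chef put the cake in the {tokenizer.mask_token} to bake."
candidates = ["oven", "fridge", "garage"]

inputs = tokenizer(sentence, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Rank candidate fillers by the model's score at the blank position.
scores = {c: logits[tokenizer.convert_tokens_to_ids(c)].item() for c in candidates}
print(max(scores, key=scores.get))  # expected: "oven"
```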
1 code implementation • 21 May 2021 • Xingyu Lin, YuFei Wang, Zixuan Huang, David Held
Robotic manipulation of cloth remains challenging due to the complex dynamics of the cloth, the lack of a low-dimensional state representation, and self-occlusions.
2 code implementations • 14 Nov 2020 • Xingyu Lin, YuFei Wang, Jake Olkin, David Held
Further, we evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms, including dealing with a state representation that has a high intrinsic dimensionality and is partially observable.
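For readers new to such benchmarks, evaluation typically follows the standard Gym-style interaction loop over image observations. The sketch below assumes the classic 4-tuple step API; the environment constructor in the comment is a placeholder, not the benchmark's actual API:

```python
import numpy as np

def evaluate(env, policy, episodes: int = 10) -> float:
    """Average episodic return of a policy on a Gym-style environment whose
    observations (e.g. camera images) only partially reveal the cloth/fluid state."""
    returns = []
    for _ in range(episodes):
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            action = policy(obs)                      # e.g. a vision-based RL agent
            obs, reward, done, info = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# Usage (placeholder names; the benchmark exposes its own env constructors):
# env = make_cloth_env(observation_mode="cam_rgb")
# print(evaluate(env, lambda obs: env.action_space.sample()))
```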
1 code implementation • 13 Nov 2020 • YuFei Wang, Gautham Narayan Narasimhan, Xingyu Lin, Brian Okorn, David Held
Current image-based reinforcement learning (RL) algorithms typically operate on the whole image without performing object-level reasoning.
1 code implementation • NeurIPS 2019 • Xingyu Lin, Harjatin Baweja, George Kantor, David Held
Reinforcement learning is known to be sample inefficient, preventing its application to many real-world problems, especially with high-dimensional observations like images.
no code implementations • 20 May 2019 • Xingyu Lin, Harjatin Singh Baweja, David Held
However, if this policy is trained with reinforcement learning, it is hard to specify a reward function over the high-dimensional observations without a state estimator.
no code implementations • 15 Mar 2019 • Xingyu Lin, Pengsheng Guo, Carlos Florensa, David Held
Robots that are trained to perform a task in a fixed environment often fail when facing unexpected changes to the environment due to a lack of exploration.
no code implementations • 22 May 2017 • Hao Wang, Xingyu Lin, Yimeng Zhang, Tai Sing Lee
Trained on imagined occluded scenarios under the object persistence constraint, our network discovered more subtle and localized image features that were neglected by the original network for object classification, obtaining better separability of different object classes in the feature space.
no code implementations • 31 Mar 2017 • Xingyu Lin, Hao Wang, Zhihao Li, Yimeng Zhang, Alan Yuille, Tai Sing Lee
We develop a model of perceptual similarity judgment based on re-training a deep convolutional neural network (DCNN) that learns to associate different views of each 3D object, capturing the notion of object persistence and continuity in our visual experience.
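One common way to encode such view association is a triplet-style objective that pulls embeddings of two views of the same object together and pushes views of different objects apart; the sketch below is illustrative and may differ from the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def view_association_loss(anchor, positive, negative, margin: float = 1.0):
    """Triplet-style objective: embeddings of two views of the same object
    (anchor, positive) should be closer than a view of a different object
    (negative). One way to impose an object-persistence constraint."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# anchor / positive / negative would be DCNN features of rendered views, e.g.
# anchor = backbone(view_a_of_obj1), positive = backbone(view_b_of_obj1),
# negative = backbone(view_of_obj2).
```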