1 code implementation • 4 Jul 2023 • Xiang Li, Varun Belagali, Jinghuan Shang, Michael S. Ryoo
Sequence modeling approaches have shown promising results in robot imitation learning.
2 code implementations • NeurIPS 2023 • Jinghuan Shang, Michael S. Ryoo
This learnable reward, assigned by a sensorimotor reward module, incentivizes the sensory policy to select observations that are optimal for inferring its own motor actions, inspired by the sensorimotor stage of human development.
no code implementations • 27 Jun 2022 • Ryan Burgert, Jinghuan Shang, Xiang Li, Michael Ryoo
Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results.
1 code implementation • 23 Jun 2022 • Jinghuan Shang, Srijan Das, Michael S. Ryoo
To this end, we propose a 3D Token Representation Layer (3DTRL) that estimates the 3D positional information of the visual tokens and leverages it for learning viewpoint-agnostic representations.
2 code implementations • 10 Jun 2022 • Xiang Li, Jinghuan Shang, Srijan Das, Michael S. Ryoo
We investigate whether self-supervised learning (SSL) can improve online reinforcement learning (RL) from pixels.
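The idea of combining SSL with pixel-based RL can be sketched as a joint objective: a shared encoder is trained with the RL loss plus an auxiliary self-supervised consistency loss on augmented views of the same observation. This is a minimal, hypothetical illustration in NumPy; the `augment`, `encode`, and loss functions are stand-ins, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(obs):
    # Stand-in for an image augmentation (e.g. random shift): small noise.
    return obs + rng.normal(scale=0.01, size=obs.shape)

def encode(obs, W):
    # Shared encoder used by both the RL head and the SSL objective.
    return np.tanh(obs @ W)

def ssl_loss(obs, W):
    # Self-supervised consistency: two augmented views of the same
    # observation should map to nearby latents.
    z1 = encode(augment(obs), W)
    z2 = encode(augment(obs), W)
    return float(np.mean((z1 - z2) ** 2))

def rl_loss(obs, W):
    # Placeholder for an actor/critic loss computed on the shared latents.
    z = encode(obs, W)
    return float(np.mean(z ** 2))

obs = rng.normal(size=(32, 64))           # batch of flattened pixel features
W = rng.normal(scale=0.1, size=(64, 16))  # encoder weights
lam = 1.0                                 # auxiliary-loss weight (assumption)
total = rl_loss(obs, W) + lam * ssl_loss(obs, W)
```

The key design point is that both losses backpropagate through the same encoder, so the SSL term shapes the representation the RL agent learns from.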
no code implementations • 24 May 2022 • Xueying Bai, Jinghuan Shang, Yifan Sun, Niranjan Balasubramanian
Continual learning (CL) aims to learn a sequence of tasks over time, with data distributions shifting from one task to another.
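The continual-learning setting described above can be illustrated with a toy loop: tasks arrive one after another, each drawn from a shifted input distribution, and a single model is updated sequentially. This is a hypothetical sketch of the setting only, not the paper's method; `make_task` and the linear model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift, n=200, d=5):
    # Each task draws inputs from a distribution shifted by `shift`.
    X = rng.normal(loc=shift, size=(n, d))
    y = (X.sum(axis=1) > shift * d).astype(float)
    return X, y

def sgd_step(w, X, y, lr=0.01):
    # One least-squares gradient step on a linear model.
    grad = X.T @ (X @ w - y) / len(X)
    return w - lr * grad

w = np.zeros(5)
for shift in [0.0, 1.0, 2.0]:  # tasks arrive sequentially over time
    X, y = make_task(shift)
    for _ in range(50):
        w = sgd_step(w, X, y)
    # Under naive sequential training like this, performance on earlier
    # tasks typically degrades -- the forgetting problem CL methods address.
```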
1 code implementation • 12 Oct 2021 • Jinghuan Shang, Kumara Kahatapitiya, Xiang Li, Michael S. Ryoo
Reinforcement Learning (RL) can be considered as a sequence modeling task: given a sequence of past state-action-reward experiences, an agent predicts a sequence of next actions.
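The sequence-modeling view of RL can be made concrete: a trajectory is flattened into an interleaved token stream of rewards, states, and actions, and a causal model predicts the next action from everything before it. The sketch below uses a trivial placeholder predictor in place of a real Transformer; all function names here are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trajectory(T, state_dim):
    # Toy rollout: continuous states, discrete actions, scalar rewards.
    states = rng.normal(size=(T, state_dim))
    actions = rng.integers(0, 4, size=T)
    rewards = rng.normal(size=T)
    return states, actions, rewards

def to_sequence(states, actions, rewards):
    # Interleave (reward, state, action) triples into one flat token
    # sequence, mirroring the sequence-modeling formulation of RL.
    tokens = []
    for r, s, a in zip(rewards, states, actions):
        tokens.extend([("r", r), ("s", s), ("a", a)])
    return tokens

def predict_next_action(tokens, num_actions=4):
    # Placeholder policy head: majority vote over past action tokens.
    # A real model (e.g. a causal Transformer) would attend over all tokens.
    past = [t[1] for t in tokens if t[0] == "a"]
    counts = np.bincount(past, minlength=num_actions)
    return int(np.argmax(counts))

states, actions, rewards = make_trajectory(T=8, state_dim=3)
seq = to_sequence(states, actions, rewards)
next_a = predict_next_action(seq)
```

The point of the formulation is that action prediction becomes ordinary autoregressive sequence prediction, so architectures and training recipes from language modeling carry over.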
no code implementations • 2 Aug 2021 • Jinghuan Shang, Michael S. Ryoo
Third-person imitation learning (TPIL) is the concept of learning action policies by observing other agents in a third-person view (TPV), similar to what humans do.