Search Results for author: Jinghuan Shang

Found 8 papers, 5 papers with code

Active Vision Reinforcement Learning under Limited Visual Observability

2 code implementations • NeurIPS 2023 • Jinghuan Shang, Michael S. Ryoo

This learnable reward, assigned by a sensorimotor reward module, incentivizes the sensory policy to select observations that are optimal for inferring its own motor action, inspired by the sensorimotor stage of human development.

Reinforcement Learning
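A minimal sketch of the idea in the snippet above: the sensory policy earns a learnable reward that is high when the observation it selected lets a small predictor recover the motor action that was actually executed. The module name, dimensions, and the negative squared-error reward are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of a sensorimotor-style reward (all names/shapes assumed).
import torch
import torch.nn as nn


class SensorimotorReward(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        # Predicts the motor action from the observation the sensory policy chose.
        self.action_predictor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, act_dim)
        )

    def forward(self, obs_feat: torch.Tensor, motor_action: torch.Tensor) -> torch.Tensor:
        pred = self.action_predictor(obs_feat)
        # Lower action-prediction error -> higher reward for the sensory policy.
        return -((pred - motor_action) ** 2).mean(dim=-1)


# Usage sketch: score a batch of selected observations.
module = SensorimotorReward(obs_dim=64, act_dim=4)
obs_feat = torch.randn(8, 64)       # features of observations picked by the sensory policy
motor_action = torch.randn(8, 4)    # motor actions the agent actually executed
sensory_reward = module(obs_feat, motor_action)  # shape (8,)
```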

Neural Neural Textures Make Sim2Real Consistent

no code implementations • 27 Jun 2022 • Ryan Burgert, Jinghuan Shang, Xiang Li, Michael Ryoo

Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results.

Translation

Learning Viewpoint-Agnostic Visual Representations by Recovering Tokens in 3D Space

1 code implementation • 23 Jun 2022 • Jinghuan Shang, Srijan Das, Michael S. Ryoo

To this end, we propose a 3D Token Representation Layer (3DTRL) that estimates the 3D positional information of the visual tokens and leverages it for learning viewpoint-agnostic representations.

Action Recognition, Image Classification, +1
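A rough illustration of the concept described above: a layer that regresses a pseudo-depth for each visual token from its embedding, combines it with the token's 2D grid position, and injects the resulting 3D position back into the token. All names and shapes are assumed for illustration; this is not the actual 3DTRL layer.

```python
# Illustrative sketch only, not the paper's 3DTRL implementation.
import torch
import torch.nn as nn


class TokenTo3D(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.depth_head = nn.Linear(dim, 1)   # per-token pseudo-depth estimate
        self.pos_embed = nn.Linear(3, dim)    # lift (x, y, depth) back to token dim

    def forward(self, tokens: torch.Tensor, grid_xy: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); grid_xy: (N, 2) normalized 2D positions of the tokens
        depth = self.depth_head(tokens)                                    # (B, N, 1)
        xyz = torch.cat([grid_xy.expand(tokens.size(0), -1, -1), depth], dim=-1)
        return tokens + self.pos_embed(xyz)                                # 3D-aware tokens


# Usage sketch with 14x14 ViT-style patch tokens.
tokens = torch.randn(2, 196, 256)
ys, xs = torch.meshgrid(torch.linspace(0, 1, 14), torch.linspace(0, 1, 14), indexing="ij")
grid_xy = torch.stack([xs.flatten(), ys.flatten()], dim=-1)   # (196, 2)
out = TokenTo3D(256)(tokens, grid_xy)                         # (2, 196, 256)
```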

StARformer: Transformer with State-Action-Reward Representations for Visual Reinforcement Learning

1 code implementation • 12 Oct 2021 • Jinghuan Shang, Kumara Kahatapitiya, Xiang Li, Michael S. Ryoo

Reinforcement Learning (RL) can be considered as a sequence modeling task: given a sequence of past state-action-reward experiences, an agent predicts a sequence of next actions.

Imitation Learning, Inductive Bias, +3
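A short sketch of the sequence-modeling view of RL that the snippet describes: reward, state, and action embeddings are interleaved into one token sequence, a causal Transformer processes it, and actions are read off the state positions. This is a generic Decision-Transformer-style sketch under assumed names and dimensions, not the StARformer architecture itself.

```python
# Generic sequence-modeling-of-RL sketch (assumed dims; not StARformer).
import torch
import torch.nn as nn


class SeqRLPolicy(nn.Module):
    def __init__(self, state_dim: int, act_dim: int, d_model: int = 128):
        super().__init__()
        self.embed_r = nn.Linear(1, d_model)
        self.embed_s = nn.Linear(state_dim, d_model)
        self.embed_a = nn.Linear(act_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_action = nn.Linear(d_model, act_dim)

    def forward(self, rewards, states, actions):
        # rewards: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        tok = torch.stack(
            [self.embed_r(rewards), self.embed_s(states), self.embed_a(actions)], dim=2
        ).reshape(B, 3 * T, -1)                 # interleave r, s, a per timestep
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.encoder(tok, mask=causal)      # causal attention over the sequence
        return self.to_action(h[:, 1::3])       # predict an action from each state token


# Usage sketch: batch of 4 trajectories of length 10.
policy = SeqRLPolicy(state_dim=8, act_dim=2)
pred = policy(torch.randn(4, 10, 1), torch.randn(4, 10, 8), torch.randn(4, 10, 2))
# pred: (4, 10, 2) -- one predicted action per timestep
```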

Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning

no code implementations • 2 Aug 2021 • Jinghuan Shang, Michael S. Ryoo

Third-person imitation learning (TPIL) is the concept of learning action policies by observing other agents in a third-person view (TPV), similar to what humans do.

Imitation Learning, Representation Learning
