Search Results for author: Takashi Nagata

Found 2 papers, 1 paper with code

Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability

No code implementations · 18 May 2022 · Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems.

Tasks: Autonomous Driving, Reinforcement Learning (RL)
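The abstract above mentions regularizing input gradients selectively to keep the distilled policy interpretable. The following is a hypothetical toy sketch of that idea, not the authors' implementation: all function names, the linear "policy", and the top-k selection rule are illustrative assumptions. It penalizes only the non-salient input gradients of a toy policy, so the policy is encouraged to rely on a few salient inputs.

```python
# Hedged sketch of selective input gradient regularization (illustrative
# only; the paper's actual method and model are not reproduced here).

def policy(obs):
    # Toy linear "policy": a weighted sum of observation features.
    weights = [0.9, 0.05, -0.8, 0.02]
    return sum(w * x for w, x in zip(weights, obs))

def input_gradient(f, obs, eps=1e-5):
    # Finite-difference gradient of f with respect to the input observation.
    grads = []
    for i in range(len(obs)):
        bumped = list(obs)
        bumped[i] += eps
        grads.append((f(bumped) - f(obs)) / eps)
    return grads

def selective_grad_penalty(grads, k=2):
    # "Selective" regularization (assumed form): penalize all but the top-k
    # gradient magnitudes, pushing saliency toward a few key inputs.
    ranked = sorted(range(len(grads)), key=lambda i: abs(grads[i]), reverse=True)
    keep = set(ranked[:k])
    return sum(g * g for i, g in enumerate(grads) if i not in keep)

obs = [1.0, 2.0, 3.0, 4.0]
grads = input_gradient(policy, obs)
penalty = selective_grad_penalty(grads, k=2)
```

In a real deep RL setting the gradients would come from autodiff on the policy network and the penalty would be added to the distillation loss; the finite-difference version above only illustrates the selection step.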

Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation

1 code implementation · 10 Feb 2021 · Jinwei Xing, Takashi Nagata, Kexin Chen, Xinyun Zou, Emre Neftci, Jeffrey L. Krichmar

To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) that is consistent across multiple domains, and then performs RL training in one source domain based on LUSR in the second stage.

Tasks: Autonomous Driving, Domain Adaptation, +5
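The two-stage idea above can be illustrated with a hedged toy, which is not the paper's model: here the domain encoders are given in closed form rather than learned, and all names (`make_domain`, `render`, `encode`) are assumptions. The point it shows is the payoff of a unified latent: a policy defined only on the latent produces the same action regardless of which domain rendered the observation.

```python
# Illustrative sketch of a latent unified state representation (LUSR):
# each domain renders the same underlying state differently, a
# domain-specific encoder recovers the shared latent, and the policy
# acts on the latent alone.

def make_domain(scale, shift):
    def render(state):
        # Domain-specific observation of the shared underlying state.
        return [scale * s + shift for s in state]
    def encode(obs):
        # Encoder mapping observations back to the unified latent
        # (learned in stage 1 in the paper; given in closed form here).
        return [(o - shift) / scale for o in obs]
    return render, encode

def policy(latent):
    # Toy stage-2 policy defined purely on the unified latent.
    return -latent[0]

state = [0.5, -1.0]
render_a, encode_a = make_domain(scale=2.0, shift=1.0)   # source domain
render_b, encode_b = make_domain(scale=0.5, shift=-3.0)  # target domain

action_a = policy(encode_a(render_a(state)))
action_b = policy(encode_b(render_b(state)))
# Both encoders map to the same latent, so the policy transfers
# across domains without retraining.
```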
