Search Results for author: Toshisada Mariyama

Found 4 papers, 1 paper with code

Stability-Certified Reinforcement Learning via Spectral Normalization

no code implementations • 26 Dec 2020 • Ryoichi Takase, Nobuyuki Yoshikawa, Toshisada Mariyama, Takeshi Tsuchiya

While it explicitly includes the stability condition, the first method may yield insufficient performance from the neural network controller because of how strict that condition is.

Reinforcement Learning (RL)
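
A minimal sketch of the mechanism named in the title, assuming PyTorch: spectral normalization rescales each weight matrix by its largest singular value, bounding the Lipschitz constant of the policy network, which is the property a stability certificate can build on. The layer sizes and architecture below are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

def make_policy(obs_dim, act_dim, hidden=64):
    # spectral_norm divides each weight matrix by its largest singular value,
    # so every linear map (and, with 1-Lipschitz Tanh, the whole network)
    # has a bounded Lipschitz constant.
    return nn.Sequential(
        nn.utils.spectral_norm(nn.Linear(obs_dim, hidden)),
        nn.Tanh(),
        nn.utils.spectral_norm(nn.Linear(hidden, hidden)),
        nn.Tanh(),
        nn.utils.spectral_norm(nn.Linear(hidden, act_dim)),
    )

policy = make_policy(obs_dim=4, act_dim=1)
action = policy(torch.randn(1, 4))  # forward pass uses the normalized weights
```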

Deep Reactive Planning in Dynamic Environments

no code implementations • 31 Oct 2020 • Kei Ota, Devesh K. Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko SASAKI, Toshisada Mariyama, Daniel Nikovski

The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution.

Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?

1 code implementation • ICML 2020 • Kei Ota, Tomoaki Oiki, Devesh K. Jha, Toshisada Mariyama, Daniel Nikovski

We believe that stronger feature propagation together with larger networks (and thus larger search space) allows RL agents to learn more complex functions of states and thus improves the sample efficiency.

Decision Making, Reinforcement Learning (RL) +1
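
A minimal sketch of the "stronger feature propagation" idea in the snippet above, assuming PyTorch: each block concatenates its input with newly computed features (DenseNet-style), so the representation handed to the RL agent has a higher dimensionality than the raw state. This is an illustration under those assumptions, not the authors' released implementation; the class names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class DenseFeatureBlock(nn.Module):
    """Concatenate the block's input with its learned features."""
    def __init__(self, in_dim, growth=32):
        super().__init__()
        self.fc = nn.Linear(in_dim, growth)
        self.out_dim = in_dim + growth  # the input is propagated alongside new features

    def forward(self, x):
        return torch.cat([x, torch.relu(self.fc(x))], dim=-1)

class StateFeatureExtractor(nn.Module):
    """Stack of dense blocks producing a higher-dimensional state representation."""
    def __init__(self, state_dim, num_blocks=2, growth=32):
        super().__init__()
        blocks, dim = [], state_dim
        for _ in range(num_blocks):
            block = DenseFeatureBlock(dim, growth)
            blocks.append(block)
            dim = block.out_dim
        self.blocks = nn.Sequential(*blocks)
        self.out_dim = dim

    def forward(self, state):
        return self.blocks(state)

extractor = StateFeatureExtractor(state_dim=17)   # e.g. a MuJoCo-style observation
features = extractor(torch.randn(1, 17))          # shape (1, 17 + 2 * 32)
```

The RL agent's policy and value networks would then consume `features` instead of the raw state, which is where the larger search space mentioned in the abstract comes from.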

Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning

no code implementations • 13 Mar 2019 • Kei Ota, Devesh K. Jha, Tomoaki Oiki, Mamoru Miura, Takashi Nammoto, Daniel Nikovski, Toshisada Mariyama

Our experiments show that our RL agent trained with a reference path outperformed a model-free PID controller of the type commonly used on many robotic platforms for trajectory tracking.

Motion Planning, Reinforcement Learning (RL) +1
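
A minimal sketch of what "trained with a reference path" can look like in practice, assuming NumPy: the reward penalizes the agent's distance to a geometric reference plus a small control-effort term. The weights and the closest-waypoint distance metric are assumptions for illustration, not the reward the authors report.

```python
import numpy as np

def reference_tracking_reward(state_xyz, reference_path, action,
                              w_dist=1.0, w_effort=0.01):
    # distance to the closest waypoint on the reference path
    dists = np.linalg.norm(reference_path - state_xyz, axis=1)
    tracking_error = dists.min()
    effort = np.sum(np.square(action))          # penalize large control inputs
    return -(w_dist * tracking_error + w_effort * effort)

# straight-line reference with 50 waypoints (illustrative)
path = np.linspace([0.0, 0.0, 0.0], [1.0, 1.0, 0.5], num=50)
r = reference_tracking_reward(np.array([0.1, 0.0, 0.0]), path, np.zeros(7))
```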
