Search Results for author: Yoshihiro Okawa

Found 3 papers, 2 papers with code

Safe Exploration Method for Reinforcement Learning under Existence of Disturbance

1 code implementation • 30 Sep 2022 • Yoshihiro Okawa, Tomotake Sasaki, Hitoshi Yanami, Toru Namerikawa

We define safety during learning as satisfaction of constraint conditions explicitly defined in terms of the state, and we propose a safe exploration method that uses partial prior knowledge of the controlled object and the disturbance.

reinforcement-learning • Reinforcement Learning (RL) +1
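
The excerpt above describes gating exploratory inputs so that explicit state constraints remain satisfied, using partial prior knowledge of the plant and the disturbance. The following is a minimal sketch of that idea, not the authors' implementation: the nominal model, the disturbance bound, and all numbers are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): accept an exploratory action only
# if the predicted next state, worsened by a known disturbance bound, stays
# inside an explicit state constraint. A_nom, B_nom, d_max, x_max are assumed.
import numpy as np

A_nom = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed nominal dynamics
B_nom = np.array([[0.0], [0.1]])             # assumed nominal input matrix
d_max = 0.05                                  # assumed per-state disturbance bound
x_max = np.array([1.0, 0.5])                  # state constraint |x_i| <= x_max_i

def is_safe(x, u):
    """Worst-case check: nominal next state plus disturbance bound
    must satisfy the state constraint."""
    x_next_nominal = A_nom @ x + B_nom @ u
    return np.all(np.abs(x_next_nominal) + d_max <= x_max)

def safe_exploration_action(x, exploratory_u, fallback_u):
    """Use the exploratory input only if it passes the safety check;
    otherwise fall back to a conservative (e.g. stabilizing) input."""
    return exploratory_u if is_safe(x, exploratory_u) else fallback_u

x = np.array([0.3, 0.1])
u_explore = np.array([0.8])     # candidate exploratory input
u_fallback = np.array([0.0])    # conservative fallback
print(safe_exploration_action(x, u_explore, u_fallback))
```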

Two-step reinforcement learning for model-free redesign of nonlinear optimal regulator

1 code implementation • 5 Mar 2021 • Mei Minami, Yuka Masumoto, Yoshihiro Okawa, Tomotake Sasaki, Yutaka Hori

To overcome this limitation, we propose a model-free two-step design approach that improves the transient learning performance of RL in an optimal regulator redesign problem for unknown nonlinear systems.

Offline RL • reinforcement-learning +1
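
The excerpt describes a two-step design: a baseline regulator is obtained first and then refined model-free on the unknown nonlinear system to improve transient learning performance. The sketch below only illustrates that two-step structure; the plant, cost, and the crude random-search refinement are assumptions, not the paper's method.

```python
# Illustrative two-step structure (not the paper's algorithm):
# step 1 designs a baseline gain from rough prior knowledge,
# step 2 refines it model-free by sampling rollouts of the true plant.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # assumed unknown nonlinear plant: only sampled, never used analytically
    return x + 0.1 * np.array([x[1], -np.sin(x[0]) + u])

def rollout_cost(K, x0=np.array([1.0, 0.0]), steps=50):
    """Quadratic regulation cost of state feedback u = -K x on the true plant."""
    x, cost = x0.copy(), 0.0
    for _ in range(steps):
        u = float(-K @ x)
        cost += x @ x + 0.1 * u * u
        x = true_dynamics(x, u)
    return cost

# Step 1: baseline gain (e.g. from a linearized or identified model).
K = np.array([1.0, 1.0])

# Step 2: model-free refinement by perturbing the gain and keeping improvements.
best = rollout_cost(K)
for _ in range(200):
    K_try = K + 0.05 * rng.standard_normal(2)
    c = rollout_cost(K_try)
    if c < best:
        K, best = K_try, c
print("refined gain:", K, "cost:", best)
```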

Automatic Exploration Process Adjustment for Safe Reinforcement Learning with Joint Chance Constraint Satisfaction

no code implementations • 5 Mar 2021 • Yoshihiro Okawa, Tomotake Sasaki, Hidenao Iwane

In reinforcement learning (RL) algorithms, exploratory control inputs are applied during learning to acquire knowledge for decision making and control, even though the true dynamics of the controlled object are unknown.

Decision Making • Object +3
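
The paper's title refers to adjusting the exploration process so that a joint chance constraint is satisfied. A minimal sketch of one way such an adjustment can be framed is shown below; the per-step budget via Boole's inequality and the Gaussian noise model are illustrative simplifications, not the paper's derivation.

```python
# Minimal sketch (assumptions throughout): shrink the std of Gaussian
# exploration noise so that, by a union bound over the episode, the joint
# probability of the noise pushing the state past a margin stays below a target.
import math

delta_joint = 0.05                   # allowed joint violation probability
horizon = 100                        # episode length
delta_step = delta_joint / horizon   # per-step budget from Boole's inequality

def max_exploration_std(margin, delta=delta_step):
    """Largest noise std s such that P(|N(0, s^2)| > margin) <= delta,
    using the two-sided Gaussian tail erfc(margin / (s * sqrt(2)))."""
    lo, hi = 1e-6, margin * 10
    for _ in range(60):              # bisection on the std
        s = 0.5 * (lo + hi)
        if math.erfc(margin / (s * math.sqrt(2))) <= delta:
            lo = s                   # feasible: try a larger std
        else:
            hi = s                   # infeasible: shrink
    return lo

print(max_exploration_std(margin=0.2))
```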
