1 code implementation • 26 Jan 2023 • Takuya Hiraoka, Takashi Onishi, Yoshimasa Tsuruoka
In reinforcement learning (RL) with experience replay, experiences stored in a replay buffer influence the RL agent's performance.
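As a rough illustration of the setting (a minimal sketch of a generic uniform-sampling replay buffer, not this paper's specific method; the class and parameter names below are ours):

```python
# Minimal sketch of a generic experience replay buffer (illustrative only).
# Which transitions get stored and sampled is what shapes agent performance.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform sampling; prioritized or filtered schemes change
        # which stored experiences actually influence learning.
        return random.sample(self.buffer, batch_size)
```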
no code implementations • 8 Aug 2022 • Shumpei Kubosawa, Takashi Onishi, Yoshimasa Tsuruoka
During the operation of a chemical plant, product quality must be consistently maintained, and the production of off-specification products should be minimized.
no code implementations • 17 Jan 2022 • Shumpei Kubosawa, Takashi Onishi, Makoto Sakahara, Yoshimasa Tsuruoka
The system leverages reinforcement learning and a dynamic simulator that can simulate the railway traffic and passenger flow of a whole line.
2 code implementations • ICLR 2022 • Takuya Hiraoka, Takahisa Imagawa, Taisei Hashimoto, Takashi Onishi, Yoshimasa Tsuruoka
To make REDQ more computationally efficient, we propose DroQ, a variant of REDQ that uses a small ensemble of dropout Q-functions.
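A minimal PyTorch sketch of a dropout Q-function in the spirit of DroQ (dropout and layer normalization inside each Q-network, with a small ensemble); the hidden size, dropout rate, and ensemble size here are illustrative assumptions, not the paper's exact settings:

```python
# Sketch of a dropout Q-function: each hidden layer applies dropout and
# layer normalization, and a small ensemble replaces REDQ's larger one.
import torch
import torch.nn as nn

class DropoutQFunction(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256, p_drop=0.01):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.Dropout(p_drop), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.Dropout(p_drop), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # Q(s, a): concatenate observation and action, output a scalar value.
        return self.net(torch.cat([obs, act], dim=-1))

# A small ensemble (e.g., two Q-functions) instead of REDQ's large one.
ensemble = [DropoutQFunction(obs_dim=8, act_dim=2) for _ in range(2)]
```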
no code implementations • 4 Jun 2020 • Takuya Hiraoka, Takahisa Imagawa, Voot Tangkaratt, Takayuki Osa, Takashi Onishi, Yoshimasa Tsuruoka
Model-based meta-reinforcement learning (RL) methods have recently been shown to be a promising approach to improving the sample efficiency of RL in multi-task settings.
1 code implementation • NeurIPS 2019 • Takuya Hiraoka, Takahisa Imagawa, Tatsuya Mori, Takashi Onishi, Yoshimasa Tsuruoka
While there are several methods to learn options that are robust against the uncertainty of model parameters, these methods consider only the worst case or the average (ordinary) case when learning options.
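One way to interpolate between those two extremes is a Conditional Value at Risk (CVaR) objective over returns obtained under sampled model parameters; the sketch below is our illustration of that general idea, not the paper's exact formulation:

```python
# Illustrative CVaR objective: the mean of the worst alpha-fraction of
# returns. alpha = 1.0 recovers the average case; alpha -> 0 approaches
# the worst case. Values and function names here are ours.
import numpy as np

def cvar_objective(returns, alpha=0.3):
    """Mean return over the worst alpha-fraction of sampled models."""
    returns = np.sort(np.asarray(returns))          # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst tail
    return returns[:k].mean()

returns_per_model = [3.2, 5.1, 1.4, 4.8, 2.7]  # returns under sampled params
print(cvar_objective(returns_per_model, alpha=0.4))  # mean of the two worst
```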
no code implementations • 6 Mar 2019 • Shumpei Kubosawa, Takashi Onishi, Yoshimasa Tsuruoka
Chemical plants are complex dynamical systems consisting of many components for manipulation and sensing, whose state transitions depend on various factors such as time, disturbances, and operation procedures.
no code implementations • 29 Sep 2018 • Takuya Hiraoka, Takashi Onishi, Takahisa Imagawa, Yoshimasa Tsuruoka
In this paper, we propose a framework that can automatically refine symbol grounding functions and a high-level planner to reduce human effort for designing these modules.
no code implementations • 7 Sep 2018 • Seydou Ba, Takuya Hiraoka, Takashi Onishi, Toru Nakata, Yoshimasa Tsuruoka
The evaluation results show that, with variable simulation times, the proposed approach outperforms conventional MCTS in the evaluated continuous decision-space tasks and improves the performance of MCTS in most of the ALE tasks.
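A hedged sketch of plain UCT-style MCTS where the per-decision simulation budget is a variable argument rather than a fixed constant; how the budget is actually chosen in the paper is not reproduced here, and terminal-state handling is omitted for brevity:

```python
# UCT-style MCTS with a variable per-decision simulation budget (sketch).
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCB1 score (exploitation + exploration).
    return max(node.children,
               key=lambda ch: ch.value / (ch.visits + 1e-9)
               + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

def mcts_decide(root, actions, step, rollout, n_simulations):
    # n_simulations varies per decision instead of being fixed.
    for _ in range(n_simulations):
        node = root
        while node.children:                  # selection
            node = uct_select(node)
        if node.visits > 0:                   # expand a visited leaf once
            node.children = [Node(step(node.state, a), parent=node)
                             for a in actions]
            node = node.children[0]
        reward = rollout(node.state)          # simulation from the leaf
        while node is not None:               # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits)
```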
no code implementations • 28 Jun 2018 • Kazeto Yamamoto, Takashi Onishi, Yoshimasa Tsuruoka
One potential solution to this problem is to combine reinforcement learning with automated symbolic planning and utilize prior knowledge of the domain.
no code implementations • 19 Jun 2018 • Shota Motoura, Kazeto Yamamoto, Shumpei Kubosawa, Takashi Onishi
This paper proposes a method to translate multilevel flow modeling (MFM) into a first-order language (FOL), which enables the utilisation of logical techniques, such as inference engines and abductive reasoners.
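To give the flavor of such a translation (the predicate and constant names below are hypothetical and do not reproduce the paper's actual rules): an MFM transport function tra1 linking a source sou1 to a sink sin1 might be rendered as an FOL axiom.

```latex
% Illustrative only: hypothetical predicates, not the paper's translation.
\forall t \,\bigl( \mathit{active}(\mathit{tra1}, t)
  \rightarrow \mathit{flow}(\mathit{sou1}, \mathit{sin1}, t) \bigr)
```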