Search Results for author: Takahisa Imagawa

Found 7 papers, 2 papers with code

Unsupervised Discovery of Continuous Skills on a Sphere

no code implementations21 May 2023 Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

However, most existing methods learn a finite number of discrete skills, so the variety of behaviors the learned skills can exhibit is limited; this work instead learns continuous skills parameterized on a sphere (a rough sketch follows below).

Tasks: Unsupervised Reinforcement Learning
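
As an illustrative aside (not the paper's code, which conditions skill discovery on a learned objective), a continuous, sphere-valued skill space can be sampled by normalizing a Gaussian draw to the unit sphere and feeding the resulting skill vector to a skill-conditioned policy. All names and dimensions here are hypothetical:

```python
# Minimal sketch, assuming uniform sampling on the unit sphere
# (the paper's actual skill distribution may differ).
import numpy as np

def sample_skill(dim: int = 3) -> np.ndarray:
    """Sample a skill uniformly from the unit sphere S^(dim-1)."""
    z = np.random.normal(size=dim)
    return z / np.linalg.norm(z)

skill = sample_skill()                        # continuous skill vector, |skill| = 1
obs = np.zeros(8)                             # placeholder observation
policy_input = np.concatenate([obs, skill])   # skill-conditioned policy input
```

Because the skill lives on a continuous manifold rather than in a finite discrete set, nearby skill vectors can produce smoothly varying behaviors.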

Dropout Q-Functions for Doubly Efficient Reinforcement Learning

2 code implementations ICLR 2022 Takuya Hiraoka, Takahisa Imagawa, Taisei Hashimoto, Takashi Onishi, Yoshimasa Tsuruoka

To make REDQ more computationally efficient, we propose DroQ, a variant of REDQ that uses a small ensemble of dropout Q-functions (a minimal sketch follows below).

Tasks: Computational Efficiency, Q-Learning, +2
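
A hedged sketch of the idea, not the authors' implementation: each critic is a small MLP with dropout and layer normalization, and a small ensemble (e.g. two members, versus REDQ's larger one) is combined by taking the minimum, as in clipped double Q-learning. Dimensions and hyperparameters below are placeholders:

```python
# Sketch of a dropout Q-function in the spirit of DroQ (assumed details:
# hidden size, dropout rate, and obs/act dimensions are illustrative).
import torch
import torch.nn as nn

class DropoutQFunction(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256, p: float = 0.01):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.Dropout(p), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.Dropout(p), nn.LayerNorm(hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))

# Small ensemble; the pessimistic value is the minimum over members.
critics = [DropoutQFunction(obs_dim=17, act_dim=6) for _ in range(2)]
obs, act = torch.randn(32, 17), torch.randn(32, 6)
q_min = torch.min(torch.stack([q(obs, act) for q in critics]), dim=0).values
```

The dropout layers inject the uncertainty that REDQ otherwise obtains from a large ensemble, which is where the computational savings come from.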

Meta-Model-Based Meta-Policy Optimization

no code implementations4 Jun 2020 Takuya Hiraoka, Takahisa Imagawa, Voot Tangkaratt, Takayuki Osa, Takashi Onishi, Yoshimasa Tsuruoka

Model-based meta-reinforcement learning (RL) methods have recently been shown to be a promising approach to improving the sample efficiency of RL in multi-task settings.

Tasks: Continuous Control, Meta-Learning, +3

Optimistic Proximal Policy Optimization

no code implementations25 Jun 2019 Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

Reinforcement Learning, a machine learning framework for training an autonomous agent based on rewards, has shown outstanding results in various domains.

Tasks: BIG-bench Machine Learning, reinforcement-learning, +1

Learning Robust Options by Conditional Value at Risk Optimization

1 code implementation NeurIPS 2019 Takuya Hiraoka, Takahisa Imagawa, Tatsuya Mori, Takashi Onishi, Yoshimasa Tsuruoka

While several existing methods learn options that are robust to uncertainty in model parameters, they consider only either the worst case or the average (ordinary) case when learning options; conditional value at risk (CVaR) interpolates between the two.
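
As a brief illustration (not the paper's optimization procedure), CVaR at level alpha is the mean of the worst alpha-fraction of sampled returns: alpha near 0 recovers the worst case, alpha = 1 the average. The function and data below are placeholders:

```python
# Illustrative CVaR estimate from sampled returns (assumed setup:
# higher return is better, so the risk lies in the lower tail).
import numpy as np

def cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the lowest alpha-fraction of sampled returns."""
    k = max(1, int(np.ceil(alpha * len(returns))))
    worst = np.sort(returns)[:k]       # ascending sort -> lower tail
    return float(worst.mean())

returns = np.random.normal(loc=1.0, scale=0.5, size=1000)  # dummy rollouts
print(cvar(returns, alpha=0.1))  # objective a robust learner would maximize
```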

Refining Manually-Designed Symbol Grounding and High-Level Planning by Policy Gradients

no code implementations29 Sep 2018 Takuya Hiraoka, Takashi Onishi, Takahisa Imagawa, Yoshimasa Tsuruoka

In this paper, we propose a framework that automatically refines symbol grounding functions and a high-level planner, reducing the human effort required to design these modules.

Tasks: Decision Making
