Search Results for author: Sotetsu Koyamada

Found 7 papers, 2 papers with code

End-to-End Policy Gradient Method for POMDPs and Explainable Agents

no code implementations · 19 Apr 2023 · Soichiro Nishimori, Sotetsu Koyamada, Shin Ishii

We propose an RL algorithm that estimates hidden states via end-to-end training and visualizes the estimation as a state-transition graph.

Autonomous Driving, Decision Making +2

Suphx: Mastering Mahjong with Deep Reinforcement Learning

no code implementations · 30 Mar 2020 · Junjie Li, Sotetsu Koyamada, Qiwei Ye, Guoqing Liu, Chao Wang, Ruihan Yang, Li Zhao, Tao Qin, Tie-Yan Liu, Hsiao-Wuen Hon

Artificial Intelligence (AI) has achieved great success in many domains, and game AI is widely regarded as its beachhead since the dawn of AI.

Reinforcement Learning (RL)

Neural Sequence Model Training via $α$-divergence Minimization

1 code implementation · 30 Jun 2017 · Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii

We propose a new neural sequence model training method in which the objective function is defined by $\alpha$-divergence.

Machine Translation, Reinforcement Learning +2
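The α-divergence family interpolates between the two KL divergences that underlie maximum-likelihood and RL-style sequence training. As a minimal sketch of the quantity itself (not the paper's training code; the function name and the Amari parameterization are illustrative assumptions), the α-divergence between two discrete distributions can be computed as:

```python
import numpy as np

def alpha_divergence(p, q, alpha, eps=1e-12):
    """Amari alpha-divergence between discrete distributions p and q.

    As alpha -> 1 it approaches KL(p || q) (the maximum-likelihood
    direction); as alpha -> 0 it approaches KL(q || p).
    """
    p = np.asarray(p, dtype=float) + eps  # avoid log(0) / 0**alpha issues
    q = np.asarray(q, dtype=float) + eps
    if np.isclose(alpha, 1.0):
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):
        return float(np.sum(q * np.log(q / p)))
    return float((1.0 - np.sum(p ** alpha * q ** (1.0 - alpha)))
                 / (alpha * (1.0 - alpha)))
```

Sweeping `alpha` between 0 and 1 then trades off the two divergence directions, which is the sense in which a single objective can interpolate between ML-style and RL-style training.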

Deep learning of fMRI big data: a novel approach to subject-transfer decoding

no code implementations · 31 Jan 2015 · Sotetsu Koyamada, Yumi Shikauchi, Ken Nakae, Masanori Koyama, Shin Ishii

Our PSA successfully visualized the subject-independent features contributing to the subject-transferability of the trained decoder.

Brain Decoding, Subject Transfer

Principal Sensitivity Analysis

no code implementations · 21 Dec 2014 · Sotetsu Koyamada, Masanori Koyama, Ken Nakae, Shin Ishii

We then visualize the PSMs to demonstrate the PSA's ability to decompose the knowledge acquired by the trained classifiers.

BIG-bench Machine Learning
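One common reading of principal sensitivity analysis is that the principal sensitivity maps (PSMs) are the leading eigenvectors of the covariance of the classifier's input gradients, ranked by how much sensitivity they explain. A small NumPy sketch under that assumption (function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def principal_sensitivity_maps(grads, k=3):
    """grads: (n_samples, n_features) array of gradients of a trained
    classifier's output with respect to its inputs.

    Returns the top-k eigenvalues and the corresponding principal
    sensitivity maps, i.e. eigenvectors of K = E[g g^T].
    """
    # Gradient covariance matrix (n_features x n_features).
    K = grads.T @ grads / grads.shape[0]
    # eigh returns eigenvalues in ascending order for symmetric K.
    eigvals, eigvecs = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1][:k]  # take the k largest
    return eigvals[order], eigvecs[:, order].T  # maps as rows
```

Visualizing each returned map over the input space (e.g. as a brain image for fMRI features) is then what decomposes the classifier's acquired knowledge into interpretable components.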
