Search Results for author: Tzu-Yun Shann

Found 5 papers, 0 papers with code

Adversarial Active Exploration for Inverse Dynamics Model Learning

no code implementations ICLR 2019 Zhang-Wei Hong, Tsu-Jui Fu, Tzu-Yun Shann, Yi-Hsiang Chang, Chun-Yi Lee

Our framework consists of a deep reinforcement learning (DRL) agent and an inverse dynamics model that contest with each other.

Imitation Learning
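The adversarial setup described above can be sketched in a few lines: the inverse dynamics model tries to recover the action from a state transition, while the agent is rewarded wherever that model's prediction error is large. This is a minimal illustrative sketch, not the paper's implementation; the linear model, learning rate, and function names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


class InverseDynamicsModel:
    """Toy inverse dynamics model: predicts the action a taken between
    states s and s' via a linear map (illustrative stand-in for a deep net)."""

    def __init__(self, state_dim, action_dim, lr=0.1):
        self.W = rng.normal(scale=0.1, size=(action_dim, 2 * state_dim))
        self.lr = lr

    def predict(self, s, s_next):
        return self.W @ np.concatenate([s, s_next])

    def update(self, s, s_next, a):
        # One gradient step on the squared prediction error.
        x = np.concatenate([s, s_next])
        err = self.predict(s, s_next) - a
        self.W -= self.lr * np.outer(err, x)
        return float(np.sum(err ** 2))


def exploration_reward(model, s, s_next, a):
    # The DRL agent is rewarded where the model's error is large, pushing
    # it toward transitions the inverse dynamics model has not yet mastered.
    err = model.predict(s, s_next) - a
    return float(np.sum(err ** 2))
```

As the model improves on frequently visited transitions, the exploration reward there shrinks, so the agent is driven to seek out novel ones.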

Diversity-Driven Exploration Strategy for Deep Reinforcement Learning

no code implementations NeurIPS 2018 Zhang-Wei Hong, Tzu-Yun Shann, Shih-Yang Su, Yi-Hsiang Chang, Chun-Yi Lee

Efficient exploration remains a challenging research problem in reinforcement learning, especially when an environment contains large state spaces, deceptive local optima, or sparse rewards.

Efficient Exploration reinforcement-learning +1

Virtual-to-Real: Learning to Control in Visual Semantic Segmentation

no code implementations 1 Feb 2018 Zhang-Wei Hong, Chen Yu-Ming, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Hsuan-Kung Yang, Brian Hsi-Lin Ho, Chih-Chieh Tu, Yueh-Chuan Chang, Tsu-Ching Hsiao, Hsin-Wei Hsiao, Sih-Pin Lai, Chun-Yi Lee

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus, recent advances in robot learning advocate the use of simulators as the training platform.

Image Segmentation Segmentation +1

A Deep Policy Inference Q-Network for Multi-Agent Systems

no code implementations 21 Dec 2017 Zhang-Wei Hong, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Chun-Yi Lee

DPIQN incorporates the learned policy features as a hidden vector into its own deep Q-network (DQN), enabling it to predict better Q values for the controllable agents than state-of-the-art deep reinforcement learning models.
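The conditioning step can be sketched as follows: an inferred feature vector summarizing another agent's policy is concatenated with the state representation before the Q-head. This is a minimal sketch under assumed shapes and linear/tanh layers; the helper names and dimensions are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)


def policy_feature(opp_obs, W_feat):
    # Hypothetical policy-inference head: embeds an opponent's observed
    # behaviour into a feature vector (stand-in for the learned module).
    return np.tanh(W_feat @ opp_obs)


def q_values(state, opp_feature, W_q):
    # Core DPIQN idea: the Q-network conditions on the inferred policy
    # features by taking the concatenated [state; feature] vector as input.
    x = np.concatenate([state, opp_feature])
    return W_q @ x


# Assumed toy dimensions for illustration.
state_dim, obs_dim, feat_dim, n_actions = 4, 4, 3, 2
W_feat = rng.normal(size=(feat_dim, obs_dim))
W_q = rng.normal(size=(n_actions, state_dim + feat_dim))
```

Because the Q-head sees the opponent's inferred features, the same state can map to different Q values depending on how the other agent is behaving.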
