Search Results for author: Hsuan-Kung Yang

Found 9 papers, 1 paper with code

Vision based Virtual Guidance for Navigation

no code implementations • 5 Mar 2023 • Hsuan-Kung Yang, Yu-Ying Chen, Tsung-Chih Chiang, Chia-Chuan Hsu, Chun-Chia Huang, Chun-Wei Huang, Jou-Min Liu, Ting-Ru Liu, Tsu-Ching Hsiao, Chun-Yi Lee

This paper explores the impact of virtual guidance on mid-level representation-based navigation, where an agent performs navigation tasks based solely on visual observations.

Unity

Pixel-Wise Prediction based Visual Odometry via Uncertainty Estimation

no code implementations • 18 Aug 2022 • Hao-Wei Chen, Ting-Hsuan Liao, Hsuan-Kung Yang, Chun-Yi Lee

This paper introduces pixel-wise prediction based visual odometry (PWVO), a dense prediction task that estimates translation and rotation values for every pixel in its input observations.

Translation · Visual Odometry
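The snippet above describes PWVO only at a high level. As a rough illustration of dense per-pixel motion regression with uncertainty, here is a minimal PyTorch sketch; the layer sizes, the two-frame input, and the uncertainty-weighted loss are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class PixelWiseVO(nn.Module):
    """Hypothetical dense head: per-pixel translation (3 channels), rotation
    (3 channels), and a log-variance map for down-weighting unreliable pixels."""
    def __init__(self, in_channels=6):  # e.g. two stacked RGB frames
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.motion_head = nn.Conv2d(64, 6, 1)   # 3 translation + 3 rotation per pixel
        self.logvar_head = nn.Conv2d(64, 6, 1)   # per-pixel predicted uncertainty

    def forward(self, frames):
        feat = self.encoder(frames)
        return self.motion_head(feat), self.logvar_head(feat)

def heteroscedastic_loss(pred, logvar, target):
    # Standard uncertainty-weighted L2 (an assumption, not the paper's loss):
    # confident pixels are penalized harder for errors than uncertain ones.
    return (torch.exp(-logvar) * (pred - target) ** 2 + logvar).mean()
```

In such a setup, the per-pixel predictions could be reduced to a single camera-motion estimate, for example by an uncertainty-weighted average over pixels.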

Investigation of Factorized Optical Flows as Mid-Level Representations

no code implementations • 9 Mar 2022 • Hsuan-Kung Yang, Tsu-Ching Hsiao, Ting-Hsuan Liao, Hsu-Shen Liu, Li-Yuan Tsao, Tzu-Wen Wang, Shan-Ya Yang, Yu-Wen Chen, Huang-Ru Liao, Chun-Yi Lee

In this paper, we introduce a new concept of incorporating factorized flow maps as mid-level representations for bridging the perception and control modules in modular learning-based robotic frameworks.

Optical Flow Estimation · reinforcement-learning +1
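To make the modular perception-to-control idea concrete, the following PyTorch sketch shows a hypothetical pipeline in which a frozen flow module produces factorized flow maps that a small policy network consumes in place of raw RGB; the module interface, channel counts, and network sizes are all assumptions.

```python
import torch
import torch.nn as nn

class FlowToPolicy(nn.Module):
    """Hypothetical modular pipeline: a (frozen) perception module produces
    factorized flow maps, and a small policy head maps them to action logits."""
    def __init__(self, flow_module, num_actions, flow_channels=4):
        super().__init__()
        self.flow_module = flow_module          # perception: e.g. direction + magnitude maps
        self.policy = nn.Sequential(            # control: consumes flow, not raw pixels
            nn.Conv2d(flow_channels, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_actions),
        )

    def forward(self, obs_pair):
        with torch.no_grad():                   # perception and control stay decoupled
            flow_maps = self.flow_module(obs_pair)
        return self.policy(flow_maps)           # action logits
```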

Mixture of Step Returns in Bootstrapped DQN

no code implementations • 16 Jul 2020 • Po-Han Chiang, Hsuan-Kung Yang, Zhang-Wei Hong, Chun-Yi Lee

Integrating multiple step returns into a single target, however, sacrifices the diversity of advantages offered by the different step-return targets.
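As a toy worked example of the diversity being discussed, the NumPy sketch below computes n-step return targets for two horizons from the same trajectory and then collapses them into one mixed target; the uniform mixture and the toy values are assumptions, and terminal-state handling is omitted.

```python
import numpy as np

def n_step_targets(rewards, values, n, gamma=0.99):
    """G_t^(n) = sum_{k=0}^{h-1} gamma^k * r_{t+k} + gamma^h * V(s_{t+h}),
    with the horizon h clipped at the end of the trajectory."""
    T = len(rewards)
    targets = np.empty(T)
    for t in range(T):
        h = min(n, T - t)                              # clip horizon at episode end
        g = sum(gamma**k * rewards[t + k] for k in range(h))
        targets[t] = g + gamma**h * values[t + h]      # bootstrap from value estimate
    return targets

rewards = np.array([0.0, 0.0, 1.0, 0.0])
values = np.array([0.5, 0.6, 0.9, 0.2, 0.1])           # toy V(s_0) .. V(s_4)

per_n = {n: n_step_targets(rewards, values, n) for n in (1, 3)}
mixed = np.mean(list(per_n.values()), axis=0)          # one mixed target per time step
print(per_n)
print(mixed)   # averaging hides the spread between the 1-step and 3-step targets
```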

Exploration via Flow-Based Intrinsic Rewards

1 code implementation • 24 May 2019 • Hsuan-Kung Yang, Po-Han Chiang, Min-Fong Hong, Chun-Yi Lee

Exploration bonuses derived from the novelty of observations in an environment have become a popular approach to motivate exploration for reinforcement learning (RL) agents in the past few years.

Atari Games · Optical Flow Estimation +1
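As a hypothetical sketch of how an optical-flow prediction error could serve as such an exploration bonus, the PyTorch snippet below scores a transition by how poorly a learned predictor reproduces the flow between consecutive observations; the predictor architecture, the source of the target flow (e.g. a pretrained flow estimator), and the reward scale are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    """Toy predictor: maps two stacked grayscale frames to a 2-channel flow map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, obs_pair):
        return self.net(obs_pair)

def intrinsic_reward(predictor, obs_t, obs_tp1, target_flow, scale=0.1):
    """Novelty bonus: a large flow-prediction error on a transition yields a
    large bonus, which is added to the extrinsic reward during training."""
    pred = predictor(torch.cat([obs_t, obs_tp1], dim=1))
    err = ((pred - target_flow) ** 2).mean(dim=(1, 2, 3))   # per-sample MSE
    return scale * err
```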

Never Forget: Balancing Exploration and Exploitation via Learning Optical Flow

no code implementations • 24 Jan 2019 • Hsuan-Kung Yang, Po-Han Chiang, Kuan-Wei Ho, Min-Fong Hong, Chun-Yi Lee

We propose to employ optical flow estimation errors to examine the novelty of new observations, such that agents are able to memorize and understand the visited states in a more comprehensive fashion.

Optical Flow Estimation

Visual Relationship Prediction via Label Clustering and Incorporation of Depth Information

no code implementations • 9 Sep 2018 • Hsuan-Kung Yang, An-Chieh Cheng, Kuan-Wei Ho, Tsu-Jui Fu, Chun-Yi Lee

The additional depth prediction path supplements the relationship prediction model with information that bounding boxes or segmentation masks are unable to deliver.

Depth Estimation · Depth Prediction +3
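One plausible reading of "supplements the relationship prediction model" is feature-level fusion; the PyTorch sketch below concatenates appearance features with features from a depth-prediction path before classifying the predicate. The dimensions and the fusion scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Hypothetical fusion head: joins appearance features with features from
    an auxiliary depth-prediction path before classifying the predicate."""
    def __init__(self, app_dim=256, depth_dim=64, num_predicates=50):
        super().__init__()
        self.depth_proj = nn.Linear(depth_dim, 64)
        self.classifier = nn.Sequential(
            nn.Linear(app_dim + 64, 128), nn.ReLU(),
            nn.Linear(128, num_predicates),
        )

    def forward(self, appearance_feat, depth_feat):
        fused = torch.cat([appearance_feat, self.depth_proj(depth_feat)], dim=-1)
        return self.classifier(fused)   # logits over relationship predicates
```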

Dynamic Video Segmentation Network

no code implementations • CVPR 2018 • Yu-Syuan Xu, Tsu-Jui Fu, Hsuan-Kung Yang, Chun-Yi Lee

We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score.

Video Segmentation · Video Semantic Segmentation
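A minimal sketch of the region-routing idea, assuming the decision network outputs one expected-confidence logit per region: regions whose predicted confidence clears a threshold take the cheap path, the rest go to the full segmentation network. The threshold value and the interfaces are assumptions.

```python
import torch

def route_regions(decision_net, regions, threshold=0.8):
    """Split frame regions between a cheap path (e.g. reusing flow-warped
    predictions) and an expensive per-frame segmentation network, based on
    the decision network's expected confidence score for each region."""
    scores = torch.sigmoid(decision_net(regions)).squeeze(-1)  # (N,) in [0, 1]
    fast_mask = scores >= threshold   # confident enough: reuse cheap prediction
    return regions[fast_mask], regions[~fast_mask]
```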

Virtual-to-Real: Learning to Control in Visual Semantic Segmentation

no code implementations • 1 Feb 2018 • Zhang-Wei Hong, Chen Yu-Ming, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Hsuan-Kung Yang, Brian Hsi-Lin Ho, Chih-Chieh Tu, Yueh-Chuan Chang, Tsu-Ching Hsiao, Hsin-Wei Hsiao, Sih-Pin Lai, Chun-Yi Lee

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots; recent work in robot learning therefore advocates the use of simulators as the training platform.

Image Segmentation · Semantic Segmentation
