Search Results for author: Yen-ju Chen

Found 4 papers, 0 papers with code

Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning

No code implementations · 18 Oct 2023 · Yen-ju Chen, Nai-Chieh Huang, Ping-Chun Hsieh

In response to this gap, we adapt the celebrated Nesterov's accelerated gradient (NAG) method to policy optimization in RL, termed Accelerated Policy Gradient (APG).

Tasks: Policy Gradient Methods · Reinforcement Learning +1
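The abstract describes applying Nesterov momentum to policy-gradient updates. A minimal sketch of that idea, on a hypothetical 3-armed bandit with a tabular softmax policy (an illustrative toy problem, not the paper's actual implementation; the step size and reward values are assumptions):

```python
import numpy as np

# Hypothetical toy setup: softmax policy over a 3-armed bandit with
# known expected rewards. This is only a sketch of Nesterov-momentum
# policy gradient, not the authors' APG code.
rewards = np.array([1.0, 0.5, 0.0])  # assumed expected reward per action

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def objective(theta):
    # Expected reward J(theta) under the softmax policy
    return softmax(theta) @ rewards

def grad(theta):
    # Gradient of J for the softmax parameterization:
    # dJ/dtheta_i = pi_i * (r_i - J)
    pi = softmax(theta)
    return pi * (rewards - pi @ rewards)

theta = np.zeros(3)
prev = theta.copy()
eta = 0.5  # assumed step size
for t in range(1, 501):
    # Nesterov look-ahead point, then a gradient ASCENT step from it
    omega = theta + (t - 1) / (t + 2) * (theta - prev)
    prev = theta
    theta = omega + eta * grad(omega)

print(objective(theta))  # approaches the optimal reward of 1.0
```

The look-ahead point `omega` is what distinguishes NAG from plain momentum: the gradient is evaluated at the extrapolated parameters rather than the current ones.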

Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network

No code implementations · 16 May 2023 · Zitang Sun, Yen-ju Chen, Yung-hao Yang, Shin'ya Nishida

This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system, while providing the ability to derive informative motion flow for a wide range of stimuli, including complex natural scenes.

Tasks: Optical Flow Estimation

Coordinate Ascent for Off-Policy RL with Global Convergence Guarantees

No code implementations · 10 Dec 2022 · Hsin-En Su, Yen-ju Chen, Ping-Chun Hsieh, Xi Liu

In this paper, we rethink off-policy learning via Coordinate Ascent Policy Optimization (CAPO), an off-policy actor-critic algorithm that decouples policy improvement from the state distribution of the behavior policy without using the policy gradient.

Tasks: Counterfactual
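The abstract says CAPO performs policy improvement by coordinate ascent rather than a full policy-gradient step. A generic sketch of coordinate ascent on a tabular softmax policy (again a hypothetical toy bandit; the greedy Gauss-Southwell coordinate rule and step size are assumptions, not the paper's CAPO update):

```python
import numpy as np

# Generic coordinate-ascent sketch on a softmax bandit policy.
# This is NOT the CAPO algorithm from the paper; it only illustrates
# updating one policy parameter at a time.
rewards = np.array([0.2, 1.0, 0.4])  # assumed expected reward per action

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def objective(theta):
    return softmax(theta) @ rewards  # expected reward J(theta)

def grad(theta):
    pi = softmax(theta)
    return pi * (rewards - pi @ rewards)

theta = np.zeros(3)
eta = 0.5  # assumed step size
for _ in range(500):
    g = grad(theta)
    i = int(np.argmax(np.abs(g)))  # pick one coordinate (Gauss-Southwell)
    theta[i] += eta * g[i]         # ascend along that coordinate only

print(objective(theta))  # approaches the optimal reward of 1.0
```

Each iteration touches a single parameter, so the update never requires the full gradient vector to be applied at once; the objective still increases monotonically on this toy problem.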
