Search Results for author: Chengyang Ying

Found 4 papers, 0 papers with code

Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning

no code implementations • 30 Jun 2021 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang

In this paper, we provide a framework for better understanding existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space.
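The function-space framing above is abstract; as a concrete point of reference, here is a minimal sketch of the simplest observation attack such a framework generalizes: an FGSM-style perturbation that pushes a policy away from its clean action. The policy architecture, dimensions, and budget eps are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical policy: 4-dim observations, 2 discrete actions.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))

def fgsm_observation_attack(obs: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """Return an observation perturbed within an L-infinity ball of radius eps."""
    obs = obs.clone().requires_grad_(True)
    logits = policy(obs)
    clean_action = logits.argmax(dim=-1)  # the action the clean policy prefers
    # One signed-gradient step that increases the loss of the clean action,
    # pushing the policy's output on the perturbed observation away from it.
    loss = F.cross_entropy(logits, clean_action)
    loss.backward()
    return (obs + eps * obs.grad.sign()).detach()

obs = torch.randn(1, 4)
adv_obs = fgsm_observation_attack(obs)
```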

Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents

no code implementations • ICML Workshop AML 2021 • You Qiaoben, Xinning Zhou, Chengyang Ying, Jun Zhu

Deep reinforcement learning (DRL) policies are vulnerable to adversarial attacks on their observations, which may mislead real-world RL agents into catastrophic failures (a timing heuristic in this spirit is sketched after this entry).

Adversarial Attack · Continuous Control
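Below is a hedged sketch of the strategically-timed idea, reusing policy and fgsm_observation_attack from the previous sketch: perturb only at steps where the policy's action-preference gap is large, so the attacker spends its budget where a wrong action matters most. The gap criterion and threshold c are assumptions, not necessarily this paper's selection rule.

```python
import torch
import torch.nn.functional as F

def should_attack(obs: torch.Tensor, c: float = 0.5) -> bool:
    """Attack only at steps where the policy strongly prefers one action."""
    with torch.no_grad():
        probs = F.softmax(policy(obs), dim=-1)
    return (probs.max() - probs.min()).item() > c

# Stand-in trajectory of observations; a real agent would collect these online.
trajectory = [torch.randn(1, 4) for _ in range(10)]
attacked = [fgsm_observation_attack(o) if should_attack(o) else o
            for o in trajectory]
```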

Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk

no code implementations • ICML Workshop AML 2021 • Chengyang Ying, Xinning Zhou, Dong Yan, Jun Zhu

Though deep reinforcement learning (DRL) has achieved substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty introduced by stochastic policies and environment variability (see the CVaR sketch after this entry).

Continuous Control · Safe Reinforcement Learning
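To make the constrained quantity concrete, here is a minimal sketch of an empirical CVaR estimator over episode returns. CVaR_alpha is the mean return over the worst alpha-fraction of outcomes, so constraining it bounds tail risk rather than just the average. The synthetic returns and alpha = 0.1 are illustrative; this is not claimed to be the paper's exact algorithm.

```python
import numpy as np

def cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Mean of the worst alpha-fraction of episode returns (lower tail)."""
    var = np.quantile(returns, alpha)  # the alpha-quantile, i.e. value at risk
    return float(returns[returns <= var].mean())

rng = np.random.default_rng(0)
episode_returns = rng.normal(loc=100.0, scale=20.0, size=10_000)
# A safe-RL constraint would require, e.g., cvar(episode_returns) >= threshold.
print(cvar(episode_returns, alpha=0.1))
```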

Analysis of Alignment Phenomenon in Simple Teacher-student Networks with Finite Width

no code implementations • 1 Jan 2021 • Hanlin Zhu, Chengyang Ying, Song Zuo

Recent theoretical analysis suggests that ultra-wide neural networks always converge to global minima near the initialization under first-order methods.
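As an illustration of alignment in a finite-width teacher-student setting, here is a hedged sketch: train a one-hidden-layer student on a narrow teacher's outputs and check how closely student neurons align with teacher neurons. The layer sizes, learning rate, and cosine-similarity alignment measure are assumptions for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, m_teacher, m_student = 10, 4, 64
teacher = nn.Sequential(nn.Linear(d, m_teacher, bias=False), nn.ReLU(),
                        nn.Linear(m_teacher, 1, bias=False))
student = nn.Sequential(nn.Linear(d, m_student, bias=False), nn.ReLU(),
                        nn.Linear(m_student, 1, bias=False))

x = torch.randn(1024, d)
with torch.no_grad():
    y = teacher(x)  # student is trained on the teacher's outputs

opt = torch.optim.SGD(student.parameters(), lr=0.1)
for step in range(1000):
    opt.zero_grad()
    loss = ((student(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Alignment measure: for each teacher neuron, the best cosine similarity
# achieved by any student neuron's incoming weight vector.
w_t = F.normalize(teacher[0].weight, dim=1)  # (m_teacher, d)
w_s = F.normalize(student[0].weight, dim=1)  # (m_student, d)
print((w_t @ w_s.T).max(dim=1).values)
```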
