Search Results for author: Chengyang Ying

Found 16 papers, 9 papers with code

Analysis of Alignment Phenomenon in Simple Teacher-student Networks with Finite Width

no code implementations • 1 Jan 2021 • Hanlin Zhu, Chengyang Ying, Song Zuo

Recent theoretical analysis suggests that ultra-wide neural networks always converge to global minima near the initialization under first-order methods.

Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk

no code implementations • ICML Workshop AML 2021 • Chengyang Ying, Xinning Zhou, Dong Yan, Jun Zhu

Though deep reinforcement learning (DRL) has obtained substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty caused by stochastic policies and environment variability.

Tasks: Continuous Control, Reinforcement Learning (+2 more)

Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents

no code implementations • ICML Workshop AML 2021 • You Qiaoben, Xinning Zhou, Chengyang Ying, Jun Zhu

Deep reinforcement learning (DRL) policies are vulnerable to adversarial attacks on their observations, which may mislead real-world RL agents into catastrophic failures.

Tasks: Adversarial Attack, Continuous Control (+2 more)

Understanding Adversarial Attacks on Observations in Deep Reinforcement Learning

no code implementations • 30 Jun 2021 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang

In this paper, we provide a framework to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space.

Tasks: Reinforcement Learning (RL)

Towards Safe Reinforcement Learning via Constraining Conditional Value-at-Risk

1 code implementation • 9 Jun 2022 • Chengyang Ying, Xinning Zhou, Hang Su, Dong Yan, Ning Chen, Jun Zhu

Though deep reinforcement learning (DRL) has obtained substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty of both transition and observation.

Tasks: Continuous Control, Reinforcement Learning (+2 more)
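
For context, the constraint this paper works with can be estimated from rollouts: the conditional value at risk (CVaR) at level alpha is the expected cost over the worst alpha-fraction of episodes. The sketch below is a minimal, generic sample-based CVaR estimator, not the paper's specific algorithm; the risk level, cost samples, and Lagrangian comment are illustrative.

```python
import numpy as np

def cvar(costs, alpha=0.1):
    """Sample-based CVaR_alpha: mean of the worst alpha-fraction of episode costs.

    costs : 1-D array of per-episode cumulative costs (higher = worse).
    alpha : tail probability, e.g. 0.1 keeps the worst 10% of episodes.
    """
    costs = np.sort(np.asarray(costs))             # ascending: worst costs are at the end
    k = max(1, int(np.ceil(alpha * len(costs))))   # size of the alpha-tail
    return costs[-k:].mean()

# A CVaR-constrained objective could then penalize policies whose tail risk exceeds
# a budget d, e.g. via a Lagrangian term lam * max(0, cvar(costs) - d).
costs = np.random.randn(1000) * 5 + 20             # stand-in for rollout costs
print(cvar(costs, alpha=0.1))
```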

GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing

no code implementations • 9 Jun 2022 • Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song

Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
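
As background, standard randomized smoothing certifies a classifier against additive noise by taking a majority vote over noisy copies of the input. The sketch below shows only that baseline idea, not GSmooth's generalized smoothing for semantic transformations or its surrogate image-to-image network; `base_classifier` and the noise level are placeholders.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=1000, num_classes=10, rng=None):
    """Majority-vote prediction of the smoothed classifier g(x) = argmax_c P(f(x + e) = c),
    with e ~ N(0, sigma^2 I). `base_classifier` maps an input array to a class index."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)  # additive Gaussian noise
        counts[base_classifier(noisy)] += 1
    return int(counts.argmax())
```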

Consistent Attack: Universal Adversarial Perturbation on Embodied Vision Navigation

1 code implementation • 12 Jun 2022 • Chengyang Ying, You Qiaoben, Xinning Zhou, Hang Su, Wenbo Ding, Jianyong Ai

Among different adversarial noises, universal adversarial perturbations (UAP), i.e., a constant image-agnostic perturbation applied to every input frame of the agent, play a critical role in Embodied Vision Navigation since they are computationally efficient and practical to deploy during the attack.
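
For reference, a UAP is a single perturbation delta shared across all frames, typically built by accumulating signed gradients and projecting back into an L-infinity ball. The sketch below is a generic UAP loop under that assumption, not the paper's Consistent Attack; `frames` and `loss_fn` are placeholders and PyTorch is assumed.

```python
import torch

def universal_perturbation(frames, loss_fn, epsilon=8 / 255, step=1 / 255, iters=50):
    """Build one image-agnostic perturbation delta applied to every frame.

    frames  : tensor of shape (N, C, H, W), the agent's observations.
    loss_fn : maps perturbed frames to a scalar attack loss to maximize.
    """
    delta = torch.zeros_like(frames[0], requires_grad=True)
    for _ in range(iters):
        loss = loss_fn(frames + delta)        # the same delta is added to every frame
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()  # FGSM-style ascent step
            delta.clamp_(-epsilon, epsilon)    # project back into the L_inf ball
        delta.grad.zero_()
    return delta.detach()
```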

Bi-level Physics-Informed Neural Networks for PDE Constrained Optimization using Broyden's Hypergradients

no code implementations • 15 Sep 2022 • Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Jian Song, Ze Cheng

In this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints.

On the Reuse Bias in Off-Policy Reinforcement Learning

1 code implementation • 15 Sep 2022 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu

In this paper, we reveal that the instability is also related to a new notion, the Reuse Bias of importance sampling (IS): the bias in off-policy evaluation caused by reusing the replay buffer for both evaluation and optimization.

Tasks: Continuous Control, Off-policy evaluation (+1 more)
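
As background, the ordinary per-trajectory importance sampling estimator reweights returns collected under a behavior policy by the likelihood ratio of the target policy. The sketch below shows only that standard estimator, assuming both policies' log-probabilities are available as callables; it does not implement the paper's bias analysis or correction.

```python
import numpy as np

def is_estimate(trajectories, target_logp, behavior_logp):
    """Ordinary importance sampling estimate of the target policy's value.

    trajectories  : list of (states, actions, rewards) sequences from the replay buffer.
    target_logp   : target_logp(s, a) -> log pi(a|s) under the policy being evaluated.
    behavior_logp : behavior_logp(s, a) -> log mu(a|s) under the data-collecting policy.

    Reusing the same buffer both to compute this estimate and to select/optimize the
    policy couples the weights with the returns, which is the source of reuse bias.
    """
    values = []
    for states, actions, rewards in trajectories:
        log_ratio = sum(target_logp(s, a) - behavior_logp(s, a)
                        for s, a in zip(states, actions))
        values.append(np.exp(log_ratio) * np.sum(rewards))
    return float(np.mean(values))
```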

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling

1 code implementation • 29 Sep 2022 • Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu

To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model.

Tasks: Computational Efficiency, D4RL (+4 more)
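
A minimal sketch of the decoupled policy described above: sample candidate actions from a learned behavior model and pick the one ranked highest by an action-evaluation model. This is a generic sample-then-select scheme, not the paper's exact training or sampling procedure; `behavior_model` and `q_model` are placeholders and PyTorch is assumed.

```python
import torch

def act(state, behavior_model, q_model, num_candidates=32):
    """Decoupled policy: the generative behavior model proposes, the evaluation model disposes.

    behavior_model(state, n) -> (n, action_dim) candidate actions sampled from a generative
    model trained on the offline dataset (keeps proposals in-distribution).
    q_model(state, action)   -> scalar value estimate for one action.
    """
    candidates = behavior_model(state, num_candidates)
    scores = torch.stack([q_model(state, a) for a in candidates])
    return candidates[scores.argmax()]   # execute the best-scoring in-distribution action
```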

A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs

1 code implementation • 6 Oct 2022 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng

We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered.
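
For context, the classic way to impose a Dirichlet boundary condition as a hard constraint is the ansatz u(x) = g(x) + l(x) N(x), where g matches the prescribed boundary values and l vanishes on the boundary; the unified framework above extends this kind of construction to Neumann and Robin BCs on complex geometries. The sketch below shows only the classic Dirichlet construction on a 1-D interval, with illustrative functions, not the paper's framework.

```python
import torch

# Small network whose output is modulated so the BC holds exactly.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def u_hat(x, g=lambda x: torch.zeros_like(x)):
    """Hard-constrained ansatz on [0, 1] with u(0) = g(0), u(1) = g(1).

    l(x) = x * (1 - x) vanishes exactly on the boundary, so the network output
    cannot violate the Dirichlet condition no matter what its weights are.
    """
    l = x * (1 - x)
    return g(x) + l * net(x)

x = torch.rand(16, 1)
print(u_hat(x).shape)  # the PDE residual loss is then minimized w.r.t. net's weights only
```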

Physics-Informed Machine Learning: A Survey on Problems, Methods and Applications

1 code implementation • 15 Nov 2022 • Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, Jun Zhu

Recent work shows that incorporating physical priors together with collected data can benefit machine learning models, which has made the intersection of machine learning and physics a prevailing paradigm.

Tasks: Physics-informed machine learning

GNOT: A General Neural Operator Transformer for Operator Learning

2 code implementations • 28 Feb 2023 • Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, Jun Zhu

However, learning operators in practical applications poses several challenges, such as irregular meshes, multiple input functions, and the complexity of PDE solutions.

Tasks: Operator learning

Task Aware Dreamer for Task Generalization in Reinforcement Learning

no code implementations • 9 Mar 2023 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Songming Liu, Dong Yan, Jun Zhu

Extensive experiments in both image-based and state-based tasks show that TAD can significantly improve performance when handling different tasks simultaneously, especially those with high TDR, and displays strong generalization to unseen tasks.

Tasks: Reinforcement Learning (RL)

NUNO: A General Framework for Learning Parametric PDEs with Non-Uniform Data

1 code implementation • 30 May 2023 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Ze Cheng, Jun Zhu

The neural operator has emerged as a powerful tool in learning mappings between function spaces in PDEs.

Tasks: Operator learning

DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training

1 code implementation • 6 Mar 2024 • Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu

Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings.

Tasks: Denoising
