Search Results for author: Yao Mu

Found 10 papers, 0 papers with code

Flow-based Recurrent Belief State Learning for POMDPs

no code implementations • 23 May 2022 • Xiaoyu Chen, Yao Mu, Ping Luo, Shengbo Li, Jianyu Chen

Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance.

Decision Making • Variational Inference

Scale-Equivalent Distillation for Semi-Supervised Object Detection

no code implementations • 23 Mar 2022 • Qiushan Guo, Yao Mu, Jianyu Chen, Tianqi Wang, Yizhou Yu, Ping Luo

Further, we overcome these challenges by introducing a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.

Knowledge Distillation • Object Detection • +1

Separated Proportional-Integral Lagrangian for Chance Constrained Reinforcement Learning

no code implementations • 17 Feb 2021 • Baiyu Peng, Yao Mu, Jingliang Duan, Yang Guan, Shengbo Eben Li, Jianyu Chen

Taking a control perspective, we first interpret the penalty method and the Lagrangian method as proportional feedback and integral feedback control, respectively.

Autonomous Driving • reinforcement-learning
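The control-theoretic interpretation above can be sketched in code. This is a minimal illustration, not the paper's algorithm: all names and gains are hypothetical, and the constraint-violation signal is treated as the error fed to proportional (penalty-like) and integral (Lagrangian-like) terms.

```python
def pi_multiplier(violations, kp=1.0, ki=0.1):
    """Update a constraint penalty weight with PI feedback.

    violations: sequence of constraint-violation errors e_t
                (e.g. chance-constraint estimate minus threshold).
    Returns the penalty weight after each step.

    Proportional term ~ penalty method: weight tracks current violation.
    Integral term ~ Lagrangian method: weight accumulates past violations.
    """
    integral = 0.0
    weights = []
    for e in violations:
        integral = max(0.0, integral + ki * e)  # integral part, projected to stay nonnegative
        weight = max(0.0, kp * e) + integral    # proportional part + accumulated integral
        weights.append(weight)
    return weights
```

With persistent violation the integral term keeps growing, which is why a pure-penalty (proportional-only) scheme can under-penalize while the Lagrangian (integral) part eventually enforces the constraint.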

Steadily Learn to Drive with Virtual Memory

no code implementations • 16 Feb 2021 • Yuhang Zhang, Yao Mu, Yujie Yang, Yang Guan, Shengbo Eben Li, Qi Sun, Jianyu Chen

Reinforcement learning has shown great potential in developing high-level autonomous driving.

Autonomous Driving

Robust Memory Augmentation by Constrained Latent Imagination

no code implementations • 1 Jan 2021 • Yao Mu, Yuzheng Zhuang, Bin Wang, Wulong Liu, Shengbo Eben Li, Jianye Hao

The latent dynamics model summarizes an agent’s high dimensional experiences in a compact way.

Model-Based Actor-Critic with Chance Constraint for Stochastic System

no code implementations • 19 Dec 2020 • Baiyu Peng, Yao Mu, Yang Guan, Shengbo Eben Li, Yuming Yin, Jianyu Chen

Safety is essential for reinforcement learning (RL) applied in real-world situations.

Mixed Reinforcement Learning with Additive Stochastic Uncertainty

no code implementations • 28 Feb 2020 • Yao Mu, Shengbo Eben Li, Chang Liu, Qi Sun, Bingbing Nie, Bo Cheng, Baiyu Peng

This paper presents a mixed reinforcement learning (mixed RL) algorithm that simultaneously uses dual representations of the environmental dynamics to search for the optimal policy, improving both learning accuracy and training speed.
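One common way to realize "dual representations of environmental dynamics" is to pair a nominal analytic model with a data-driven correction. The sketch below is purely illustrative and assumed, not taken from the paper: `nominal_model` stands in for known physics and `residual_fn` for a learned component.

```python
def nominal_model(x, u):
    """Stand-in analytic dynamics model (assumed linear for illustration)."""
    return x + 0.1 * u

def mixed_prediction(x, u, residual_fn):
    """Next-state prediction combining the analytic model with a
    data-driven residual correction learned from experience."""
    return nominal_model(x, u) + residual_fn(x, u)
```

A policy search can then roll out `mixed_prediction` instead of either representation alone, so model bias from the analytic part and sample noise in the learned part can partially offset each other.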

