Search Results for author: Baiyu Peng

Found 6 papers, 0 papers with code

Positive-Unlabeled Constraint Learning (PUCL) for Inferring Nonlinear Continuous Constraints Functions from Expert Demonstrations

no code implementations • 3 Aug 2024 • Baiyu Peng, Aude Billard

Within our framework, we treat all data in demonstrations as positive (feasible) data, and learn a control policy to generate potentially infeasible trajectories, which serve as unlabeled data.
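The positive-unlabeled setup described in this abstract can be sketched as follows. States from expert demonstrations are labeled positive (feasible), while states from learned-policy rollouts are left unlabeled, since they may or may not violate the constraints; a binary classifier fit on this data then approximates a feasibility function. All names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_pu_dataset(demo_states, rollout_states):
    """Build a positive-unlabeled dataset for constraint inference.

    demo_states:    array (n_demo, d) of states from expert demonstrations,
                    all treated as positive (feasible).
    rollout_states: array (n_roll, d) of states from policy rollouts,
                    treated as unlabeled (potentially infeasible).
    Returns features X and PU labels y (1 = positive, 0 = unlabeled).
    """
    X = np.vstack([demo_states, rollout_states])
    y = np.concatenate([np.ones(len(demo_states)),
                        np.zeros(len(rollout_states))])
    return X, y
```

A classifier trained on `(X, y)` with a PU-learning loss would then score how likely a state is to be feasible, which is the constraint function being inferred.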

Learning General Continuous Constraint from Demonstrations via Positive-Unlabeled Learning

no code implementations • 23 Jul 2024 • Baiyu Peng, Aude Billard

Planning for a wide range of real-world tasks necessitates knowing and writing down all constraints.

Separated Proportional-Integral Lagrangian for Chance Constrained Reinforcement Learning

no code implementations • 17 Feb 2021 • Baiyu Peng, Yao Mu, Jingliang Duan, Yang Guan, Shengbo Eben Li, Jianyu Chen

Taking a control perspective, we first interpret the penalty method and the Lagrangian method as proportional feedback and integral feedback control, respectively.
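The control-theoretic interpretation above can be sketched in a few lines: the penalty method acts on the instantaneous constraint violation (proportional feedback), while the Lagrangian method acts on its accumulated value (integral feedback). The gains and update rule below are illustrative assumptions, not the paper's exact algorithm.

```python
def pi_multiplier_update(violation, integral, kp=1.0, ki=0.1):
    """One proportional-integral update of the constraint penalty weight.

    violation: current constraint violation (positive when violated).
    integral:  running sum of past violations.
    The proportional term (kp * violation) corresponds to the penalty
    method; the integral term (ki * integral) corresponds to the
    Lagrangian multiplier update. Both are clipped at zero so the
    resulting penalty weight stays non-negative.
    """
    integral = max(0.0, integral + violation)
    multiplier = max(0.0, kp * violation) + ki * integral
    return multiplier, integral
```

Separating the two terms lets the proportional part react quickly to new violations while the integral part drives the steady-state violation toward zero.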

Autonomous Driving · Reinforcement Learning · +2

Mixed Reinforcement Learning with Additive Stochastic Uncertainty

no code implementations • 28 Feb 2020 • Yao Mu, Shengbo Eben Li, Chang Liu, Qi Sun, Bingbing Nie, Bo Cheng, Baiyu Peng

This paper presents a mixed reinforcement learning (mixed RL) algorithm that simultaneously uses dual representations of the environmental dynamics to search for the optimal policy, improving both learning accuracy and training speed.

Reinforcement Learning · +1
