no code implementations • 5 Dec 2023 • Wenqian Xue, Yi Jiang, Frank L. Lewis, Bosen Lian
This paper formulates a stochastic optimal control problem for linear networked control systems subject to stochastic packet disordering and certifies a unique stabilizing solution.
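For context, here is a minimal sketch (not the paper's algorithm) of how a unique stabilizing Riccati solution certifies an optimal feedback law in the nominal, disorder-free linear-quadratic case; the system matrices are illustrative placeholders, and SciPy's standard solver stands in for the paper's networked-control derivation.

```python
# Minimal sketch: the unique stabilizing solution of the discrete algebraic
# Riccati equation yields the optimal feedback gain for a nominal linear
# system x_{k+1} = A x_k + B u_k. The paper extends this kind of certificate
# to networked systems with stochastic packet disordering; the matrices here
# are placeholder examples, not taken from the paper.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # example plant dynamics
B = np.array([[0.0], [0.1]])             # example input matrix
Q = np.eye(2)                            # state cost weight
R = np.array([[1.0]])                    # input cost weight

P = solve_discrete_are(A, B, Q, R)       # unique stabilizing ARE solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain, u = -K x

# A closed-loop spectral radius below 1 certifies stability of u = -K x.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```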
no code implementations • 7 Nov 2023 • Honghui Wu, Ahmet Taha Koru, Guanxuan Wu, Frank L. Lewis, Hai Lin
The structural balance of a signed graph is known to be necessary and sufficient for achieving bipartite consensus among agents with friend-foe relationships.
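As an illustration of the structural-balance condition (my own sketch, not code from the paper), the following checks balance by two-coloring the signed graph: positive edges must connect same-sign agents and negative edges opposite-sign agents, which is exactly the partition that permits bipartite consensus after a sign (gauge) transformation of the agents' states.

```python
# Sketch: a signed graph is structurally balanced iff its nodes admit a
# two-coloring where +1 edges join same-colored nodes and -1 edges join
# oppositely colored nodes. BFS propagates the expected sign and reports
# any conflict.
from collections import deque

def is_structurally_balanced(n, signed_edges):
    """signed_edges: list of (i, j, s) with s = +1 (friend) or -1 (foe)."""
    adj = [[] for _ in range(n)]
    for i, j, s in signed_edges:
        adj[i].append((j, s))
        adj[j].append((i, s))
    color = [0] * n                      # 0 = unvisited, else +1 / -1
    for start in range(n):
        if color[start]:
            continue
        color[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                expected = color[u] * s  # same sign across +, flipped across -
                if color[v] == 0:
                    color[v] = expected
                    queue.append(v)
                elif color[v] != expected:
                    return False         # sign conflict -> unbalanced
    return True

# Example: a 4-agent cycle with two antagonistic links is balanced.
print(is_structurally_balanced(4, [(0, 1, 1), (1, 2, -1), (2, 3, 1), (3, 0, -1)]))
```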
no code implementations • 5 Jan 2023 • Wenqian Xue, Bosen Lian, Jialu Fan, Tianyou Chai, Frank L. Lewis
In this paper, we formulate inverse reinforcement learning (IRL) as an expert-learner interaction whereby the optimal performance intent of an expert or target agent is unknown to a learner agent.
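A toy sketch of the expert-learner interaction in an inverse-LQR setting (illustrative only; the finite-difference update, system matrices, and diagonal weight parameterization are my assumptions, not the paper's IRL scheme): the learner never observes the expert's state weight, only the expert's optimal gain, and tunes its own weights until its behavior reproduces the expert's.

```python
# Toy expert-learner IRL sketch (not the paper's algorithm): the expert's
# intent Q* is hidden; the learner only sees the expert's optimal LQR gain
# and descends the gain mismatch over its own diagonal weight estimate.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [0.2]])
R = np.array([[1.0]])

def lqr_gain(q_diag):
    P = solve_discrete_are(A, B, np.diag(q_diag), R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

K_expert = lqr_gain(np.array([4.0, 1.0]))   # expert's intent, hidden from learner

q_hat, lr, eps = np.array([1.0, 1.0]), 0.5, 1e-4
for _ in range(200):
    base = np.linalg.norm(lqr_gain(q_hat) - K_expert) ** 2
    grad = np.zeros_like(q_hat)
    for i in range(len(q_hat)):          # finite-difference gradient estimate
        q_pert = q_hat.copy()
        q_pert[i] += eps
        grad[i] = (np.linalg.norm(lqr_gain(q_pert) - K_expert) ** 2 - base) / eps
    q_hat = np.maximum(q_hat - lr * grad, 1e-3)   # keep weights positive

# The recovered weights reproduce the expert's behavior (IRL solutions
# are generally non-unique, so only gain equivalence is expected).
print("recovered weights:", q_hat)
```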
no code implementations • 23 Oct 2022 • Hefu Ye, Yongduan Song, Frank L. Lewis
Prescribed-time (PT) control, which originated with Song et al., has gained increasing attention in the control community.
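A minimal simulation of the prescribed-time idea (illustrative; the scalar integrator, gain exponent k, and deadline T are placeholder choices, not the paper's design): a time-varying gain that grows as t approaches the prescribed time T drives the state to zero by t = T regardless of the initial condition.

```python
# Sketch of prescribed-time regulation for a scalar integrator x_dot = u:
# the controller u = -(k / (T - t)) x yields x(t) = x(0) * ((T - t)/T)^k,
# so the state reaches zero at the user-chosen time T.
import numpy as np

T, k, dt = 2.0, 3.0, 1e-4          # prescribed time, gain exponent, Euler step
x, t = 5.0, 0.0                    # arbitrary initial state
while t < T - 1e-3:                # stop just short of the singular gain
    u = -(k / (T - t)) * x         # gain blows up as t -> T
    x += dt * u                    # integrator dynamics
    t += dt
print(f"state at t = {t:.3f}: {x:.2e}")   # effectively zero before T
```

The deadline T is a free design parameter here, in contrast to finite-time designs whose settling time depends on the initial condition.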
no code implementations • 29 Dec 2021 • Shimin Wang, Xiangyu Meng, Hongwei Zhang, Frank L. Lewis
This paper proposes a learning-based fully distributed observer for a class of nonlinear leader systems, which can simultaneously learn the leader's dynamics and states.
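A stripped-down sketch of the distributed-observer structure (my illustration; the paper's observer additionally learns a nonlinear leader's dynamics online, whereas here the leader is a known linear oscillator, and the graph, coupling gain, and pinning pattern are arbitrary choices): only one follower measures the leader directly, and the others converge through neighbor coupling.

```python
# Sketch of a distributed observer over a chain graph: each follower runs a
# copy of the leader dynamics plus a consensus-style correction. Only agent 0
# is pinned to the leader; agents 1 and 2 rely on their neighbors.
import numpy as np

S = np.array([[0.0, 1.0], [-1.0, 0.0]])       # leader dynamics (known here)
adj = np.array([[0, 0, 0],                    # follower-to-follower edges
                [1, 0, 0],                    # agent 1 listens to agent 0
                [0, 1, 0]])                   # agent 2 listens to agent 1
pin = np.array([1, 0, 0])                     # only agent 0 sees the leader
mu, dt = 5.0, 1e-3                            # coupling gain, Euler step

x0 = np.array([1.0, 0.0])                     # leader state
eta = np.random.randn(3, 2)                   # followers' estimates
for _ in range(20000):
    x0 = x0 + dt * (S @ x0)
    new = eta.copy()
    for i in range(3):
        coupling = sum(adj[i, j] * (eta[j] - eta[i]) for j in range(3))
        coupling = coupling + pin[i] * (x0 - eta[i])
        new[i] = eta[i] + dt * (S @ eta[i] + mu * coupling)
    eta = new
print("estimation errors:", np.linalg.norm(eta - x0, axis=1))  # all near zero
```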
no code implementations • 22 Jan 2020 • Patrik Kolaric, Devesh K. Jha, Arvind U. Raghunathan, Frank L. Lewis, Mouhacine Benosman, Diego Romeres, Daniel Nikovski
Motivated by these problems, we formulate trajectory optimization and local policy synthesis as a single optimization problem (a toy sketch of this coupling follows below).
Model-based Reinforcement Learning
reinforcement-learning
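The coupling of trajectory optimization and local policy synthesis can be illustrated with a toy joint objective (my own construction, not the paper's formulation): a single optimizer searches over the open-loop control sequence and a local feedback gain at once, scoring a nominal rollout together with a perturbed rollout stabilized by that gain.

```python
# Toy sketch: jointly optimize open-loop controls u_0..u_{N-1} and a local
# feedback gain K for a double-integrator-like system. The objective sums a
# nominal rollout cost and a perturbed rollout cost under u_k + K(x_nom - x),
# so the trajectory and its local stabilizing policy are found together.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
N, x_init, x_pert = 15, np.array([2.0, 0.0]), np.array([2.5, 0.3])

def rollout_cost(u_seq, K):
    x_nom, x, cost = x_init.copy(), x_pert.copy(), 0.0
    for u in u_seq:
        v = u + float(K @ (x_nom - x))          # local policy around nominal
        cost += x_nom @ x_nom + u**2 + x @ x + v**2
        x_nom = A @ x_nom + B[:, 0] * u
        x = A @ x + B[:, 0] * v
    return cost + 10.0 * (x_nom @ x_nom + x @ x)  # terminal penalty

def objective(theta):
    return rollout_cost(theta[:N], theta[N:])     # controls, then gain

res = minimize(objective, np.zeros(N + 2), method="L-BFGS-B")
print("optimal cost:", res.fun)
print("local feedback gain K:", res.x[N:])
```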