no code implementations • 10 May 2022 • Benjamin Gravell, Iman Shames, Tyler Summers
We propose a robust data-driven output feedback control algorithm that explicitly incorporates the inherent finite-sample uncertainty of model estimates into the control design.
no code implementations • 31 Mar 2022 • Benjamin Gravell, Matilde Gargiani, John Lygeros, Tyler H. Summers
We propose a policy iteration algorithm for solving the multiplicative noise linear quadratic output feedback design problem.
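The paper treats the harder multiplicative-noise output feedback setting; as background, classical policy iteration for deterministic state-feedback LQR alternates a Lyapunov-equation policy evaluation with a one-step policy improvement. A minimal sketch, with illustrative matrices that are not from the paper:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Classical policy iteration for deterministic discrete-time LQR.

    Policy: u = -K x. K0 must stabilize A - B K0.
    """
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        # Policy evaluation: solve P = Acl' P Acl + Q + K' R K (Lyapunov equation)
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement: K = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Illustrative open-loop-stable system, so K0 = 0 is stabilizing
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
K, P = policy_iteration_lqr(A, B, Q, R, K0=np.zeros((1, 2)))
```

Policy iteration is a Newton method on the Riccati equation, so it converges quadratically to the optimal gain from any stabilizing initial policy.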
1 code implementation • 5 Jan 2022 • Venkatraman Renganathan, Sleiman Safaoui, Aadi Kothari, Benjamin Gravell, Iman Shames, Tyler Summers
Robust autonomy stacks require tight integration of perception, motion planning, and control layers, but these layers often inadequately incorporate inherent perception and prediction uncertainties, either ignoring them altogether or making questionable assumptions of Gaussianity.
no code implementations • 30 Jun 2021 • Yu Xing, Benjamin Gravell, Xingkang He, Karl Henrik Johansson, Tyler Summers
We propose an algorithm based on least squares and multiple-trajectory data for joint estimation of the nominal system matrices and the covariance matrix of the multiplicative noise.
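The covariance estimation step is specific to the paper, but the nominal-matrix part reduces to an ordinary least-squares regression of next states on stacked state-input regressors pooled across trajectories. A minimal sketch with a hypothetical noise-free system, where the fit is exact:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth system (noise-free for clarity of the LS step)
A_true = np.array([[0.9, 0.2], [0.0, 0.7]])
B_true = np.array([[0.0], [1.0]])
n, m = 2, 1

# Pool data from several independent rollouts with exciting random inputs
Z, Xnext = [], []
for _ in range(10):           # number of trajectories
    x = rng.standard_normal(n)
    for _ in range(20):       # trajectory length
        u = rng.standard_normal(m)
        x_next = A_true @ x + B_true @ u
        Z.append(np.concatenate([x, u]))   # regressor [x_t; u_t]
        Xnext.append(x_next)
        x = x_next
Z, Xnext = np.array(Z), np.array(Xnext)

# Least squares: Theta' = [A_hat  B_hat] minimizes || Xnext - Z Theta ||_F
Theta, *_ = np.linalg.lstsq(Z, Xnext, rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
```

With multiplicative noise present, the same regression gives the nominal matrices, and the paper's additional step estimates the noise covariance from the residual statistics.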
1 code implementation • 28 Nov 2020 • Benjamin Gravell, Iman Shames, Tyler Summers
We present a midpoint policy iteration algorithm to solve linear quadratic optimal control problems in both model-based and model-free settings.
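The midpoint update itself is not reproduced here; as a point of reference for the model-based setting, plain value iteration for LQR simply iterates the Riccati map until it reaches the fixed point of the algebraic Riccati equation. A minimal baseline sketch with illustrative matrices:

```python
import numpy as np

def riccati_map(P, A, B, Q, R):
    """One step of the discrete-time Riccati recursion (value iteration)."""
    BtP = B.T @ P
    return Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + BtP @ B, BtP @ A)

# Illustrative system (not from the paper)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P = np.zeros((2, 2))
for _ in range(500):
    P = riccati_map(P, A, B, Q, R)
```

Value iteration converges only linearly, which is the kind of behavior that policy-iteration-style schemes aim to improve on.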
1 code implementation • L4DC 2020 • Benjamin Gravell, Tyler Summers
Despite decades of research and recent progress in adaptive control and reinforcement learning, there remains a fundamental lack of understanding of how to design controllers that are robust to the inherent non-asymptotic uncertainties arising from models estimated with finite, noisy data.
1 code implementation • 28 May 2019 • Benjamin Gravell, Yi Guo, Tyler Summers
We give algorithms for designing near-optimal sparse controllers using policy gradient, with applications to control of systems corrupted by multiplicative noise, a setting of increasing importance in emerging complex dynamical networks.
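One simple way to combine policy gradient with sparsity, sketched here for the deterministic LQR special case rather than the paper's multiplicative-noise formulation: take exact gradient steps on the LQR cost in the gain K, followed by an entrywise soft-threshold (the proximal map of an l1 penalty). All matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lqr_cost_and_grad(K, A, B, Q, R):
    """Exact LQR cost J = tr(P) (unit initial-state covariance) and its gradient in K."""
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)      # cost matrix
    Sigma = solve_discrete_lyapunov(Acl, np.eye(A.shape[0]))  # state correlation
    grad = 2 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return np.trace(P), grad

def soft_threshold(K, tau):
    """Proximal step for an l1 penalty: promotes entrywise sparsity in K."""
    return np.sign(K) * np.maximum(np.abs(K) - tau, 0.0)

# Illustrative open-loop-stable system, so K = 0 is a stabilizing start
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.zeros((1, 2))
step, lam = 1e-2, 1e-3
J0, _ = lqr_cost_and_grad(K, A, B, Q, R)
for _ in range(200):
    _, g = lqr_cost_and_grad(K, A, B, Q, R)
    K = soft_threshold(K - step * g, step * lam)
J_final, _ = lqr_cost_and_grad(K, A, B, Q, R)
```

The step size must be small enough to keep every iterate stabilizing; gradient-domination results for LQR guarantee convergence of such schemes to a near-optimal gain.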
1 code implementation • 28 May 2019 • Benjamin Gravell, Peyman Mohajerin Esfahani, Tyler Summers
The linear quadratic regulator (LQR) problem has reemerged as an important theoretical benchmark for reinforcement learning-based control of complex dynamical systems with continuous state and action spaces.
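Part of what makes LQR a useful benchmark is that the ground truth is computable in closed form: the optimal gain follows from the discrete algebraic Riccati equation, giving an exact baseline against which learned policies can be measured. A minimal sketch on an illustrative double-integrator-style system:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discretized double integrator (not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

# Optimal cost matrix P and gain K; optimal policy is u = -K x
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The optimal closed loop is guaranteed stable: spectral radius < 1
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

A reinforcement learning method's suboptimality gap on this problem can then be reported exactly, e.g. as the difference between its achieved cost and the optimal cost tr(P) for unit initial-state covariance.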