Regret Bounds for LQ Adaptive Control Under Database Attacks (Extended Version)

1 Apr 2020  ·  Jafar Abbaszadeh Chekan, Cedric Langbort

This paper studies and counters the effects of database attacks on a learning-based linear quadratic (LQ) adaptive controller. The attack targets neither sensors nor actuators; instead, it poisons the learning algorithm and parameter estimator that are part of the regulation scheme. We focus on the adaptive optimal control algorithm introduced by Abbasi-Yadkori and Szepesvari, providing a regret analysis in the presence of attacks as well as modifications that mitigate their effects. A core step of this algorithm is self-regularized online least-squares estimation, which determines a tight confidence set around the true system parameters with high probability. In the absence of malicious data injection, this set provides an estimate of the parameters suitable for control design; in the presence of an attack, however, the confidence set is no longer reliable. We therefore first address how to adjust the confidence set so that it compensates for the effect of the poisoned data. We then quantify the deleterious effect of this type of attack on the optimality of the control policy by bounding the regret of the closed-loop system under attack.
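As context for the estimation step described above, the following is a minimal sketch (not the authors' code) of regularized least-squares identification of LQ dynamics x_{t+1} = A x_t + B u_t + w_t, together with a confidence-ellipsoid radius in the self-normalized style of Abbasi-Yadkori and Szepesvari. The system matrices, noise level, and the constants `lam`, `delta`, `S`, and `sigma_w` are illustrative assumptions, and the exploration signal is plain Gaussian input rather than the paper's control policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable 2-state, 1-input system (illustrative, not from the paper)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [1.0]])
n, m = 2, 1
lam = 1.0        # ridge-regularization parameter
sigma_w = 0.01   # assumed process-noise standard deviation

# Collect a trajectory under exploratory (random Gaussian) inputs
T = 500
Z_rows, X_next_rows = [], []
x = np.zeros(n)
for _ in range(T):
    u = rng.normal(size=m)
    x_next = A_true @ x + B_true @ u + sigma_w * rng.normal(size=n)
    Z_rows.append(np.concatenate([x, u]))   # regressor z_t = [x_t; u_t]
    X_next_rows.append(x_next)
    x = x_next
Z = np.array(Z_rows)          # (T, n+m)
X_next = np.array(X_next_rows)  # (T, n)

# Regularized least-squares estimate of Theta = [A, B]^T
V = Z.T @ Z + lam * np.eye(n + m)              # regularized Gram matrix
Theta_hat = np.linalg.solve(V, Z.T @ X_next)   # (n+m, n)
A_hat = Theta_hat[:n].T
B_hat = Theta_hat[n:].T

# Self-normalized confidence radius: with probability at least 1 - delta,
# the true Theta lies in an ellipsoid of this radius in the V-weighted norm
# (S is an assumed bound on the Frobenius norm of Theta).
delta, S = 0.05, 2.0
beta = (sigma_w * np.sqrt(2 * np.log(np.sqrt(np.linalg.det(V) / lam**(n + m)) / delta))
        + np.sqrt(lam) * S)
```

With clean data, `A_hat` and `B_hat` recover the true matrices closely, and `beta` shrinks the confidence set as the Gram matrix `V` grows; the paper's point is that poisoned samples entering `Z` and `X_next` invalidate this radius, so it must be enlarged to compensate.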
