Synthesis of Maximally Permissive Covert Attackers Against Unknown Supervisors by Using Observations

23 Jun 2021 · Ruochen Tai, Liyong Lin, Yuting Zhu, Rong Su

In this paper, we consider the problem of synthesizing maximally permissive covert damage-reachable attackers in a setup where the model of the supervisor is unknown to the adversary, but the adversary has recorded a (prefix-closed) finite set of observations of the runs of the closed-loop system. The synthesized attacker needs to ensure both damage-reachability and covertness against all supervisors that are consistent with the given set of observations. From the adversary's point of view, there is a gap between the de facto maximal permissiveness, assuming the model of the supervisor is known, and the maximal permissiveness that can be attained with only limited knowledge of the supervisor. We consider the setup where the attacker can exercise sensor replacement/deletion attacks and actuator enablement/disablement attacks. The solution methodology proposed in this work reduces the synthesis of maximally permissive covert damage-reachable attackers, given the model of the plant and the finite set of observations, to the synthesis of maximally permissive safe supervisors for a certain transformed plant; this reduction establishes the decidability of the observation-assisted covert attacker synthesis problem. The effectiveness of our approach is illustrated on a water tank example adapted from the literature.
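The notion of a supervisor being "consistent with the given set of observations" can be sketched concretely: a candidate supervisor must enable every event that was actually observed after the corresponding observation prefix, while uncontrollable events are always enabled. The following toy Python sketch is illustrative only and is not the paper's synthesis procedure; the event names, the dictionary encoding of a supervisor, and the function names are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's algorithm): checking whether a
# candidate supervisor is consistent with a prefix-closed finite set of
# observations. Runs are tuples of event names; a supervisor is modeled
# as a map from an observation prefix to the set of events it enables.

def prefix_closure(observations):
    """Return the prefix closure of a set of runs (tuples of events)."""
    closed = set()
    for run in observations:
        for i in range(len(run) + 1):
            closed.add(run[:i])
    return closed

def is_consistent(supervisor, observations, uncontrollable):
    """A supervisor is consistent with the observations if every recorded
    run only takes events that the supervisor enables at each step.
    Uncontrollable events cannot be disabled, so they are always enabled."""
    for run in prefix_closure(observations):
        for i, event in enumerate(run):
            prefix = run[:i]
            enabled = supervisor.get(prefix, frozenset()) | uncontrollable
            if event not in enabled:
                return False
    return True

# Toy water-tank-flavored alphabet (assumed for illustration):
# 'open' / 'close' are controllable valve commands, 'overflow' is uncontrollable.
observations = {("open", "close"), ("open", "open")}
uncontrollable = {"overflow"}

consistent_sup = {
    (): {"open"},
    ("open",): {"open", "close"},
}
inconsistent_sup = {
    (): {"open"},
    ("open",): {"close"},  # never enables a second 'open', contradicting the log
}

print(is_consistent(consistent_sup, observations, uncontrollable))    # True
print(is_consistent(inconsistent_sup, observations, uncontrollable))  # False
```

An attacker synthesized against the unknown supervisor must remain covert against every supervisor passing such a consistency test, which is why the paper quantifies over the whole consistent set rather than a single model.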
