Defending via strategic ML selection

16 Jan 2019 · Eitan Farchi, Onn Shehory, Guy Barash

The results of a learning process depend on the input data, and there are cases in which an adversary can strategically tamper with that data to affect the outcome of learning. While some datasets are difficult to attack, many others are susceptible to manipulation: a resourceful attacker can tamper with large portions of a dataset, and can further focus on a preferred subset of its attributes to maximize the attack's effectiveness while minimizing the resources spent on data manipulation. In light of this vulnerability, we introduce a solution in which the defender deploys an array of learners and activates them strategically. The defender computes the (game-theoretic) strategy space and accordingly applies a dominant strategy where possible, and a Nash-stable strategy otherwise. In this paper we provide the details of this approach, analyze Nash equilibria in such a strategic learning environment, and demonstrate our solution with specific examples.
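The selection rule described above (prefer a dominant strategy; otherwise fall back to a Nash-stable one) can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the payoff matrices `D` (defender) and `A` (attacker) are hypothetical, with rows indexing the defender's choice of learner and columns indexing the attacker's choice of attack.

```python
import numpy as np

# Hypothetical payoffs (illustration only, not from the paper).
# Rows: which learner the defender activates; columns: which attack
# the adversary mounts. D[i, j] is the defender's payoff and A[i, j]
# the attacker's when learner i faces attack j.
D = np.array([[4, 1],
              [3, 3],
              [2, 2]])
A = np.array([[2, 3],
              [1, 2],
              [3, 1]])

def weakly_dominant_learner(defender):
    """Index of a learner that does at least as well against every
    attack as any other learner, or None if no such learner exists."""
    n = defender.shape[0]
    for i in range(n):
        if all(np.all(defender[i] >= defender[k]) for k in range(n)):
            return i
    return None

def pure_nash_equilibria(defender, attacker):
    """All (learner, attack) cells where each player best-responds
    to the other, i.e. pure-strategy Nash equilibria."""
    eq = []
    rows, cols = defender.shape
    for i in range(rows):
        for j in range(cols):
            if (defender[i, j] == defender[:, j].max()          # defender best response
                    and attacker[i, j] == attacker[i, :].max()):  # attacker best response
                eq.append((i, j))
    return eq

# Apply a dominant strategy where possible, a Nash-stable one otherwise.
i = weakly_dominant_learner(D)
if i is not None:
    print(f"Activate learner {i} (weakly dominant).")
else:
    print("No dominant learner; Nash-stable options:", pure_nash_equilibria(D, A))
```

With these example payoffs no learner is weakly dominant, so the sketch falls back to the unique pure equilibrium, activating learner 1 against attack 1; the paper's actual strategy computation may differ (e.g., it may consider mixed strategies).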
