Adversarial Vulnerability of Active Transfer Learning

26 Jan 2021 · Nicolas M. Müller, Konstantin Böttinger

Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps make optimal use of a limited budget for labeling new data. The latter uses large pre-trained models as feature extractors and enables the design of complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets. In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, Active Learning algorithms no longer select the optimal instances, but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset performs significantly worse, dropping from 86% to 34% test accuracy. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described before in the literature.
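The core idea, creating a feature-space collision with a small input perturbation, can be illustrated with a minimal sketch. The backbone choice, epsilon bound, optimizer, and the stand-in target feature below are all assumptions for illustration and are not the authors' implementation; in practice the extractor would be a pre-trained model and the target would be chosen so that the active learner's acquisition function ranks the poisoned sample highly.

```python
# Hypothetical sketch of a feature-collision poisoning attack (not the paper's code).
# A perturbation delta with ||delta||_inf <= eps is optimised so that a frozen
# feature extractor maps the attacker's image onto a chosen target feature vector.
# An uncertainty-based active learner operating on those features would then rank
# the poisoned sample as highly informative.
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen transfer-learning backbone; in practice this would be pre-trained
# (weights=None here only keeps the sketch self-contained and offline).
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()          # use penultimate features as the output space
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

def craft_collision(x, target_feat, eps=8 / 255, steps=200, lr=1e-2):
    """Return x + delta with ||delta||_inf <= eps whose features approach target_feat."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = backbone((x + delta).clamp(0, 1))
        loss = ((feat - target_feat) ** 2).mean()   # drive features onto the target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                       # project back into the eps-ball
            delta.clamp_(-eps, eps)
    return (x + delta).detach().clamp(0, 1)

# Toy usage: make a random "attacker" image collide with an (assumed) target feature
# vector, e.g. the features of a point the active learner deems maximally informative.
x_attack = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    target_feat = backbone(torch.rand(1, 3, 224, 224))   # stand-in target features
x_poisoned = craft_collision(x_attack, target_feat)
```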
