Federated Learning (FL) trains a machine learning model on distributed clients without exposing individual data.
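As a minimal sketch of what server-side aggregation in FL can look like, the snippet below implements FedAvg-style weighted averaging of client model weights; the function name, arguments, and the averaging scheme are illustrative assumptions, not details from the work described here.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client weight lists by data-size-weighted averaging (FedAvg-style).

    client_weights: list of per-client weight lists (one np.ndarray per layer).
    client_sizes: number of local training examples held by each client.
    Only the weights leave the clients; the raw data never does.
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]
```

For example, averaging two single-layer clients holding 1 and 3 examples weights the second client's parameters three times as heavily.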
Such attacks are a major threat to models deployed in the physical world, as they can be realized simply by presenting a crafted object in the camera's view.
Previous studies treat only the "symptoms": they try to improve accuracy or detect possible attacks by adding extra steps to conventional FL models.
In this work, we focus on unsupervised continual learning (UCL), where feature representations are learned from an unlabelled sequence of tasks, and show that reliance on annotated data is not necessary for continual learning.
The knowledge of a deep learning model may be transferred to a student model, leading to intellectual property infringement or vulnerability propagation.
The core of the attack is a neural conditional branch constructed with a trigger detector and several operators and injected into the victim model as a malicious payload.
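The mechanism described above can be sketched as a wrapper around the victim model's forward pass: a trigger detector routes clean inputs to the original model and triggered inputs to attacker-chosen behavior. All names below (`trigger_detector`, the cosine-similarity check, `backdoored_forward`) are hypothetical illustrations of the general idea, not the attack's actual construction.

```python
import numpy as np

def trigger_detector(x, trigger_pattern, threshold=0.9):
    # Hypothetical detector: fires when the input closely matches a trigger pattern
    # (cosine similarity above a threshold).
    similarity = np.dot(x, trigger_pattern) / (
        np.linalg.norm(x) * np.linalg.norm(trigger_pattern) + 1e-12
    )
    return similarity > threshold

def backdoored_forward(x, victim_forward, trigger_pattern, payload_output):
    # The injected conditional branch: normal behavior on clean inputs,
    # attacker-chosen output when the trigger is present.
    if trigger_detector(x, trigger_pattern):
        return payload_output
    return victim_forward(x)
```

On clean inputs the wrapper is functionally indistinguishable from the victim model, which is what makes such payloads hard to notice through accuracy testing alone.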
To facilitate future research, we have publicly released all the well-labelled COVID-19 themed apps (and malware) to the research community.
Cryptography and Security