In this paper, we introduce theoretically motivated measures that quantify information leakage in both an attack-dependent and an attack-independent manner.
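The paper's own measures are not reproduced here; as a minimal sketch of what an attack-dependent leakage measure can look like, the following computes the membership-inference advantage (true-positive rate minus false-positive rate) of a simple threshold attack. The scores, threshold, and function name are illustrative assumptions, not the measures introduced above.

```python
import numpy as np

def membership_advantage(member_scores, nonmember_scores, threshold):
    # Attack-dependent leakage measure: the advantage (TPR - FPR) of a
    # threshold attack that labels an example as a training member when
    # its score exceeds the threshold. An advantage near 0 means little
    # leakage; near 1 means membership is easy to infer.
    tpr = float(np.mean(member_scores >= threshold))
    fpr = float(np.mean(nonmember_scores >= threshold))
    return tpr - fpr

# Illustrative synthetic scores (e.g., per-example model confidences).
rng = np.random.default_rng(0)
member_scores = rng.normal(loc=1.0, scale=1.0, size=1000)
nonmember_scores = rng.normal(loc=0.0, scale=1.0, size=1000)
print(membership_advantage(member_scores, nonmember_scores, threshold=0.5))
```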
We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for mobile systems to limit privacy leakage in federated learning.
Training deep neural networks via federated learning allows clients to share only the model trained on their data, rather than the original data itself.
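As a minimal sketch of this idea, the following implements a federated-averaging (FedAvg-style) loop for a linear least-squares model in NumPy; the function names, learning rate, and synthetic client data are assumptions for illustration, not the framework described above.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # Client-side training on private (X, y): plain gradient descent on a
    # linear least-squares model. The raw data never leaves the client.
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg_round(w_global, clients):
    # Server-side step: only model weights cross the network; the server
    # averages them, weighted by each client's local dataset size.
    n_total = sum(len(y) for _, y in clients)
    return sum(local_update(w_global.copy(), X, y) * (len(y) / n_total)
               for X, y in clients)

def make_client(rng, w_true, n=50):
    # Synthetic private dataset drawn from a shared ground-truth model.
    X = rng.normal(size=(n, 3))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
clients = [make_client(rng, w_true) for _ in range(4)]

w = np.zeros(3)
for _ in range(20):  # communication rounds
    w = fed_avg_round(w, clients)
```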
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) in conjunction with model partitioning to limit the attack surface against Deep Neural Networks (DNNs).
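DarkneTZ itself targets an Arm TrustZone TEE (implemented in C against OP-TEE) rather than Python; the sketch below only illustrates the model-partitioning idea, with the TEE boundary simulated by an ordinary function call and all names and dimensions chosen for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward_partitioned(x, ree_layers, tee_layers):
    # Layers before the partition point run in the normal world (REE),
    # where an attacker may observe activations and parameters.
    for W, b in ree_layers:
        x = relu(x @ W + b)
    # Layers after the partition point would run inside the TEE in a
    # DarkneTZ-style design; here the boundary is only simulated, so
    # these layers' activations and weights stay hidden from the REE.
    for W, b in tee_layers[:-1]:
        x = relu(x @ W + b)
    W, b = tee_layers[-1]
    return x @ W + b  # only the final output leaves the trusted side

# Illustrative random 3-layer network, partitioned after the first layer.
rng = np.random.default_rng(0)
dims = [8, 16, 16, 4]
layers = [(rng.normal(size=(d_in, d_out)), np.zeros(d_out))
          for d_in, d_out in zip(dims, dims[1:])]
split = 1
logits = forward_partitioned(rng.normal(size=8), layers[:split], layers[split:])
```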
Pre-trained Deep Neural Network (DNN) models are increasingly deployed on smartphones and other user devices to enable prediction services, potentially disclosing (sensitive) information about the training data captured inside these models.
Moreover, in most cases, ASR systems are used to handle real-time problems such as keyword spotting (KWS).