GINN: Fast GPU-TEE Based Integrity for Neural Network Training

1 Jan 2021  ·  Aref Asvadishirehjini, Murat Kantarcioglu, Bradley A. Malin

Machine learning models based on Deep Neural Networks (DNNs) are increasingly deployed in a wide range of applications, from self-driving cars to COVID-19 diagnostics. The computational power needed to train a DNN is non-trivial, so cloud environments with dedicated hardware support have emerged as important infrastructure. However, outsourcing computation to the cloud raises security, privacy, and integrity challenges. To address these challenges, previous work has leveraged homomorphic encryption, secure multi-party computation, and trusted execution environments (TEEs). Yet none of these approaches scales to realistic DNN training workloads with deep architectures and millions of training examples without sustaining a significant performance hit. In this work, we focus on the setting where the integrity of outsourced Deep Learning (DL) model training is ensured by a TEE. We choose the TEE-based approach because it has been shown to be more efficient than purely cryptographic solutions and because TEEs are readily available in cloud environments. To mitigate the loss in performance, we combine random verification of selected computation steps with careful adjustments of the DNN used for training. Our experimental results show that the proposed approach can achieve a 2X to 20X performance improvement over the pure TEE-based solution while guaranteeing the integrity of the computation with high probability (e.g., 0.999) against state-of-the-art DNN backdoor attacks.
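The probabilistic guarantee behind random verification can be illustrated with a simple calculation. The sketch below assumes a simplified model (not necessarily the paper's exact scheme): the verifier re-executes each outsourced training step inside the TEE independently with probability `p_check`, and an attack succeeds only if none of the attacker's corrupted steps is sampled. The function names and the figure of 100 corrupted steps are illustrative assumptions.

```python
def detection_probability(p_check: float, corrupted_steps: int) -> float:
    """Probability at least one corrupted step is caught, assuming each
    step is independently verified with probability p_check."""
    # The attacker evades only if every corrupted step escapes sampling.
    return 1.0 - (1.0 - p_check) ** corrupted_steps


def min_check_rate(target: float, corrupted_steps: int) -> float:
    """Smallest per-step verification rate that achieves the target
    detection probability against an attacker who must corrupt at
    least `corrupted_steps` steps."""
    return 1.0 - (1.0 - target) ** (1.0 / corrupted_steps)


# Illustrative numbers: if a backdoor attack must tamper with at least
# 100 training steps, verifying under 7% of steps already yields the
# 0.999 detection probability cited in the abstract.
p = min_check_rate(target=0.999, corrupted_steps=100)
print(f"required check rate: {p:.4f}")          # roughly 0.067
print(f"detection prob: {detection_probability(p, 100):.4f}")
```

This is why sampling-based verification is so much cheaper than full TEE execution: the verification cost scales with the check rate, not with the total number of training steps.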
