Efficient privacy-preserving inference for convolutional neural networks

15 Oct 2021 · Han Xuanyuan, Francisco Vargas, Stephen Cummins

The processing of sensitive user data with deep learning models is an area that has recently gained traction. Existing work has leveraged homomorphic encryption (HE) schemes to enable computation directly on encrypted data; an early example is CryptoNets, which requires roughly 250 seconds for a single MNIST inference. The main limitation of such approaches is the cost of the expensive FFT-like operations needed to compute on HE-encrypted ciphertexts. Others have proposed model pruning and efficient data representations to reduce the number of HE operations required. We improve upon existing work by proposing changes to the representation of intermediate tensors during CNN inference. We construct and evaluate private CNNs on the MNIST and CIFAR-10 datasets, and achieve more than a two-fold reduction in the number of operations used for inference with the CryptoNets architecture.
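To give a flavour of the kind of HE computation the abstract refers to, the following is a minimal sketch of homomorphic CNN-style inference on a single encrypted vector: a ciphertext-plaintext dot product followed by the squared activation used by CryptoNets-style networks. It assumes the TenSEAL library (a CKKS wrapper over Microsoft SEAL) purely for illustration; the paper's own tensor representations, library, and parameter choices are not reproduced here.

```python
# Illustrative sketch only: one encrypted neuron (dot product + square activation),
# the HE-friendly pattern used by CryptoNets-style networks. TenSEAL/CKKS is an
# assumption for demonstration, not the paper's implementation.
import tenseal as ts

# CKKS context with commonly used toy parameters (assumption, not the paper's).
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotation keys needed for the dot product

# Client side: encrypt an input vector (e.g. a flattened image patch).
plain_input = [0.1, 0.2, 0.3, 0.4]
enc_input = ts.ckks_vector(context, plain_input)

# Server side: evaluate one neuron homomorphically on the ciphertext.
weights = [0.5, -1.0, 2.0, 0.25]
enc_pre_activation = enc_input.dot(weights)               # ciphertext-plaintext dot product
enc_activated = enc_pre_activation * enc_pre_activation   # square activation (polynomial, HE-friendly)

# Client side: decrypt the result.
print(enc_activated.decrypt())
```

Each such homomorphic multiplication and rotation is expensive, which is why reducing the number of HE operations per inference, as the paper proposes via changed intermediate-tensor representations, directly reduces latency.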
