Distributed Layer-Partitioned Training for Privacy-Preserved Deep Learning

Deep learning techniques have achieved remarkable results in many domains. Training deep learning models often requires large datasets, which may contain sensitive information that must be uploaded to the cloud to accelerate training. To adequately protect this sensitive information, we propose distributed layer-partitioned training with step-wise activation functions for privacy-preserving deep learning. Experimental results show that our method is simple and effective.
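The core idea named in the abstract can be illustrated with a minimal NumPy sketch: the network is partitioned so that the first layers (and the raw inputs) stay on the client, a step-wise activation binarizes what crosses the partition boundary, and the remaining layers run on the server. All layer sizes and the `client_forward`/`server_forward` split below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Client-side partition: holds the raw data and the first layer ---
W1 = rng.normal(size=(8, 4))  # client-held weights (illustrative size)

def step(z):
    # Step-wise activation at the partition boundary: activations are
    # binarized, so only discrete 0/1 codes leave the client, never the
    # raw (sensitive) features.
    return (z > 0.0).astype(np.float64)

def client_forward(x):
    return step(x @ W1)

# --- Server-side partition: sees only the binarized codes ---
W2 = rng.normal(size=(4, 3))  # server-held weights (illustrative size)

def server_forward(h):
    logits = h @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over classes

x = rng.normal(size=(2, 8))   # sensitive inputs never leave the client
h = client_forward(x)         # only 0/1 codes cross the boundary
probs = server_forward(h)     # server completes the forward pass
```

The step activation is what distinguishes this from plain split learning: because the transmitted representation is quantized to two levels, the server cannot directly invert it back to the continuous input features.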
