Towards Sharper Utility Bounds for Differentially Private Pairwise Learning

Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and are therefore naturally suited to modeling relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. Beyond the privacy guarantees, we also analyze the excess population risk and give corresponding bounds, both in expectation and with high probability. We use the \textit{on-average stability} and \textit{pairwise locally elastic stability} theories to analyze the expectation bound and the high-probability bound, respectively. Moreover, our utility bounds do not require convex pairwise loss functions, so our method applies to both convex and non-convex settings. Even under these weaker assumptions, the resulting utility bounds are similar to (or better than) previous bounds derived under convexity or strong convexity assumptions, which makes them attractive results.
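To make the gradient-perturbation idea concrete, below is a minimal sketch of differentially private SGD over sampled instance pairs, under illustrative assumptions not taken from the paper: a pairwise hinge loss, per-step gradient clipping at `clip`, and Gaussian noise with a placeholder scale `sigma`. This is not the paper's algorithm; in particular, calibrating `sigma` to a target (epsilon, delta) budget is precisely what a privacy analysis like the paper's would govern, and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_grad(w, x_i, y_i, x_j, y_j):
    """Subgradient of the pairwise hinge loss
    max(0, 1 - (y_i - y_j) * w @ (x_i - x_j)) with respect to w."""
    d = x_i - x_j
    margin = (y_i - y_j) * (w @ d)
    if margin < 1.0:
        return -(y_i - y_j) * d
    return np.zeros_like(w)

def dp_pairwise_sgd(X, y, epochs=5, lr=0.1, clip=1.0, sigma=2.0):
    """Gradient-perturbed SGD for a pairwise loss (illustrative sketch).

    Per step: sample a pair of training instances, compute the pairwise
    (sub)gradient, clip its l2 norm to `clip`, add Gaussian noise of
    standard deviation sigma * clip, and take a gradient step.
    `sigma` is a placeholder, NOT calibrated to any privacy budget.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs * n):
        i, j = rng.choice(n, size=2, replace=False)
        g = pairwise_grad(w, X[i], y[i], X[j], y[j])
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)           # clip gradient sensitivity
        g = g + rng.normal(0.0, sigma * clip, size=p)  # Gaussian perturbation
        w = w - lr * g
    return w

# Toy usage on synthetic binary-labelled data
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5))
w_priv = dp_pairwise_sgd(X, y)
```

The clipping step is what bounds the per-pair gradient sensitivity, so the added Gaussian noise can be scaled to it; the utility question the paper studies is how much excess population risk this noise costs.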
