Semantics-Preserved Distortion for Personal Privacy Protection in Information Management

4 Jan 2022  ·  Jiajia Li, Letian Peng, Ping Wang, Zuchao Li, Xueyi Li, Hai Zhao

Although machine learning, and especially deep learning, methods have played an important role in the field of information management, privacy protection remains a pressing concern for current machine learning models. In the information management field, users produce large volumes of text containing personal information every day. Because model training on user data is likely to invade personal privacy, many methods have been proposed to block the learning and memorization of sensitive data in raw texts. In this paper, we take a more linguistic approach: we distort the text while preserving its semantics. In practice, we leverage our recently proposed metric, Neighboring Distribution Divergence (NDD), to evaluate semantic preservation during distortion. Based on this metric, we propose two frameworks for semantics-preserved distortion: a generative one and a substitutive one. We conduct experiments on named entity recognition, constituency parsing, and machine reading comprehension tasks. The results show the plausibility and efficiency of our distortion as a method for personal privacy protection. Moreover, we evaluate attribute attacks on three privacy-related tasks in current natural language processing, and the results show the simplicity and effectiveness of our data-based improvement approach compared to structural improvement approaches. Further, we investigate the effects of privacy protection in medical information management and show that a medical pre-trained model using our approach effectively reduces the memorization of patients and symptoms, which demonstrates the practicality of our approach.
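As a rough illustration of the idea behind the NDD metric, a semantics-preservation score can be framed as the average divergence between a language model's predicted distributions at the positions neighboring a distorted span, before versus after the distortion. The sketch below is a toy version under that reading of the abstract: it uses hand-written distributions over a tiny vocabulary rather than real masked-LM outputs, and the function names and the choice of KL divergence are illustrative assumptions, not the paper's exact formulation.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) between two aligned discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def neighboring_distribution_divergence(dists_before, dists_after):
    # Toy NDD: average divergence of the model's predicted distributions
    # at each neighboring position, before vs. after the distortion.
    # A score near 0 suggests the distortion preserved the semantics.
    assert len(dists_before) == len(dists_after)
    total = sum(kl_divergence(p, q) for p, q in zip(dists_before, dists_after))
    return total / len(dists_before)

# Hypothetical predicted distributions (3-word vocabulary, 2 neighbor positions).
before = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
identical = before                              # distortion changed nothing semantically
shifted = [[0.1, 0.2, 0.7], [0.2, 0.3, 0.5]]    # distortion disturbed the neighbors

print(neighboring_distribution_divergence(before, identical))  # 0.0
print(neighboring_distribution_divergence(before, shifted))    # > 0, semantics disturbed
```

In a substitutive framework, such a score would let a system rank candidate replacements for a sensitive span and keep only those whose neighboring distributions stay close to the original's.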
