Transferring knowledge from one domain to another is of practical importance for many tasks in natural language processing, especially when the amount of available data in the target domain is limited.
Documents as short as a single sentence may inadvertently reveal sensitive information about their authors, such as their gender or ethnicity.
We propose a new method for program learning in a Domain-Specific Language (DSL) that is based on gradient descent rather than direct search.
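To make the idea concrete, the sketch below shows one common way gradient descent can replace discrete search over a DSL: relax the choice of operation at each program step into a softmax-weighted mixture, so the whole program becomes differentiable. The toy DSL, program length, and target value here are all illustrative assumptions, not the method proposed above.

```python
import torch

# Hypothetical toy DSL of unary arithmetic ops (illustrative only).
OPS = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

def soft_program(x, logits):
    # Each step applies a softmax-weighted mixture of the DSL ops instead
    # of a hard discrete choice, making the program differentiable.
    for step_logits in logits:
        weights = torch.softmax(step_logits, dim=0)
        x = sum(w * op(x) for w, op in zip(weights, OPS))
    return x

# Learn a 2-step program mapping 5 -> 12, e.g. (+1) then (*2).
logits = torch.zeros(2, len(OPS), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
x_in, y_out = torch.tensor(5.0), torch.tensor(12.0)

for _ in range(500):
    opt.zero_grad()
    loss = (soft_program(x_in, logits) - y_out) ** 2
    loss.backward()
    opt.step()

# Discretize: take the argmax op at each step to read off a program.
program = [int(step.argmax()) for step in logits]
print(program)  # ideally [0, 1], i.e. (+1) then (*2)
```

Discretizing the relaxed solution by argmax is a simplification; whether the soft optimum corresponds to a valid discrete program depends on the DSL and the loss landscape.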
Machine Learning approaches to Natural Language Processing tasks benefit from a comprehensive collection of real-life user data.
Differentially private stochastic gradient descent (DPSGD) is a variant of stochastic gradient descent built on the Differential Privacy (DP) paradigm; it mitigates the privacy threats that arise when training data contains sensitive information.
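As a minimal NumPy sketch, the snippet below shows the two steps that distinguish DPSGD from plain SGD: clipping each example's gradient to bound its influence, then adding Gaussian noise calibrated to that clipping norm. The model (logistic regression), data, and hyperparameter values (`clip_norm`, `noise_multiplier`) are illustrative assumptions; tracking the resulting (epsilon, delta) privacy budget, e.g. with a moments accountant, is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic logistic-regression task (illustrative data).
X = rng.normal(size=(256, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) > 0).astype(float)

w = np.zeros(5)
lr, clip_norm, noise_multiplier, batch_size = 0.1, 1.0, 1.1, 32

for step in range(200):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    # Per-example gradients of the logistic loss.
    p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
    per_example_grads = (p - y[idx])[:, None] * X[idx]
    # 1) Clip each example's gradient to bound its sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # 2) Add Gaussian noise scaled to the clipping norm, then average.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
    grad = (clipped.sum(axis=0) + noise) / batch_size
    w -= lr * grad
```

In practice one would use a maintained implementation (e.g. a DP optimizer library) rather than hand-rolling these steps, since correct privacy accounting is subtle.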