Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, which may leak private information through the chosen hyperparameter values.
In recent years, local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in scenarios where the aggregator is not trusted.
no code implementations • 3 Nov 2020 • Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin Riedmiller
Furthermore, the value of each representation is evaluated in terms of three properties: dimensionality, observability and disentanglement.
Generalized linear models (GLMs) such as logistic regression are among the most widely used tools in a data analyst's repertoire and are often applied to sensitive datasets.
In this paper, we study the problem of computing $U$-statistics of degree $2$, i.e., quantities that come in the form of averages over pairs of data points, in the local model of differential privacy (LDP).
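The abstract above defines a degree-2 $U$-statistic as an average of a kernel over all pairs of data points. As a minimal non-private sketch of that definition (the function names here are illustrative, not from the paper), the sample variance can be recovered as a $U$-statistic with kernel $(x - y)^2 / 2$:

```python
import itertools
import statistics

def u_statistic_deg2(data, kernel):
    """Average of a symmetric kernel over all unordered pairs of data points."""
    pairs = itertools.combinations(data, 2)
    return statistics.mean(kernel(x, y) for x, y in pairs)

# Example: the unbiased sample variance is a degree-2 U-statistic
# with kernel (x - y)^2 / 2.
data = [1.0, 2.0, 3.0, 4.0]
var = u_statistic_deg2(data, lambda x, y: (x - y) ** 2 / 2)  # 5/3
```

The LDP setting studied in the paper is harder: each pair spans two users' private inputs, so the aggregator cannot evaluate the kernel directly and must work from locally randomized reports instead.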
We investigate using reinforcement learning agents as generative models of images (extending arXiv:1804.01118).
In this work we aim to learn object representations that are useful for control and reinforcement learning (RL).
Exploration in environments with sparse rewards is a key challenge for reinforcement learning.
In this work, we study the setting in which an agent must learn to generate programs for diverse scenes conditioned on a given symbolic instruction.
Learning to control an environment without hand-crafted rewards or expert data remains challenging and is at the frontier of reinforcement learning research.
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.
We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations.
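The baselines named above are standard count-based text representations. A minimal sketch of both (illustrative helper names, not the paper's code):

```python
from collections import Counter

def bag_of_words(tokens):
    """Unordered word counts: discards all token order."""
    return Counter(tokens)

def bag_of_bigrams(tokens):
    """Counts of adjacent token pairs: keeps local order only."""
    return Counter(zip(tokens, tokens[1:]))

state = "go to the red door".split()
# bag_of_words(state) counts each word once here;
# bag_of_bigrams(state) counts pairs like ("red", "door").
```

Both throw away long-range structure, which is why factorized symbolic representations can outperform them as state encodings.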