1 code implementation • 16 Apr 2022 • Daniel Bernau, Jonas Robl, Florian Kerschbaum
We present an approach to quantify and compare the privacy-accuracy trade-off for differentially private Variational Autoencoders.
1 code implementation • 4 Mar 2021 • Dominik Wunderlich, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau, Thorsten Strufe
This work investigates the privacy-utility trade-off in hierarchical text classification with differential privacy guarantees, and identifies neural network architectures that offer superior trade-offs.
2 code implementations • 4 Mar 2021 • Daniel Bernau, Günther Eibl, Philip W. Grassal, Hannah Keller, Florian Kerschbaum
We transform the $(\epsilon,\delta)$ differential privacy guarantee into a bound on the Bayesian posterior belief that the adversary assumed by differential privacy can hold about the presence of any record in the training dataset.
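For intuition, the pure-$\epsilon$ special case ($\delta = 0$) of such a transform is the standard result that, under a uniform $0.5$ prior, an $\epsilon$-DP mechanism caps the adversary's posterior belief at $e^\epsilon / (1 + e^\epsilon)$. The sketch below implements only this simplified bound; the paper's bound additionally accounts for $\delta$.

```python
import math

def posterior_belief_bound(epsilon: float) -> float:
    # Upper bound on the adversary's posterior belief that a given record
    # is present in the training data, under pure epsilon-DP and a uniform
    # 0.5 prior: e^eps / (1 + e^eps). This is the delta = 0 simplification;
    # the paper's bound also incorporates delta.
    return math.exp(epsilon) / (1.0 + math.exp(epsilon))

print(posterior_belief_bound(0.0))  # eps = 0 reveals nothing: bound stays at the 0.5 prior
print(posterior_belief_bound(1.0))  # ~0.731: modest epsilon already shifts the belief bound
```

Note the bound is monotone in $\epsilon$: a larger privacy budget permits a larger posterior shift away from the prior.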
1 code implementation • 24 Dec 2019 • Daniel Bernau, Philip-William Grassal, Jonas Robl, Florian Kerschbaum
We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs.
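A common black-box membership inference baseline in this setting is a loss-threshold attack: records on which the trained model achieves unusually low loss are guessed to be training members. The sketch below is a minimal, generic version of that idea (not the paper's exact attack); the losses shown are hypothetical.

```python
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    # Black-box membership guess: predict "member" for every record on
    # which the target model's loss falls below the threshold.
    return losses < threshold

# Hypothetical model losses: members typically incur lower loss than non-members.
member_losses = np.array([0.1, 0.2, 0.15])
nonmember_losses = np.array([0.9, 1.2, 0.8])
guesses = loss_threshold_attack(
    np.concatenate([member_losses, nonmember_losses]), threshold=0.5
)
print(guesses)  # first three True (members), last three False (non-members)
```

Comparing how often such guesses succeed with and without a DP mechanism in place is one way to make the privacy half of the privacy-accuracy trade-off measurable.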
1 code implementation • 7 Jun 2019 • Benjamin Hilprecht, Martin Härterich, Daniel Bernau
We present two information leakage attacks that outperform previous work on membership inference against generative models.
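One family of such attacks scores a candidate record by Monte Carlo estimation: draw many samples from the generative model and measure what fraction fall within an $\epsilon$-ball of the record, on the premise that an overfit generator concentrates mass near its training members. The sketch below illustrates that scoring idea under assumed Gaussian stand-in data, not the paper's full attack pipeline.

```python
import numpy as np

def mc_membership_score(record: np.ndarray, samples: np.ndarray, eps: float) -> float:
    # Fraction of generator samples within an eps-ball of the candidate
    # record; a higher score suggests the generator has concentrated mass
    # near the record, hinting it was a training member.
    dists = np.linalg.norm(samples - record, axis=1)
    return float(np.mean(dists < eps))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(1000, 2))  # stand-in for generator output
near = np.zeros(2)       # candidate sitting inside the sample mass
far = np.full(2, 10.0)   # candidate far from the sample mass
print(mc_membership_score(near, samples, 0.5) > mc_membership_score(far, samples, 0.5))
```

The choice of distance metric and $\epsilon$ matters in practice; in high-dimensional data spaces, distances are often computed in a learned feature space rather than raw pixel space.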