1 code implementation • 5 Jun 2019 • Frederik Harder, Matthias Bauer, Mijung Park
Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.
1 code implementation • 15 Oct 2019 • Frederik Harder, Jonas Köhler, Max Welling, Mijung Park
Developing a differentially private deep learning algorithm is challenging, due to the difficulty in analyzing the sensitivity of objective functions that are typically used to train deep neural networks.
1 code implementation • 26 Feb 2020 • Frederik Harder, Kamil Adamczewski, Mijung Park
We propose a differentially private data generation paradigm using random feature representations of kernel mean embeddings when comparing the distribution of true data with that of synthetic data.
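The paper compares distributions through kernel mean embeddings built from random features. As a rough illustration of that idea (not the paper's own DP-MERF implementation, and without the privatizing noise the method adds to the data embedding), random Fourier features can approximate a Gaussian-kernel mean embedding, and the distance between the embeddings of two samples approximates their MMD:

```python
import numpy as np

def random_fourier_features(x, freqs):
    """Map data to random Fourier features approximating a Gaussian kernel."""
    proj = x @ freqs                                     # (n, D)
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(freqs.shape[1])

rng = np.random.default_rng(0)
d, D = 2, 500
freqs = rng.normal(size=(d, D))        # frequencies for a unit-bandwidth Gaussian kernel

true_data = rng.normal(size=(1000, d))
synth_data = rng.normal(loc=0.1, size=(1000, d))

# Kernel mean embedding = average feature vector of a dataset; the squared
# distance between the two embeddings approximates the MMD between distributions.
mu_true = random_fourier_features(true_data, freqs).mean(axis=0)
mu_synth = random_fourier_features(synth_data, freqs).mean(axis=0)
mmd_sq = np.sum((mu_true - mu_synth) ** 2)
```

In the differentially private setting, noise would be added once to `mu_true`, after which a generator can be trained against the fixed noisy embedding without further privacy cost.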
no code implementations • 26 Oct 2020 • Kamil Adamczewski, Frederik Harder, Mijung Park
We introduce a simple and intuitive framework that provides quantitative explanations of statistical models through the probabilistic assessment of input feature importance.
1 code implementation • 9 Jun 2021 • Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, Mijung Park
A relatively low order of Hermite polynomial features can more accurately approximate the mean embedding of the data distribution compared to a significantly higher number of random features.
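As a minimal sketch of the Hermite-feature idea (a generic illustration, not the paper's DP-HP method, which additionally weights the polynomials according to the Gaussian kernel's eigendecomposition), probabilists' Hermite polynomials can be generated with their three-term recurrence and averaged into a low-dimensional mean embedding:

```python
import numpy as np

def hermite_features(x, order):
    """Probabilists' Hermite polynomials He_0..He_order evaluated at 1-D data x."""
    feats = [np.ones_like(x), x]
    for n in range(1, order):
        # Three-term recurrence: He_{n+1}(x) = x * He_n(x) - n * He_{n-1}(x)
        feats.append(x * feats[-1] - n * feats[-2])
    return np.stack(feats[: order + 1], axis=1)          # (n_samples, order + 1)

rng = np.random.default_rng(0)
x_true = rng.normal(size=2000)
x_synth = rng.normal(loc=0.2, size=2000)

# Mean embeddings under a low-order deterministic feature map; their distance
# serves as a discrepancy measure between the two samples.
order = 5
mu_true = hermite_features(x_true, order).mean(axis=0)
mu_synth = hermite_features(x_synth, order).mean(axis=0)
disc = np.linalg.norm(mu_true - mu_synth)
```

Because the feature map is deterministic and low-dimensional, the embedding to be privatized is small, which is the practical appeal over using many random features.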