Search Results for author: Mijung Park

Found 25 papers, 9 papers with code

Differentially Private Data Generation Needs Better Features

no code implementations25 May 2022 Frederik Harder, Milad Jalali Asadabadi, Danica J. Sutherland, Mijung Park

Training even moderately-sized generative models with differentially private stochastic gradient descent (DP-SGD) is difficult: the noise required for reasonable privacy guarantees is simply too large.

Transfer Learning
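To make the DP-SGD difficulty concrete, here is a minimal sketch of a single DP-SGD step (per-example gradient clipping followed by Gaussian noise). This is an illustration of the general mechanism, not the paper's implementation; the clip norm and noise multiplier are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update: clip each example's gradient, sum, add Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_sample_grads]
    summed = np.sum(clipped, axis=0)
    # Noise scale grows with the clip norm; for strong privacy it can swamp the signal.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_sample_grads)

grads = rng.normal(size=(32, 10))   # a batch of 32 per-example gradients
update = dp_sgd_step(grads)
```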

Differentially private stochastic expectation propagation (DP-SEP)

no code implementations25 Nov 2021 Margarita Vinaroz, Mijung Park

We provide a theoretical analysis of the privacy-accuracy trade-off in the posterior estimates under our method, called differentially private stochastic expectation propagation (DP-SEP).

Variational Inference

Hermite Polynomial Features for Private Data Generation

1 code implementation9 Jun 2021 Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, Mijung Park

Hence, a relatively low order of Hermite polynomial features can approximate the mean embedding of the data distribution more accurately than a significantly larger number of random features.
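A simplified sketch of the idea: evaluate probabilists' Hermite polynomials up to a low order at the data points and average them to get an empirical mean embedding under the truncated feature map. The paper's construction (and its privatisation) is more involved; this only shows the feature computation.

```python
import numpy as np

def hermite_features(x, order):
    """Probabilists' Hermite polynomials He_0..He_order via the three-term recurrence."""
    feats = [np.ones_like(x), x]
    for k in range(1, order):
        feats.append(x * feats[k] - k * feats[k - 1])   # He_{k+1} = x He_k - k He_{k-1}
    return np.stack(feats[:order + 1], axis=-1)

rng = np.random.default_rng(0)
data = rng.normal(size=500)
# Empirical mean embedding of the data under the truncated Hermite feature map.
mean_embedding = hermite_features(data, order=5).mean(axis=0)
```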

Dirichlet Pruning for Neural Network Compression

1 code implementation10 Nov 2020 Kamil Adamczewski, Mijung Park

We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one.

Neural Network Compression Variational Inference

Bayesian Importance of Features (BIF)

no code implementations26 Oct 2020 Kamil Adamczewski, Frederik Harder, Mijung Park

We introduce a simple and intuitive framework that provides quantitative explanations of statistical models through the probabilistic assessment of input feature importance.

Bayesian Inference BIG-bench Machine Learning +2

DP-MERF: Differentially Private Mean Embeddings with Random Features for Practical Privacy-Preserving Data Generation

1 code implementation26 Feb 2020 Frederik Harder, Kamil Adamczewski, Mijung Park

We propose a differentially private data generation paradigm using random feature representations of kernel mean embeddings when comparing the distribution of true data with that of synthetic data.

Privacy Preserving Synthetic Data Generation
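A minimal sketch of the DP-MERF-style pipeline, assuming random Fourier features for a Gaussian kernel: the true-data mean embedding is perturbed once with Gaussian noise (its sensitivity is bounded because each feature vector has norm at most sqrt(2)), and synthetic data is then matched against it. The noise scale here is an arbitrary placeholder, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_feats = 2, 100
# Random Fourier features approximating a Gaussian kernel.
W = rng.normal(size=(d, n_feats))
b = rng.uniform(0, 2 * np.pi, size=n_feats)

def phi(X):
    return np.sqrt(2.0 / n_feats) * np.cos(X @ W + b)

true_data = rng.normal(size=(1000, d))
# Each phi(x) has l2 norm <= sqrt(2), so the mean embedding has bounded sensitivity;
# it is privatised once and can then be reused for every generator update.
sensitivity = 2.0 * np.sqrt(2.0) / len(true_data)
noisy_embedding = phi(true_data).mean(axis=0) + rng.normal(0, 5.0 * sensitivity, size=n_feats)

synthetic = rng.normal(size=(1000, d))
loss = np.sum((noisy_embedding - phi(synthetic).mean(axis=0)) ** 2)
```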

DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning

1 code implementation15 Oct 2019 Frederik Harder, Jonas Köhler, Max Welling, Mijung Park

Developing a differentially private deep learning algorithm is challenging, due to the difficulty in analyzing the sensitivity of objective functions that are typically used to train deep neural networks.

ABCDP: Approximate Bayesian Computation with Differential Privacy

no code implementations11 Oct 2019 Mijung Park, Margarita Vinaroz, Wittawat Jitkrittum

The sparse vector technique (SVT) incurs a privacy cost only when a condition (whether a quantity of interest is above or below a threshold) is met.

Privacy Preserving
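For illustration, the standard sparse vector technique looks roughly like the following: the threshold and each query are perturbed with Laplace noise, and the mechanism stops (paying its privacy cost) at the first "above threshold" answer. The noise scales follow the usual textbook presentation, not this paper's specific ABCDP calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_vector(queries, threshold, epsilon):
    """Return the index of the first query exceeding a noisy threshold, or None.

    The 'below' answers before the first 'above' come at no extra privacy cost.
    """
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4.0 / epsilon) >= noisy_threshold:
            return i
    return None

hit = sparse_vector(queries=[0.1, 0.2, 5.0, 0.3], threshold=1.0, epsilon=1.0)
```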

Neuron ranking -- an informed way to condense convolutional neural networks architecture

no code implementations3 Jul 2019 Kamil Adamczewski, Mijung Park

Convolutional neural networks (CNNs) in recent years have made a dramatic impact in science, technology and industry, yet the theoretical mechanism of CNN architecture design remains surprisingly vague.

Science / Technology Variational Inference

Interpretable and Differentially Private Predictions

1 code implementation5 Jun 2019 Frederik Harder, Matthias Bauer, Mijung Park

Interpretable predictions, where it is clear why a machine learning model has made a particular decision, can compromise privacy by revealing the characteristics of individual data points.

General Classification

Privacy-Preserving Causal Inference via Inverse Probability Weighting

no code implementations29 May 2019 Si Kai Lee, Luigi Gresele, Mijung Park, Krikamol Muandet

The use of inverse probability weighting (IPW) methods to estimate the causal effect of treatments from observational studies is widespread in econometrics, medicine and social sciences.

Causal Inference Econometrics +1

Radial and Directional Posteriors for Bayesian Neural Networks

2 code implementations7 Feb 2019 Changyong Oh, Kamil Adamczewski, Mijung Park

We propose a new variational family for Bayesian neural networks.

A Differentially Private Kernel Two-Sample Test

1 code implementation1 Aug 2018 Anant Raj, Ho Chung Leon Law, Dino Sejdinovic, Mijung Park

As a result, a simple chi-squared test is obtained, whose test statistic depends on the mean and covariance of empirical differences between the samples, which we perturb for a privacy guarantee.

Two-sample testing
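A rough sketch of such a statistic, assuming random cosine features: form per-pair feature differences, perturb their mean (and regularise the covariance), and compare the resulting quadratic form against a chi-squared distribution. The noise and regularisation scales here are placeholders, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_differences(X, Y, W, b):
    """Random cosine features of each sample; the test uses per-pair differences."""
    f = lambda Z: np.sqrt(2.0 / W.shape[1]) * np.cos(Z @ W + b)
    return f(X) - f(Y)

d, D, n = 1, 2, 500
W = rng.normal(size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))          # same distribution, so the null should hold

diff = feature_differences(X, Y, W, b)
m = diff.mean(axis=0) + rng.normal(0, 0.01, size=D)   # privatised mean of differences
S = np.cov(diff, rowvar=False) + 0.01 * np.eye(D)     # regularised covariance
statistic = n * m @ np.linalg.solve(S, m)             # asymptotically chi-squared
p_value = np.exp(-statistic / 2.0)    # chi-squared survival function with D = 2
```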

Variational Bayes In Private Settings (VIPS)

1 code implementation1 Nov 2016 Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling

Many applications of Bayesian data analysis involve sensitive information, motivating methods which ensure that privacy is protected.

Bayesian Inference Data Augmentation +1

Private Topic Modeling

no code implementations14 Sep 2016 Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling

We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA).

Variational Inference

A note on privacy preserving iteratively reweighted least squares

no code implementations24 May 2016 Mijung Park, Max Welling

In particular, IRLS for L1 minimisation under the linear model has a closed-form solution in each step: a simple product of the inverse weighted second-moment matrix and the weighted first-moment vector.

Privacy Preserving
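The non-private version of that closed-form step can be sketched as follows: each iteration reweights by the inverse absolute residuals and solves (X^T W X)^{-1} (X^T W y). This omits the privacy mechanism entirely; it only shows why a per-step closed form makes sensitivity analysis tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

def irls_l1(X, y, n_iters=30, eps=1e-6):
    """IRLS for L1 (least absolute deviations) regression under a linear model."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)   # weights from current residuals
        XtW = X.T * w
        # Closed-form step: (X^T W X)^{-1} (X^T W y).
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta

X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.laplace(size=200)
beta_hat = irls_l1(X, y)
```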

DP-EM: Differentially Private Expectation Maximization

1 code implementation23 May 2016 Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling

The iterative nature of the expectation maximization (EM) algorithm presents a challenge for privacy-preserving estimation, as each iteration increases the amount of noise needed.

Privacy Preserving
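To see why iteration is the problem, here is a toy sketch of EM for a two-component 1-D Gaussian mixture in which each iteration perturbs the sufficient statistics and spends a slice of the total budget. The naive even budget split and Laplace noise scales are illustrative assumptions; the paper's composition analysis is what makes this practical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_em_1d(x, n_iters, total_epsilon):
    """EM for a two-component 1-D Gaussian mixture with noisy sufficient statistics."""
    eps_per_iter = total_epsilon / n_iters   # every iteration spends part of the budget
    mu = np.array([x.min(), x.max()])
    for _ in range(n_iters):
        # E-step: responsibilities under unit-variance components.
        logp = -0.5 * (x[:, None] - mu[None, :]) ** 2
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: perturb the sufficient statistics (noise scale is a placeholder).
        counts = r.sum(axis=0) + rng.laplace(scale=1.0 / eps_per_iter, size=2)
        sums = (r * x[:, None]).sum(axis=0) + rng.laplace(scale=1.0 / eps_per_iter, size=2)
        mu = sums / np.maximum(counts, 1.0)
    return mu

x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu_hat = dp_em_1d(x, n_iters=10, total_epsilon=10.0)
```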

K2-ABC: Approximate Bayesian Computation with Kernel Embeddings

no code implementations9 Feb 2015 Mijung Park, Wittawat Jitkrittum, Dino Sejdinovic

Complicated generative models often result in a situation where computing the likelihood of observed data is intractable, while simulating from the conditional density given a parameter value is relatively easy.

Sparse Bayesian structure learning with “dependent relevance determination” priors

no code implementations NeurIPS 2014 Anqi Wu, Mijung Park, Oluwasanmi O. Koyejo, Jonathan W. Pillow

Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies.

Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)

no code implementations NeurIPS 2015 Mijung Park, Wittawat Jitkrittum, Ahmad Qamar, Zoltan Szabo, Lars Buesing, Maneesh Sahani

We introduce the Locally Linear Latent Variable Model (LL-LVM), a probabilistic model for non-linear manifold discovery that describes a joint distribution over observations, their manifold coordinates and locally linear maps conditioned on a set of neighbourhood relationships.

Hierarchical models for neural population dynamics in the presence of non-stationarity

no code implementations12 Oct 2014 Mijung Park, Jakob H. Macke

Here, we introduce a hierarchical statistical model of neural population activity which models both neural population dynamics as well as inter-trial modulations in firing rates.

Variational Inference

Bayesian inference for low rank spatiotemporal neural receptive fields

no code implementations NeurIPS 2013 Mijung Park, Jonathan W. Pillow

In typical experiments with naturalistic or flickering spatiotemporal stimuli, RFs are very high-dimensional, due to the large number of coefficients needed to specify an integration profile across time and space.

Bayesian Inference

Bayesian active learning with localized priors for fast receptive field characterization

no code implementations NeurIPS 2012 Mijung Park, Jonathan W. Pillow

Active learning can substantially improve the yield of neurophysiology experiments by adaptively selecting stimuli to probe a neuron's receptive field (RF) in real time.

Active Learning

Active learning of neural response functions with Gaussian processes

no code implementations NeurIPS 2011 Mijung Park, Greg Horwitz, Jonathan W. Pillow

With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate this nonlinear combination rule.

Active Learning Experimental Design +1
