Search Results for author: Alireza Makhzani

Found 12 papers, 9 papers with code

Variational Model Inversion Attacks

1 code implementation NeurIPS 2021 Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani

In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy.
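A minimal sketch of what such a variational objective could look like, not the authors' exact formulation: a variational distribution over latents is trained so the target classifier assigns high likelihood to a chosen class (accuracy), while a KL term to the latent prior keeps samples spread out (diversity). `generator`, `target_model`, and the Gaussian parametrization are assumptions for illustration.

```python
# Hedged sketch of a variational model-inversion objective (illustrative,
# not the paper's exact method). `generator` and `target_model` are
# hypothetical pretrained networks.
import torch
import torch.nn.functional as F

def vmi_step(q_mu, q_logvar, generator, target_model, y, beta=1.0):
    # Reparameterized sample from the variational distribution q(z) = N(mu, sigma^2)
    std = torch.exp(0.5 * q_logvar)
    z = q_mu + std * torch.randn_like(std)
    x = generator(z)                            # candidate private-looking image
    logits = target_model(x)
    accuracy_term = F.cross_entropy(logits, y)  # accuracy: match the target class y
    # KL(q(z) || N(0, I)) keeps q broad, encouraging diverse reconstructions
    kl = -0.5 * torch.sum(1 + q_logvar - q_mu.pow(2) - q_logvar.exp(), dim=1).mean()
    return accuracy_term + beta * kl
```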

Improving Mutual Information Estimation with Annealed and Energy-Based Bounds

no code implementations ICLR 2022 Rob Brekelmans, Sicong Huang, Marzyeh Ghassemi, Greg Ver Steeg, Roger Baker Grosse, Alireza Makhzani

Since naive importance sampling with the marginal density as a proposal requires exponential sample complexity in the true mutual information, we propose novel Multi-Sample Annealed Importance Sampling (AIS) bounds on mutual information.
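A brief sketch of the sample-complexity issue, using the standard multi-sample bound result (my framing, not quoted from the paper): with the marginal as the proposal, critic-free multi-sample lower bounds on mutual information cannot exceed $\log K$, so the number of samples must grow exponentially in the true MI.

```latex
% With (x, z_1) ~ p(x, z) and negatives z_2, ..., z_K ~ p(z), the
% multi-sample lower bound with the optimal (density-ratio) critic is
\[
I(x; z) \;\ge\;
\mathbb{E}\!\left[\log \frac{p(x \mid z_1)}
{\frac{1}{K}\sum_{k=1}^{K} p(x \mid z_k)}\right]
\;\le\; \log K,
\]
% so the bound saturates at \log K and roughly K ~ e^{I(x;z)} samples
% are needed before it can approach the true mutual information.
```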

Mutual Information Estimation

Compressing Multisets with Large Alphabets

1 code implementation 15 Jul 2021 Daniel Severo, James Townsend, Ashish Khisti, Alireza Makhzani, Karen Ullrich

Current methods that optimally compress multisets are not suitable for high-dimensional symbols, as their compute time scales linearly with alphabet size.
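For context, the savings a multiset code targets are just the order information a sequence code wastes: a multiset of size $n$ with multiplicities $m_i$ has $n!/\prod_i m_i!$ indistinguishable orderings. The snippet below computes that quantity in bits; it illustrates the accounting only, not the authors' algorithm.

```python
# Hedged sketch: bits spent on symbol order by a sequence code, which a
# multiset code (e.g. via bits-back coding) can in principle recover.
from collections import Counter
from math import lgamma, log

def order_bits(multiset):
    counts = Counter(multiset)
    n = sum(counts.values())
    # log( n! / prod(m_i!) ), computed stably via log-gamma
    log_perms = lgamma(n + 1) - sum(lgamma(m + 1) for m in counts.values())
    return log_perms / log(2)   # convert nats -> bits

print(order_bits("abracadabra"))  # order information in this 11-symbol multiset
```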

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

1 code implementation ICLR 2021 Workshop on Neural Compression Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison

Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.
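As a sketch of the codelength accounting (standard bits-back facts, with the multi-sample part paraphrased from the abstract): plain bits-back achieves the negative ELBO in expectation, while Monte Carlo variants target tighter multi-sample bounds at the price of extra initial bits.

```latex
% Standard bits-back coding attains, in expectation, the negative ELBO:
\[
\mathbb{E}_{q(z \mid x)}\big[-\log p(x, z) + \log q(z \mid x)\big]
\;=\; -\mathrm{ELBO}(x) \;\ge\; -\log p(x).
\]
% Monte Carlo variants instead target tighter multi-sample (IWAE-style) bounds,
\[
-\,\mathbb{E}\!\left[\log \frac{1}{K}\sum_{k=1}^{K}
\frac{p(x, z_k)}{q(z_k \mid x)}\right],
\]
% which naively costs more initial bits unless the latent samples are coupled.
```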

Data Compression

Likelihood Ratio Exponential Families

no code implementations NeurIPS 2020 Workshop (DL-IG) Rob Brekelmans, Frank Nielsen, Alireza Makhzani, Aram Galstyan, Greg Ver Steeg

The exponential family is well known in machine learning and statistical physics as the maximum entropy distribution subject to a set of observed constraints, while the geometric mixture path is common in MCMC methods such as annealed importance sampling.
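The link between the two, which the title refers to, can be stated in one line: the geometric mixture path is itself a one-parameter exponential family whose sufficient statistic is the log likelihood ratio.

```latex
% Geometric mixture path between a proposal \pi_0 and a target \pi_1:
\[
\pi_\beta(z) \;\propto\; \pi_0(z)^{1-\beta}\,\pi_1(z)^{\beta},
\qquad \beta \in [0, 1].
\]
% Rewritten as an exponential family with base measure \pi_0 and
% sufficient statistic the log likelihood ratio:
\[
\pi_\beta(z) \;=\; \pi_0(z)\,
\exp\!\big(\beta\,T(z) - \log Z(\beta)\big),
\qquad T(z) = \log \frac{\pi_1(z)}{\pi_0(z)}.
\]
```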

Evaluating Lossy Compression Rates of Deep Generative Models

2 code implementations ICML 2020 Sicong Huang, Alireza Makhzani, Yanshuai Cao, Roger Grosse

The field of deep generative modeling has succeeded in producing astonishingly realistic-seeming images and audio, but quantitative evaluation remains a challenge.

Implicit Autoencoders

no code implementations ICLR 2019 Alireza Makhzani

In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions.

Image-to-Image Translation · Translation

PixelGAN Autoencoders

1 code implementation NeurIPS 2017 Alireza Makhzani, Brendan Frey

In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code.

Unsupervised Image Classification · Unsupervised MNIST

Adversarial Autoencoders

26 code implementations 18 Nov 2015 Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey

In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.

Data Visualization · Dimensionality Reduction +4

Winner-Take-All Autoencoders

3 code implementations NeurIPS 2015 Alireza Makhzani, Brendan Frey

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion.
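A minimal sketch of the lifetime-sparsity constraint for a fully connected layer (the paper's convolutional variant adds a spatial winner-take-all step as well): each hidden unit keeps only its largest activations across the minibatch, so every unit "wins" on only a few examples.

```python
# Hedged sketch of lifetime sparsity: per hidden unit, keep the top
# ceil(rate * batch) activations across the minibatch and zero the rest.
import torch

def lifetime_sparsity(h, rate=0.05):
    # h: (batch, hidden) activations
    k = max(1, int(rate * h.shape[0]))
    thresh = h.topk(k, dim=0).values[-1]   # k-th largest value per hidden unit
    return h * (h >= thresh).float()
```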

k-Sparse Autoencoders

2 code implementations 19 Dec 2013 Alireza Makhzani, Brendan Frey

Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks.
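The sparsity mechanism itself is a hard top-k on the hidden layer: keep the k largest activations per example and zero the rest, so gradients flow only through the winners. A minimal sketch:

```python
# Hedged sketch of the k-sparse activation (hard top-k per example).
import torch

def k_sparse(h, k):
    # h: (batch, hidden); zero all but the k largest entries in each row
    topk = h.topk(k, dim=1)
    mask = torch.zeros_like(h).scatter_(1, topk.indices, 1.0)
    return h * mask
```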

Classification · Denoising +1
