The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution.
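As an illustration, the reparameterization trick for a Gaussian node can be sketched as follows (a minimal NumPy example; the choice of a Gaussian and the specific parameters are assumptions for demonstration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization: instead of sampling z ~ N(mu, sigma^2) directly,
# draw eps from a FIXED distribution N(0, 1) and compute
# z = mu + sigma * eps, which is differentiable in mu and sigma.
mu, sigma = 1.5, 0.5            # parameters we want gradients for
eps = rng.standard_normal(10_000)  # fixed-distribution randomness
z = mu + sigma * eps            # differentiable function of (mu, sigma)

# The resulting samples still follow the target distribution:
print(z.mean(), z.std())
```

Because `z` is now a deterministic, differentiable function of `mu` and `sigma`, gradients of a loss with respect to those parameters can flow through the sampling step.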
By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow.
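A single affine autoregressive layer of this kind can be sketched in a few lines. The toy linear conditioner below is a hypothetical stand-in for the masked network (MADE) used in the actual model; it only serves to show how the inverse pass yields an exact density in one sweep:

```python
import numpy as np

def conditioner(x_prev):
    # Hypothetical toy conditioner: (mu_i, alpha_i) depend only on the
    # preceding dimensions x_{1..i-1}, preserving autoregressive structure.
    s = x_prev.sum() if len(x_prev) else 0.0
    return 0.3 * s, 0.1 * s   # mu, log-scale alpha

def inverse_and_logdet(x):
    """Map data x back to base noise u; accumulate log|det du/dx|."""
    u = np.empty_like(x)
    log_det = 0.0
    for i in range(len(x)):
        mu, alpha = conditioner(x[:i])
        u[i] = (x[i] - mu) * np.exp(-alpha)   # invert x_i = u_i*e^alpha + mu
        log_det += -alpha
    return u, log_det

def log_density(x):
    # Change of variables: log p(x) = log N(u; 0, I) + log|det du/dx|.
    u, log_det = inverse_and_logdet(x)
    return -0.5 * np.sum(u**2) - 0.5 * len(x) * np.log(2 * np.pi) + log_det

print(log_density(np.array([0.5, -1.0])))
```

Density estimation needs only this inverse pass, which is why stacking such layers is well suited to that task; sampling, by contrast, must generate dimensions sequentially.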
The result is a continuous-time invertible generative model with unbiased density estimation and one-pass sampling, while allowing unrestricted neural network architectures.
Furthermore, our experiments converting CIFAR-10 into a point cloud show that networks built on PointConv can match the performance of convolutional networks on 2D images of a similar structure.

We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation.
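The key idea behind making a residual block invertible can be sketched with a toy scalar example (the contractive function `g` below is an assumption for illustration, not the paper's spectral-normalized network): if the residual branch is a contraction, the block `x + g(x)` is bijective and its inverse can be recovered by fixed-point iteration.

```python
import numpy as np

W = 0.5  # |W| < 1 makes g a contraction (Lipschitz constant < 1)

def g(x):
    # Toy contractive residual branch.
    return W * np.tanh(x)

def forward(x):
    # Residual block: invertible because g is contractive.
    return x + g(x)

def inverse(y, iters=50):
    # Banach fixed-point iteration x_{k+1} = y - g(x_k) converges
    # geometrically to the unique preimage of y.
    x = y
    for _ in range(iters):
        x = y - g(x)
    return x

x = 1.3
print(abs(inverse(forward(x)) - x))  # reconstruction error is tiny
```

The same block thus supports a forward pass for classification and an (iterative) inverse pass for generation, which is what lets one architecture serve all three roles.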
In addition, we perform an ablation study to quantify the improvement contributed by each module of the proposed method.