As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the `complexity' of the fractal structure that underlies its invariant measure.
We further demonstrate that adding Gaussian noise to the input of a VAE allows us to more finely control the frequency content and the Lipschitz constant of the VAE encoder networks.
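The mechanism behind this claim can be illustrated numerically: injecting Gaussian noise into a function's input is equivalent (in expectation) to convolving it with a Gaussian kernel, which damps high-frequency content and shrinks the empirical Lipschitz constant. Below is a minimal sketch of this effect using a high-frequency scalar function as a hypothetical stand-in for an encoder; the function, noise level, and sample counts are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # High-frequency scalar function standing in for an encoder (hypothetical).
    return np.sin(10.0 * x)

def smoothed(x, sigma=0.3, n=20000):
    # Monte Carlo estimate of E[f(x + eps)], eps ~ N(0, sigma^2):
    # Gaussian input noise convolves f with a Gaussian kernel.
    eps = rng.normal(0.0, sigma, size=n)  # shared noise draws across inputs
    return np.array([f(xi + eps).mean() for xi in np.atleast_1d(x)])

xs = np.linspace(-1.0, 1.0, 400)

def max_slope(ys):
    # Empirical Lipschitz estimate via finite differences.
    return np.abs(np.diff(ys) / np.diff(xs)).max()

lip_raw = max_slope(f(xs))          # close to 10 for sin(10x)
lip_smooth = max_slope(smoothed(xs))
print(lip_smooth < lip_raw)         # noise injection shrinks the slope bound
```

Raising `sigma` widens the implicit Gaussian kernel, suppressing high frequencies more aggressively and lowering the Lipschitz estimate further, which is the control knob the sentence above refers to.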
We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack.
In this paper we focus on the so-called `implicit effect' of GNIs, which is the effect of the injected noise on the dynamics of SGD.
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
Separating high-dimensional data like images into independent latent factors, i.e., independent component analysis (ICA), remains an open research problem.
We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs.