U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.
Surprisingly, we discover that side information is not necessary for algorithmic stability: using standard quantitative measures of identifiability, we find that deep generative models with latent clusterings are empirically identifiable to the same degree as models that rely on auxiliary labels.
We further demonstrate that adding Gaussian noise to the input of a VAE allows us to more finely control the frequency content and the Lipschitz constant of the VAE encoder networks.
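As a minimal sketch of the idea above, the noise injection amounts to perturbing each input with isotropic Gaussian noise before it reaches the encoder; the `encode` callable, the noise scale `sigma`, and the function name `noisy_encode` are illustrative placeholders, not names from the source.

```python
import numpy as np

def noisy_encode(encode, x, sigma=0.1, rng=None):
    """Encode x after adding isotropic Gaussian noise of scale sigma.

    `encode` stands in for an arbitrary VAE encoder network; convolving
    the input distribution with a Gaussian is one way to damp the
    high-frequency response of the learned encoder.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = sigma * rng.normal(size=np.shape(x))
    return encode(x + noise)

# Usage with a toy linear "encoder": larger sigma means a smoother,
# more heavily regularised view of the input.
encoder = lambda z: 2.0 * z
out = noisy_encode(encoder, np.zeros(4), sigma=0.0)
```

With `sigma=0.0` the function reduces to the plain encoder, so the noise scale acts as a single knob interpolating between the unregularised and smoothed regimes.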
We introduce an approach for training Variational Autoencoders (VAEs) that are certifiably robust to adversarial attack.
We make inroads into understanding the robustness of Variational Autoencoders (VAEs) to adversarial attacks and other input perturbations.
Successfully training Variational Autoencoders (VAEs) with a hierarchy of discrete latent variables remains an area of active research.
Separating high-dimensional data like images into independent latent factors, i.e. independent component analysis (ICA), remains an open research problem.
We show that the stochasticity in training ResNets for image classification on GPUs in TensorFlow is dominated by the non-determinism of GPU computation, rather than by the initialisation of the network's weights and biases or by the order in which minibatches are presented.
We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs.
It could easily be the case that some classes of data are found only in the unlabelled dataset -- perhaps the labelling process was biased -- so we do not have any labelled examples to train on for some classes.
We introduce 'semi-unsupervised learning', a problem regime related to transfer learning and zero-shot learning where, in the training data, some classes are sparsely labelled and others entirely unlabelled.