Variational autoencoders learn unsupervised data representations, but these models frequently converge to minima that fail to preserve meaningful semantic information.
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference.
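The trade-off such frameworks study is commonly expressed as a rate–distortion decomposition of the variational objective; a standard form (notation assumed here, not taken from the abstract) writes the ELBO as the negative sum of a distortion term $D$ (reconstruction error) and a rate term $R$ (information the latent code carries about the input):

```latex
\mathrm{ELBO}
  = \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right]}_{-D}
  \; - \; \underbrace{\mathrm{KL}\!\left(q(z \mid x)\,\|\,p(z)\right)}_{R}
  = -(D + R).
```

Models with the same ELBO can sit at very different $(R, D)$ points, which is one way a model can reach a good objective value while its latents preserve little semantic information.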
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation.
Graph embedding methods represent nodes in a continuous vector space, preserving information from the graph (e.g., by sampling random walks).
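As a minimal sketch of the random-walk sampling these methods rely on (the toy graph and function names below are hypothetical, in the style of DeepWalk-like approaches):

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example data).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walk(graph, start, length, rng):
    """Sample a fixed-length uniform random walk starting at `start`."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
# Sample several walks per node; these node sequences play the role of
# "sentences" that a skip-gram model can consume to learn embeddings.
walks = [random_walk(graph, node, 5, rng) for node in graph for _ in range(10)]
```

Nodes that co-occur frequently within walks end up with similar embeddings, which is how walk-based methods preserve graph proximity.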

In this work, we present stochastic neural network architectures that handle such multimodality through stochasticity: future trajectories of objects, body joints or frames are represented as deep, non-linear transformations of random (as opposed to deterministic) variables.
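The core idea, futures as nonlinear transformations of random rather than deterministic variables, can be sketched as follows (all dimensions, weights, and names below are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: an observed past state and sampled futures.
state_dim, latent_dim, horizon, n_samples = 4, 2, 3, 5

# Fixed random weights stand in for a trained network.
W_in = rng.normal(size=(state_dim + latent_dim, 16))
W_out = rng.normal(size=(16, state_dim * horizon))

def sample_futures(past_state, n):
    """Each sampled latent z yields one plausible future trajectory."""
    futures = []
    for _ in range(n):
        z = rng.normal(size=latent_dim)  # stochastic, not deterministic
        h = np.tanh(np.concatenate([past_state, z]) @ W_in)
        futures.append((h @ W_out).reshape(horizon, state_dim))
    return np.stack(futures)

futures = sample_futures(rng.normal(size=state_dim), n_samples)
print(futures.shape)  # (5, 3, 4): n_samples x horizon x state_dim
```

Because z varies between calls while the past state is fixed, distinct samples give distinct trajectories, which is how the stochasticity captures multimodal futures.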
Recently, the introduction of residual connections in conjunction with a more traditional architecture yielded state-of-the-art performance in the 2015 ILSVRC challenge; the resulting network performed similarly to the latest-generation Inception-v3 network.
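The residual connection itself is simple: the block computes y = x + F(x), adding a learned branch F to an identity shortcut. A minimal NumPy sketch (weights and shapes are illustrative, not the network described above):

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = x + F(x): the learned branch F is added to an identity shortcut."""
    h = np.maximum(0.0, x @ W1)  # ReLU nonlinearity in the residual branch
    return x + h @ W2            # skip connection

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
W1, W2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
y = residual_block(x, W1, W2)
```

Note that if the branch weights are zero, the block reduces exactly to the identity, which is why residual connections ease the optimization of very deep networks.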