no code implementations • NeurIPS 2021 • Alexej Klushyn, Richard Kurle, Maximilian Soelch, Botond Cseke, Patrick van der Smagt
Our results show that the constrained optimisation framework significantly improves system identification and prediction accuracy, as demonstrated on established state-of-the-art deep state-space models (DSSMs).
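The snippet does not detail the constrained scheme; as a rough, hypothetical sketch, constrained optimisation of a latent-variable model can be written as minimising the KL term subject to a reconstruction constraint, with a Lagrange multiplier updated by gradient ascent (all names below, such as `model` and `kappa`, are placeholders, not the paper's code):

```python
import torch

# Hypothetical sketch: minimise the KL term subject to a reconstruction
# constraint recon_err <= kappa, with a Lagrange multiplier that is
# updated by gradient ascent on the constraint violation.
def constrained_step(model, optimiser, batch, lagrange_lambda, kappa=0.1, eta=1e-2):
    recon_err, kl = model(batch)              # placeholder interface: returns both terms
    constraint = recon_err - kappa            # positive when the constraint is violated
    loss = kl + lagrange_lambda * constraint  # Lagrangian of the constrained problem
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    # ascend on the multiplier and keep it non-negative
    return torch.clamp(lagrange_lambda + eta * constraint.detach(), min=0.0)
```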
no code implementations • ICLR 2020 • Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann
We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data.
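As a minimal sketch of the diagonal-Gaussian part (the complementary raw-data memory is not modelled here, and all sizes are illustrative), the weight posterior can be sampled with the reparameterisation trick:

```python
import torch

# Diagonal Gaussian posterior over a weight vector, sampled via the
# reparameterisation trick; sizes and parametrisation are illustrative only.
mu = torch.zeros(64, requires_grad=True)           # posterior means
rho = torch.full((64,), -3.0, requires_grad=True)  # std via softplus for positivity

def sample_weights():
    sigma = torch.nn.functional.softplus(rho)  # positive standard deviations
    eps = torch.randn_like(mu)                 # noise from N(0, I)
    return mu + sigma * eps                    # w ~ N(mu, diag(sigma^2))
```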
no code implementations • ICML 2020 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
The prevalent choice is the Euclidean metric, which has the drawback of ignoring the information about data similarity that is stored in the decoder, as captured by the framework of Riemannian geometry.
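A standard way to make this concrete (a sketch with a toy decoder, not the paper's model) is the pull-back metric G(z) = J(z)ᵀJ(z), where J is the decoder Jacobian:

```python
import torch
from torch.autograd.functional import jacobian

# Pull-back Riemannian metric induced by a decoder g: latent -> observed,
# G(z) = J(z)^T J(z). The decoder here is an untrained toy stand-in.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 10)
)

def metric_tensor(z):
    J = jacobian(decoder, z)  # shape (10, 2): decoder Jacobian at z
    return J.T @ J            # shape (2, 2): metric tensor G(z)

G = metric_tensor(torch.zeros(2))
```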
no code implementations • 25 Sep 2019 • Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space.
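In code, that mapping is just a prior sample pushed through a decoder; a minimal sketch with an untrained toy network:

```python
import torch

# Latent-variable model as a mapping from a prior over latent space to
# observed space; the decoder is an untrained toy stand-in.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 784)
)

z = torch.randn(16, 2)  # 16 draws from the standard normal prior p(z)
x = decoder(z)          # their images in observed space
```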
no code implementations • 23 Aug 2019 • Alexej Klushyn, Nutan Chen, Botond Cseke, Justin Bayer, Patrick van der Smagt
We address the problem of one-to-many mappings in supervised learning, where a single instance has many different solutions of possibly equal cost.
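One common way to model this (a hypothetical sketch, not necessarily the paper's architecture) is a conditional latent-variable model, where different latent samples select different solutions for the same input:

```python
import torch

# One-to-many mapping via a conditional latent-variable model: for a fixed
# input x, each latent sample z yields a different candidate prediction y.
# The network and dimensions are illustrative placeholders.
net = torch.nn.Sequential(
    torch.nn.Linear(3 + 2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)

x = torch.randn(3)              # a single input instance
for _ in range(4):
    z = torch.randn(2)          # latent sample picks one of many solutions
    y = net(torch.cat([x, z]))  # one candidate prediction for the same x
```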
no code implementations • NeurIPS 2019 • Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, Patrick van der Smagt
We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution.
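A minimal sketch of a two-level hierarchical prior p(z) = E_{p(ζ)}[p(z | ζ)], with an illustrative conditioning network (sizes and architecture are made up, not the paper's):

```python
import torch
from torch.distributions import Normal

# Two-level hierarchical prior: zeta ~ N(0, I), z ~ p(z | zeta), where the
# conditional's parameters come from a small network (illustrative only).
prior_net = torch.nn.Linear(4, 2 * 8)           # maps zeta to mean and log-variance

zeta = torch.randn(4)                           # top-level latent
mean, logvar = prior_net(zeta).chunk(2)         # parameters of p(z | zeta)
z = Normal(mean, logvar.exp().sqrt()).sample()  # draw z from the hierarchical prior
```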
no code implementations • 19 Dec 2018 • Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, Patrick van der Smagt
The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity.
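The length of a latent curve under this induced metric can be approximated by discretising the curve and summing segment lengths in observation space; the geodesic is the curve that minimises this quantity. A sketch with a toy decoder and a straight (non-optimised) latent line:

```python
import torch

# Approximate length of a latent curve under the decoder-induced metric:
# push a discretised curve through the decoder and sum segment lengths.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 10)
)

z0, z1 = torch.zeros(2), torch.ones(2)
t = torch.linspace(0, 1, 50).unsqueeze(1)    # 50 points along the curve
curve = (1 - t) * z0 + t * z1                # straight line in latent space
x = decoder(curve)                           # image of the curve in observation space
length = (x[1:] - x[:-1]).norm(dim=1).sum()  # sum of segment lengths
```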
no code implementations • 6 Aug 2018 • Nutan Chen, Alexej Klushyn, Alexandros Paraschos, Djalel Benbouzid, Patrick van der Smagt
It relies on the Jacobian of the likelihood to detect non-smooth transitions in the latent space, i.e., transitions that lead to abrupt changes in the movement of the robot.
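A rough sketch of that idea (toy decoder, arbitrary threshold, not the paper's implementation): evaluate the decoder Jacobian along a latent trajectory and flag points where its norm spikes:

```python
import torch
from torch.autograd.functional import jacobian

# Detect non-smooth latent transitions via the decoder Jacobian: a large
# Jacobian norm means a small latent step causes an abrupt output change.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 7)
)

path = torch.randn(20, 2)  # a candidate latent trajectory
norms = torch.stack([jacobian(decoder, z).norm() for z in path])
smooth = norms < 5.0       # flag abrupt transitions above an arbitrary threshold
```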
no code implementations • 3 Nov 2017 • Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt
Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source (the latent space) to samples from a more complex distribution represented by a dataset.
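As a minimal sketch of such a neural sampler (untrained toy generator, illustrative sizes):

```python
import torch

# Neural sampler: transform draws from a simple random source (the latent
# space) into samples from a more complex distribution. Untrained stand-in.
generator = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)

noise = torch.randn(1000, 2)  # simple source: standard normal latent draws
samples = generator(noise)    # pushed forward to the model distribution
```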