Search Results for author: Alexej Klushyn

Found 9 papers, 0 papers with code

Latent Matters: Learning Deep State-Space Models

no code implementations NeurIPS 2021 Alexej Klushyn, Richard Kurle, Maximilian Soelch, Botond Cseke, Patrick van der Smagt

Our results show that the constrained optimisation framework significantly improves system identification and prediction accuracy, demonstrated on established state-of-the-art DSSMs.

Variational Inference
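
The constrained-optimisation framework can be pictured as ELBO-style training with a Lagrange multiplier: minimise the KL term subject to a bound on the reconstruction error, adapting the multiplier by dual ascent (in the spirit of GECO, Rezende & Viola 2018). A minimal sketch; the toy encoder/decoder, tolerance kappa, and step sizes are illustrative assumptions, not the paper's exact formulation:

```python
import torch

# Sketch of constrained ELBO optimisation via dual ascent (GECO-style).
# The toy encoder/decoder, tolerance kappa, and step sizes are assumptions.
torch.manual_seed(0)
enc = torch.nn.Linear(4, 2)   # mean of q(z|x); unit variance for brevity
dec = torch.nn.Linear(2, 4)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

lam = torch.tensor(1.0)       # Lagrange multiplier for the constraint
kappa = 0.1                   # tolerated reconstruction error (assumed)
x = torch.randn(32, 4)

for _ in range(100):
    mu = enc(x)
    z = mu + torch.randn_like(mu)            # reparameterised sample, sigma = 1
    recon_err = ((dec(z) - x) ** 2).mean()   # expected reconstruction error
    kl = 0.5 * (mu ** 2).mean()              # KL(q || N(0, I)) up to constants
    # Primal step: minimise KL subject to recon_err <= kappa via the Lagrangian.
    loss = kl + lam * (recon_err - kappa)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual step: grow lambda while the constraint is violated, shrink otherwise.
    lam = torch.clamp(lam + 1e-2 * (recon_err.detach() - kappa), min=0.0)
```

The dual update drives the reconstruction error below kappa before trading it off against the KL term.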

Continual Learning with Bayesian Neural Networks for Non-Stationary Data

no code implementations ICLR 2020 Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann

We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data.

Continual Learning
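
A minimal sketch of the two ingredients named above: a diagonal-Gaussian variational posterior over a layer's weights (sampled by reparameterisation) and a complementary buffer of raw data. The layer shape, initialisation, and the naive capped buffer are illustrative assumptions; the paper's memory policy is more involved:

```python
import torch

# Sketch: diagonal-Gaussian variational posterior over one layer's weights,
# plus a complementary memory of raw data. Layer shape, initialisation, and
# the naive capped buffer are illustrative assumptions.
class BayesLinear(torch.nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = torch.nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        # Reparameterised weight sample W ~ N(mu, diag(sigma^2)).
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return x @ w.t()

memory = []                      # complementary store of raw (x, y) pairs
def remember(x, y, cap=100):
    if len(memory) < cap:        # naive cap; a real policy would score or
        memory.append((x, y))    # subsample points worth retaining

layer = BayesLinear(4, 2)
y = layer(torch.randn(8, 4))     # each call samples fresh weights
```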

Learning Flat Latent Manifolds with VAEs

no code implementations ICML 2020 Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt

The use of the Euclidean metric is prevalent, but it has the drawback of ignoring information about the similarity of data stored in the decoder, as captured by the framework of Riemannian geometry.

Computational Efficiency
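
The Riemannian view referred to above treats the decoder as inducing a pullback metric G(z) = J(z)^T J(z) on the latent space, so distances account for how the decoder stretches space. A minimal sketch under an assumed toy decoder:

```python
import torch
from torch.autograd.functional import jacobian

# Sketch: the pullback metric G(z) = J(z)^T J(z) induced by a decoder, with
# J the decoder Jacobian at z. The toy decoder is an illustrative assumption.
decoder = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 5))

def metric(z):
    J = jacobian(decoder, z)   # (5, 2): d decoder / d z
    return J.t() @ J           # (2, 2) metric tensor at z

z = torch.zeros(2)
dz = torch.tensor([0.1, 0.0])
riemannian = torch.sqrt(dz @ metric(z) @ dz)   # length element under G
euclidean = dz.norm()                          # what a plain VAE would use
```

Comparing the two length elements shows where the Euclidean metric misjudges similarity.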

Flat Manifold VAEs

no code implementations 25 Sep 2019 Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt

Latent-variable models represent observed data by mapping a prior distribution over some latent space to an observed space.
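
In equation form, the mapping described above is a push-forward of the prior through the decoder; marginalising the latent variable yields the model of the observed data:

```latex
p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, \mathrm{d}z ,
\qquad z \in \text{latent space}, \quad x \in \text{observed space}
```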

Increasing the Generalisation Capacity of Conditional VAEs

no code implementations 23 Aug 2019 Alexej Klushyn, Nutan Chen, Botond Cseke, Justin Bayer, Patrick van der Smagt

We address the problem of one-to-many mappings in supervised learning, where a single instance has many different solutions of possibly equal cost.

Structured Prediction
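
In a conditional VAE, the one-to-many structure is carried by the latent variable: fixing the input x and resampling z yields different, equally valid outputs. A minimal sketch with an assumed toy decoder and latent dimension:

```python
import torch

# Sketch: a conditional decoder turning one input x into many candidate
# solutions by resampling the latent variable. The toy network and latent
# dimension are illustrative assumptions.
dec = torch.nn.Linear(3 + 2, 4)        # decodes (x, z) -> y

x = torch.randn(3)                     # one input instance
solutions = []
for _ in range(5):
    z = torch.randn(2)                 # different z, different valid solution
    solutions.append(dec(torch.cat([x, z])))
```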

Learning Hierarchical Priors in VAEs

no code implementations NeurIPS 2019 Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, Patrick van der Smagt

We propose to learn a hierarchical prior in the context of variational autoencoders to avoid the over-regularisation resulting from a standard normal prior distribution.
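
A hierarchical prior replaces the standard normal with a marginal p(z) = E_{p(zeta)}[p(z|zeta)] whose conditional is learned. A minimal ancestral-sampling sketch; the conditional network and dimensions are illustrative assumptions:

```python
import torch

# Sketch: ancestral sampling from a two-level prior p(z) = E_{p(zeta)}[p(z|zeta)]
# in place of the standard normal. The conditional network and dimensions are
# illustrative assumptions; in the paper the prior is learned with the VAE.
prior_net = torch.nn.Linear(2, 2 * 3)       # zeta -> (mu, log_sigma) of p(z|zeta)

zeta = torch.randn(2)                        # zeta ~ N(0, I), top level
mu, log_sigma = prior_net(zeta).chunk(2)
z = mu + log_sigma.exp() * torch.randn(3)    # z ~ p(z|zeta)
```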

Fast Approximate Geodesics for Deep Generative Models

no code implementations 19 Dec 2018 Nutan Chen, Francesco Ferroni, Alexej Klushyn, Alexandros Paraschos, Justin Bayer, Patrick van der Smagt

The length of the geodesic between two data points along a Riemannian manifold, induced by a deep generative model, yields a principled measure of similarity.
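
The length in question is L(gamma) = integral of sqrt(gamma'(t)^T G(gamma(t)) gamma'(t)) dt, which can be approximated by mapping a discretised latent curve through the model and summing segment lengths in observation space. A sketch under an assumed toy generator:

```python
import torch

# Sketch: discretised length of a latent curve under the metric induced by a
# generative model g, approximated by summing segment lengths in observation
# space. The toy model and straight-line curve are illustrative assumptions;
# a geodesic minimises this length over curves with fixed endpoints.
g = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 5))

def curve_length(z0, z1, steps=64):
    t = torch.linspace(0, 1, steps).unsqueeze(1)
    zs = (1 - t) * z0 + t * z1                     # straight line in latent space
    xs = g(zs)                                     # mapped into observation space
    return (xs[1:] - xs[:-1]).norm(dim=1).sum()    # sum of segment lengths

length = curve_length(torch.zeros(2), torch.ones(2))
```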

Active Learning based on Data Uncertainty and Model Sensitivity

no code implementations 6 Aug 2018 Nutan Chen, Alexej Klushyn, Alexandros Paraschos, Djalel Benbouzid, Patrick van der Smagt

The method relies on the Jacobian of the likelihood to detect non-smooth transitions in the latent space, i.e., transitions that lead to abrupt changes in the movement of the robot.

Active Learning
Metric Learning
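
A sketch of the detection idea: scan an interpolation in latent space and record the magnitude of the model's Jacobian at each point; spikes indicate abrupt output changes, e.g. in the robot's movement. The toy decoder and grid resolution are illustrative assumptions:

```python
import torch
from torch.autograd.functional import jacobian

# Sketch: scanning a latent interpolation for non-smooth transitions via the
# magnitude of the model's Jacobian; spikes signal abrupt output changes,
# e.g. in the robot's movement. Decoder and grid size are assumptions.
dec = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 7))

z0, z1 = torch.zeros(2), torch.ones(2)
sensitivities = []
for t in torch.linspace(0, 1, 32):
    z = (1 - t) * z0 + t * z1
    J = jacobian(dec, z)                    # (7, 2) local sensitivity
    sensitivities.append(J.norm().item())   # spikes mark candidate queries
```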

Metrics for Deep Generative Models

no code implementations 3 Nov 2017 Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, Patrick van der Smagt

Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source, the latent space, to samples from a more complex distribution represented by a dataset.
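
The push-forward interface described here is the same for both model families: draw z from a simple source and map it through the network. A minimal sketch with an assumed toy generator standing in for a trained VAE decoder or GAN generator:

```python
import torch

# Sketch: the shared push-forward interface of neural samplers. Whether the
# network is a VAE decoder or a GAN generator, sampling is the same pattern;
# the toy generator below is an illustrative assumption.
generator = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 8))

z = torch.randn(100, 2)    # simple random source: the latent space
x = generator(z)           # pushed forward towards the data distribution
```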
