no code implementations • NeurIPS 2019 • Rinu Boney, Norman Di Palo, Mathias Berglund, Alexander Ilin, Juho Kannala, Antti Rasmus, Harri Valpola
Trajectory optimization using a learned model of the environment is one of the core elements of model-based reinforcement learning.
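The abstract does not specify the planner, but trajectory optimization with a learned model can be sketched with a simple random-shooting planner; the dynamics model, cost, and action bounds below are all illustrative stand-ins, not the authors' method:

```python
import numpy as np

def plan_random_shooting(model, state, horizon=5, n_candidates=100, rng=None):
    """Sample candidate action sequences, roll each out through the learned
    model, and return the first action of the lowest-cost sequence."""
    rng = np.random.default_rng(rng)
    best_cost, best_action = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)  # 1-D actions (assumption)
        s, cost = state, 0.0
        for a in actions:
            s = model(s, a)       # one-step prediction by the learned model
            cost += s ** 2        # example cost: drive the state toward 0
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

# Stand-in "learned" model: known linear dynamics, purely for illustration.
model = lambda s, a: 0.9 * s + 0.1 * a
action = plan_random_shooting(model, state=1.0, rng=0)
```

In model-predictive control, only this first action would be executed before re-planning from the next observed state.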
no code implementations • 10 Dec 2018 • Norman Di Palo, Harri Valpola
Model-based predictions of the future trajectories of a dynamical system often suffer from inaccuracies, forcing model-based control algorithms to re-plan frequently, which makes them computationally expensive, suboptimal, and unreliable.
no code implementations • 6 Sep 2017 • Heikki Arponen, Matti Herranen, Harri Valpola
We prove an exact relationship between the optimal denoising function and the data distribution in the case of additive Gaussian noise, showing that denoising implicitly models the structure of the data, which can then be exploited in the unsupervised learning of representations.
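For additive Gaussian noise, the classical form of this relationship (often attributed to Miyasawa or Tweedie) states that the MSE-optimal denoiser equals the noisy input plus a scaled score of the noisy-data density; the abstract does not spell out the formula, so the notation below is a standard reconstruction:

```latex
% x_noisy = x + n, with n ~ N(0, sigma^2 I) and p the density of the noisy data:
g^{*}(\tilde{x}) \;=\; \mathbb{E}\!\left[x \mid \tilde{x}\right]
               \;=\; \tilde{x} + \sigma^{2}\,\nabla_{\tilde{x}} \log p(\tilde{x})
```

The optimal denoiser thus carries the same information as the gradient of the log-density, which is why denoising can serve unsupervised representation learning.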
no code implementations • NeurIPS 2017 • Isabeau Prémont-Schwarz, Alexander Ilin, Tele Hotloo Hao, Antti Rasmus, Rinu Boney, Harri Valpola
We propose a recurrent extension of the Ladder networks whose structure is motivated by the inference required in hierarchical latent variable models.
8 code implementations • NeurIPS 2017 • Antti Tarvainen, Harri Valpola
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
Tasks: Semi-Supervised Image Classification, Semi-Supervised RGBD Semantic Segmentation, +1
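The core Mean Teacher mechanism — a student trained with a consistency cost toward a teacher whose weights are an exponential moving average (EMA) of the student's — can be sketched on a linear toy model; the model, losses, and hyperparameters here are illustrative, not the paper's setup:

```python
import numpy as np

def mean_teacher_step(student_w, teacher_w, x, y_labeled,
                      alpha=0.99, lr=0.1, lam=1.0):
    """One update of a linear-model Mean Teacher sketch (hypothetical setup):
    squared-error supervised loss plus a consistency penalty pulling the
    student's predictions toward the teacher's, then an EMA teacher update."""
    pred_s = x @ student_w
    # Gradient of 0.5*|x@ws - y|^2 + 0.5*lam*|x@ws - x@wt|^2 w.r.t. ws
    grad = x.T @ (pred_s - y_labeled) + lam * x.T @ (pred_s - x @ teacher_w)
    student_w = student_w - lr * grad / len(x)
    teacher_w = alpha * teacher_w + (1 - alpha) * student_w  # EMA update
    return student_w, teacher_w

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = x @ true_w
ws, wt = np.zeros(3), np.zeros(3)
for _ in range(500):
    ws, wt = mean_teacher_step(ws, wt, x, y)
```

Because the teacher averages the student over many steps, its predictions are smoother, and the consistency cost regularizes the student — the effect the paper exploits for semi-supervised learning.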
2 code implementations • NeurIPS 2016 • Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao, Jürgen Schmidhuber, Harri Valpola
We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features.
10 code implementations • NeurIPS 2015 • Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko
We combine supervised learning with unsupervised learning in deep neural networks.
1 code implementation • 30 Apr 2015 • Antti Rasmus, Harri Valpola, Tapani Raiko
We show how a deep denoising autoencoder with lateral connections can be used as an auxiliary unsupervised learning task to support supervised learning.
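The idea of a denoising autoencoder with lateral connections as an auxiliary task can be sketched in a one-hidden-layer forward pass; the shapes, the simple averaging combinator, and the weight tying below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def ladder_forward(x, y, W_enc, W_out, W_dec, noise_std=0.1, rng=None):
    """A noisy encoder feeds both a supervised head and a decoder; a lateral
    (skip) connection passes the noisy hidden activation directly to the
    decoder, which also receives a top-down signal.  Returns the combined
    supervised + denoising + reconstruction cost (all squared-error here)."""
    rng = np.random.default_rng(rng)
    h_clean = np.tanh(x @ W_enc)                     # clean encoder path
    h_noisy = np.tanh((x + noise_std * rng.normal(size=x.shape)) @ W_enc)
    logits = h_noisy @ W_out                         # supervised head
    topdown = logits @ W_out.T                       # top-down decoder signal
    h_hat = 0.5 * topdown + 0.5 * h_noisy            # simple combinator (assumption)
    x_hat = h_hat @ W_dec
    sup_loss = np.mean((logits - y) ** 2)            # supervised cost
    denoise_loss = np.mean((h_hat - h_clean) ** 2)   # layer-wise denoising cost
    recon_loss = np.mean((x_hat - x) ** 2)
    return sup_loss + denoise_loss + recon_loss

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 2))
W_enc = 0.1 * rng.normal(size=(4, 3))
W_out = 0.1 * rng.normal(size=(3, 2))
W_dec = 0.1 * rng.normal(size=(3, 4))
loss = ladder_forward(x, y, W_enc, W_out, W_dec, rng=1)
```

The lateral connection lets the decoder recover input details directly, so the higher layer is free to encode more abstract, invariant features — the division of labor the abstract describes.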
1 code implementation • 22 Dec 2014 • Antti Rasmus, Tapani Raiko, Harri Valpola
Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations.
1 code implementation • 28 Nov 2014 • Harri Valpola
The speedup offered by cost terms from higher levels of the hierarchy and the ability to learn invariant features are demonstrated in experiments.