1 code implementation • 10 Dec 2016 • Matti Lankinen, Hannes Heikinheimo, Pyry Takala, Tapani Raiko, Juha Karhunen
Inspired by recent research, we explore ways to model the morphologically rich Finnish language at the level of characters while maintaining the performance of word-level models.
no code implementations • 7 Jun 2016 • Huiling Wang, Tapani Raiko, Lasse Lensu, Tinghuai Wang, Juha Karhunen
We propose a semi-supervised approach to adapting a CNN image recognition model trained on labeled image data to the target domain, exploiting both semantic evidence learned from the CNN and the intrinsic structures of video data.
5 code implementations • NeurIPS 2016 • Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, Ole Winther
Variational Autoencoders are powerful models for unsupervised learning.
no code implementations • NeurIPS 2015 • Mathias Berglund, Tapani Raiko, Mikko Honkala, Leo Kärkkäinen, Akos Vetek, Juha T. Karhunen
Although unidirectional RNNs have recently been trained successfully to model such time series, inference in the negative time direction is non-trivial.
1 code implementation • 20 Nov 2015 • Jelena Luketina, Mathias Berglund, Klaus Greff, Tapani Raiko
Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance.
10 code implementations • NeurIPS 2015 • Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, Tapani Raiko
We combine supervised learning with unsupervised learning in deep neural networks.
1 code implementation • 18 May 2015 • Eric Malmi, Pyry Takala, Hannu Toivonen, Tapani Raiko, Aristides Gionis
First, we develop a prediction model to identify the next line of existing lyrics from a set of candidate next lines.
1 code implementation • 30 Apr 2015 • Antti Rasmus, Harri Valpola, Tapani Raiko
We show how a deep denoising autoencoder with lateral connections can be used as an auxiliary unsupervised learning task to support supervised learning.
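The role of a lateral connection can be sketched in a minimal forward pass: the decoder combines a top-down signal from the latent code with a lateral copy of the corrupted input, so higher layers can drop details that the lateral path already carries. All weights, dimensions, and the fixed gate below are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda a: np.maximum(a, 0.0)

D, H = 16, 8
We = rng.normal(scale=0.1, size=(H, D))   # encoder weights (illustrative)
Wd = rng.normal(scale=0.1, size=(D, H))   # decoder weights (illustrative)

x = rng.normal(size=D)
x_noisy = x + rng.normal(scale=0.3, size=D)   # denoising-AE corruption

z = relu(We @ x_noisy)        # encoder: latent code from the noisy input
u = Wd @ z                    # top-down decoder signal

# Lateral connection: the decoder also sees the noisy input directly and
# mixes it with the top-down signal; here a fixed gate stands in for the
# learned combinator function of the actual model.
g_w = 0.5
x_hat = g_w * x_noisy + (1 - g_w) * u     # combined reconstruction

print(np.mean((x_hat - x) ** 2))          # denoising reconstruction error
```

The point of the sketch is only the wiring: reconstruction depends on both the latent code and the lateral shortcut, which frees the top layers to learn invariant features.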
1 code implementation • 22 Dec 2014 • Antti Rasmus, Tapani Raiko, Harri Valpola
Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations.
1 code implementation • NeurIPS 2014 • Tapani Raiko, Li Yao, Kyunghyun Cho, Yoshua Bengio
Training of the neural autoregressive density estimator (NADE) can be viewed as doing one step of probabilistic inference on missing values in data.
Ranked #8 on Image Generation on Binarized MNIST
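NADE's view of inference on missing values rests on its factorization: each conditional p(x_d | x_<d) is a logistic output computed from a hidden state that accumulates the dimensions seen so far. A minimal sketch, with all parameter names and sizes invented for illustration:

```python
import numpy as np

def nade_log_density(x, W, V, b, c):
    """Log p(x) under a minimal NADE sketch for binary x.

    Each conditional p(x_d = 1 | x_<d) is a logistic regression on a
    hidden pre-activation that accumulates already-observed dimensions.
    """
    D = len(x)
    a = c.copy()                                   # running pre-activation
    log_p = 0.0
    for d in range(D):
        h = 1.0 / (1.0 + np.exp(-a))               # hidden activations
        p = 1.0 / (1.0 + np.exp(-(V[d] @ h + b[d])))   # p(x_d=1 | x_<d)
        log_p += x[d] * np.log(p) + (1 - x[d]) * np.log(1 - p)
        a += W[:, d] * x[d]                        # fold x_d into the state
    return log_p

rng = np.random.default_rng(0)
D, H = 8, 4
W = rng.normal(size=(H, D)); V = rng.normal(size=(D, H))
b = rng.normal(size=D); c = rng.normal(size=H)
x = rng.integers(0, 2, size=D)
print(nade_log_density(x, W, V, b, c))
```

Because every conditional is normalized, the density sums to one over all binary vectors, which is the property that makes NADE a tractable density estimator.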
no code implementations • 2 Oct 2014 • Jaakko Luttinen, Tapani Raiko, Alexander Ilin
The time dependency is obtained by forming the state dynamics matrix as a time-varying linear combination of a set of matrices.
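The construction above can be sketched directly: the dynamics matrix at each step is a weighted combination of a fixed set of basis matrices, with weights that vary over time. Names and sizes below are illustrative, not the paper's notation.

```python
import numpy as np

def simulate(x0, B, w):
    """Simulate x_{t+1} = A_t x_t with A_t = sum_k w[t, k] * B[k].

    B : (K, D, D) basis matrices; w : (T, K) time-varying mixing weights.
    """
    xs = [x0]
    for t in range(w.shape[0]):
        A_t = np.tensordot(w[t], B, axes=1)   # (D, D) combination at time t
        xs.append(A_t @ xs[-1])
    return np.stack(xs)

rng = np.random.default_rng(1)
K, D, T = 3, 4, 10
B = rng.normal(scale=0.3, size=(K, D, D))     # basis dynamics matrices
w = rng.dirichlet(np.ones(K), size=T)         # convex weights per step
traj = simulate(rng.normal(size=D), B, w)
print(traj.shape)   # (11, 4): initial state plus T steps
```

In the actual model the weights and basis matrices are learned; the sketch only shows how the time-varying linear combination induces time-dependent dynamics.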
no code implementations • 11 Jun 2014 • Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
Our experiments confirm that training stochastic networks is difficult and show that the two proposed estimators perform favorably among the five estimators studied.
no code implementations • 20 Dec 2013 • Mathias Berglund, Tapani Raiko
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines.
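The difference between the two methods is where the negative-phase Gibbs chain starts: CD restarts it at the data each update, while PCD keeps one chain running across updates. A minimal bias-free Bernoulli-RBM sketch (dimensions, learning rate, and chain lengths are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def gibbs_step(v, W):
    """One block-Gibbs step v -> h -> v in a bias-free Bernoulli RBM."""
    h = (rng.random(W.shape[1]) < sigmoid(v @ W)).astype(float)
    v = (rng.random(W.shape[0]) < sigmoid(W @ h)).astype(float)
    return v

def grad(v_data, v_model, W):
    """Positive phase minus negative phase of the log-likelihood gradient."""
    pos = np.outer(v_data, sigmoid(v_data @ W))
    neg = np.outer(v_model, sigmoid(v_model @ W))
    return pos - neg

nv, nh = 6, 4
W = rng.normal(scale=0.1, size=(nv, nh))
v_data = rng.integers(0, 2, size=nv).astype(float)

# CD-1: the negative chain is restarted at the data every update.
v_cd = gibbs_step(v_data, W)
g_cd = grad(v_data, v_cd, W)

# PCD: the negative chain persists between parameter updates.
v_persist = rng.integers(0, 2, size=nv).astype(float)
for _ in range(5):
    v_persist = gibbs_step(v_persist, W)          # chain keeps running
    W += 0.01 * grad(v_data, v_persist, W)        # illustrative update

print(g_cd.shape)   # (6, 4)
```

PCD's persistent chain can mix toward the model distribution even with few Gibbs steps per update, which is the usual argument for it over CD when the learning rate is small.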