In this paper, we propose a new scalable deep generative model for images, called the Deep Gaussian Mixture Model, which is a straightforward but powerful generalization of Gaussian mixture models (GMMs) to multiple layers.
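To make the "multiple layers" idea concrete, here is a minimal sampling sketch: each layer holds a set of affine transformations, and a sample is produced by drawing a standard normal and applying one randomly chosen transformation per layer, so a path through the layers defines one Gaussian component. All sizes, component counts, and parameter initializations below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 2  # data dimensionality (illustrative)

# Each layer is a list of (A, b) affine components; component counts per
# layer are hypothetical.
layers = []
for k in (3, 2):
    layers.append([(rng.standard_normal((d, d)), rng.standard_normal(d))
                   for _ in range(k)])

def sample(n):
    """Draw n samples: z ~ N(0, I), then apply one random (A, b) per layer."""
    x = rng.standard_normal((n, d))
    for components in layers:
        idx = rng.integers(len(components), size=n)
        x = np.stack([components[i][0] @ x[j] + components[i][1]
                      for j, i in enumerate(idx)])
    return x

samples = sample(1000)
```

With one layer this reduces to an ordinary GMM (equal mixing weights, covariances A Aᵀ); stacking layers multiplies the number of effective components while the parameters are shared across paths.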
In this work, we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing.
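The reservoir computing framework mentioned here trains only a linear readout on top of a fixed, randomly connected dynamical system (in the paper, a memristor network; in the common echo state network variant sketched below, a random tanh recurrence). The reservoir size, spectral-radius scaling, and the delayed-copy task are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_res, T = 1, 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = rng.uniform(-1, 1, (T, n_in))                # random input signal
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])             # fixed reservoir update
    states[t] = x

# Train only the linear readout, here by ridge regression, on an
# illustrative task: reproduce the input delayed by two steps.
y = np.roll(u[:, 0], 2)
X, Y = states[10:], y[10:]                       # drop initial transient
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
```

The appeal for analog hardware is that the reservoir itself never needs precise programming: only the readout weights `W_out` are learned, so device variability in the fixed part can be tolerated.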
We also show that recent advances in deep learning translate very well to the music recommendation setting, with deep convolutional neural networks significantly outperforming the traditional approach.
The usability of Brain Computer Interfaces (BCI) based on the P300 speller is severely hindered by the need for long training times and many repetitions of the same stimulus.
Automatic speech recognition has gradually improved over the years, but the reliable recognition of unconstrained speech is still not within reach.