1 code implementation • 12 Oct 2023 • Philip Fradkin, Ruian Shi, Bo Wang, Brendan Frey, Leo J. Lee
In the face of rapidly accumulating genomic data, our understanding of the RNA regulatory code remains incomplete.
1 code implementation • NeurIPS 2017 • Alireza Makhzani, Brendan Frey
In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code.
Ranked #9 on Unsupervised Image Classification on MNIST
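The conditional PixelCNN path above generates pixels autoregressively; the core trick that makes an ordinary convolution causal over pixels is kernel masking. A minimal numpy sketch of that masking (this is illustrative, not the authors' code; `causal_mask` and its argument names are assumptions):

```python
import numpy as np

def causal_mask(k, mask_type="A"):
    """Build a PixelCNN-style mask for a k x k convolution kernel.

    Type "A" (used in the first layer) excludes the centre pixel so the
    prediction for pixel (i, j) never sees its own value; type "B"
    (used in later layers) includes the centre.
    """
    mask = np.ones((k, k))
    c = k // 2
    mask[c, c + (1 if mask_type == "B" else 0):] = 0.0  # centre row: zero centre/right
    mask[c + 1:, :] = 0.0                               # all rows below the centre
    return mask
```

Multiplying a kernel elementwise by this mask before the convolution guarantees each output depends only on pixels above and to the left in raster order.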
28 code implementations • 18 Nov 2015 • Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, Brendan Frey
In this paper, we propose the "adversarial autoencoder" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution.
Ranked #6 on Unsupervised Image Classification on MNIST
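The AAE's adversarial regularization trains a discriminator to separate prior samples from encoder codes, then trains the encoder to fool it, which drives the aggregated posterior toward the chosen prior. A minimal numpy sketch of the two losses involved (a sketch under the standard non-saturating GAN formulation; function and argument names are illustrative, not the authors' API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_prior_logits, d_posterior_logits):
    """GAN losses used to shape the AAE latent space.

    d_prior_logits:     discriminator logits on samples z ~ p(z) (the prior)
    d_posterior_logits: discriminator logits on codes drawn from q(z|x)

    The discriminator minimises d_loss (tell the two apart); the encoder,
    acting as the generator, minimises g_loss (fool the discriminator).
    """
    eps = 1e-9  # numerical guard for log
    d_loss = (-np.mean(np.log(sigmoid(d_prior_logits) + eps))
              - np.mean(np.log(1.0 - sigmoid(d_posterior_logits) + eps)))
    g_loss = -np.mean(np.log(sigmoid(d_posterior_logits) + eps))
    return d_loss, g_loss
```

When the discriminator confidently separates the two distributions, `d_loss` is near zero and `g_loss` is large, which is exactly the signal that pushes the encoder's aggregated posterior toward the prior.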
no code implementations • 17 Nov 2015 • Oren Z. Kraus, Lei Jimmy Ba, Brendan Frey
Convolutional neural networks (CNNs) have achieved state-of-the-art performance on both classification and segmentation tasks.
no code implementations • NeurIPS 2015 • Jimmy Ba, Roger Grosse, Ruslan Salakhutdinov, Brendan Frey
Despite their success, convolutional neural networks are computationally expensive because they must examine all image locations.
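Attention models avoid that cost by processing a sequence of small glimpses instead of the whole image. A minimal glimpse extractor in numpy (a sketch only; the function name, zero-padding choice, and square-patch assumption are mine, not the paper's):

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a size x size patch around `center` (row, col), zero-padding
    at the borders, so downstream layers only see this one location.
    Assumes `size` is odd."""
    h, w = image.shape
    r, c = center
    half = size // 2
    patch = np.zeros((size, size), dtype=image.dtype)
    r0, r1 = max(r - half, 0), min(r + half + 1, h)
    c0, c1 = max(c - half, 0), min(c + half + 1, w)
    # place the in-bounds crop at the right offset inside the padded patch
    patch[r0 - (r - half): r0 - (r - half) + (r1 - r0),
          c0 - (c - half): c0 - (c - half) + (c1 - c0)] = image[r0:r1, c0:c1]
    return patch
```

A recurrent controller then chooses the next `center`, so compute scales with the number of glimpses rather than with image size.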
1 code implementation • NeurIPS 2015 • Alireza Makhzani, Brendan Frey
In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion.
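The winner-take-all constraint can be applied as "lifetime sparsity": each hidden unit keeps only its strongest activations across a minibatch. A minimal numpy sketch of that operation (names and the exact top-k formulation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def lifetime_sparsity(acts, k):
    """Winner-take-all lifetime sparsity: for each hidden unit (column),
    keep only its k largest activations across the minibatch (rows) and
    zero the rest, so every unit trains on the inputs it wins."""
    out = np.zeros_like(acts)
    top = np.argsort(acts, axis=0)[-k:]      # top-k row indices per column
    cols = np.arange(acts.shape[1])
    out[top, cols] = acts[top, cols]
    return out
```

Because the competition is across examples rather than within one example, every hidden unit still gets gradient signal, which is what lets the hierarchy of sparse features emerge without supervision.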
no code implementations • 6 May 2014 • Siamak Ravanbakhsh, Russell Greiner, Brendan Frey
During learning, to produce a sample from the current model, we start from a training example and descend the energy landscape of the "perturbed model" for a fixed number of steps, or until a local optimum is reached.
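That perturb-and-descend procedure can be sketched on a toy quadratic energy with a random linear perturbation (the energy, step size, and noise scale here are illustrative choices, not the paper's model):

```python
import numpy as np

def perturb_and_descend(x0, W, b, rng, n_steps=50, lr=0.1, noise=1.0):
    """Produce a sample by descending a randomly perturbed energy landscape.

    Current model's energy:  E(x) = 0.5 * x^T W x - b^T x
    Perturbed model: add a random linear term g^T x, then take a fixed
    number of gradient-descent steps starting from a training point x0.
    """
    g = rng.normal(scale=noise, size=x0.shape)  # random perturbation of the energy
    x = x0.copy()
    for _ in range(n_steps):
        grad = W @ x - b + g                    # gradient of the perturbed energy
        x -= lr * grad
    return x
```

Different draws of the perturbation land in different local optima, which is how repeated descents yield a diversity of approximate samples from the model.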
3 code implementations • 19 Dec 2013 • Alireza Makhzani, Brendan Frey
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks.
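The k-sparse autoencoder enforces this kind of sparsity with a hard constraint: after the linear encoder, only the k largest hidden activations per example survive. A minimal numpy sketch of that nonlinearity (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def k_sparse(hidden, k):
    """k-sparse autoencoder nonlinearity: for each example (row), keep
    the k largest hidden activations and zero the rest, so every
    representation is exactly k-sparse."""
    out = np.zeros_like(hidden)
    top = np.argsort(hidden, axis=1)[:, -k:]      # top-k column indices per row
    rows = np.arange(hidden.shape[0])[:, None]
    out[rows, top] = hidden[rows, top]
    return out
```

The decoder then reconstructs the input from this hard-thresholded code; varying k trades off how distributed versus how sparse the learnt features are.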
no code implementations • NeurIPS 2013 • Jimmy Ba, Brendan Frey
For example, our model achieves 5.8% error on the NORB test set, which is better than state-of-the-art results obtained using convolutional architectures.