Search Results for author: Nal Kalchbrenner

Found 29 papers, 18 papers with code

Deep Learning for Day Forecasts from Sparse Observations

no code implementations 6 Jun 2023 Marcin Andrychowicz, Lasse Espeholt, Di Li, Samier Merchant, Alexander Merose, Fred Zyda, Shreya Agrawal, Nal Kalchbrenner

Among these models' unique advantages are the ability to make a prediction in less than a second once the data is available, to do so at very high temporal and spatial resolution, and to learn directly from atmospheric observations.

Deep Learning

Do better ImageNet classifiers assess perceptual similarity better?

no code implementations 9 Mar 2022 Manoj Kumar, Neil Houlsby, Nal Kalchbrenner, Ekin D. Cubuk

Perceptual distances between images, as measured in the space of pre-trained deep features, have outperformed prior low-level, pixel-based metrics on assessing perceptual similarity.
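
The measurement itself is simple to sketch: a perceptual distance compares images in the feature space of a pretrained classifier rather than in pixel space. Below is a minimal sketch assuming torchvision's pretrained VGG16 as the backbone; the paper compares many ImageNet classifiers, and the layer choice and normalization here are illustrative assumptions, not the paper's exact setup.

import torch
import torchvision.models as models

# One illustrative backbone; the paper evaluates many classifiers.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def deep_features(x, upto=16):
    # Run x (N, 3, H, W) through the first `upto` layers of the feature stack.
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i == upto:
            break
    return x

def perceptual_distance(x, y):
    with torch.no_grad():
        fx, fy = deep_features(x), deep_features(y)
    # Unit-normalize along channels, then average the squared differences.
    fx = fx / (fx.norm(dim=1, keepdim=True) + 1e-8)
    fy = fy / (fy.norm(dim=1, keepdim=True) + 1e-8)
    return ((fx - fy) ** 2).sum(dim=1).mean(dim=(1, 2))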

Skillful Twelve Hour Precipitation Forecasts using Large Context Neural Networks

2 code implementations 14 Nov 2021 Lasse Espeholt, Shreya Agrawal, Casper Sønderby, Manoj Kumar, Jonathan Heek, Carla Bromberg, Cenk Gazen, Jason Hickey, Aaron Bell, Nal Kalchbrenner

An emerging class of weather models based on neural networks represents a paradigm shift in weather forecasting: the models learn the required transformations from data instead of relying on hand-coded physics and are computationally efficient.

Energy Management · Management +2

Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent

no code implementations 29 Sep 2021 Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi

It is shown that, given (a) access to samples from intermediate distributions and (b) annotations of how much each sample has shifted from the source distribution, self-training can be successfully applied to gradually shifted samples to adapt the model toward the target distribution.

Domain Adaptation

Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent

1 code implementation 10 Jun 2021 Samira Abnar, Rianne van den Berg, Golnaz Ghiasi, Mostafa Dehghani, Nal Kalchbrenner, Hanie Sedghi

It has been shown that, given (a) access to samples from intermediate distributions and (b) annotations of how much each sample has shifted from the source distribution, self-training can be successfully applied to gradually shifted samples to adapt the model toward the target distribution.

Domain Adaptation
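
The mechanism behind both versions of this work is easy to sketch: self-train across a sequence of gradually shifting domains, pseudo-labeling each one with the current model. The loop below is a minimal sketch assuming a generic fit/predict model interface; the function and variable names are illustrative, not the paper's API.

def gradual_self_train(model, source_x, source_y, intermediate_domains):
    # Train on the labeled source domain first.
    model.fit(source_x, source_y)
    # Domains are assumed ordered by their amount of shift from the source.
    for domain_x in intermediate_domains:
        pseudo_y = model.predict(domain_x)  # pseudo-label the next domain
        model.fit(domain_x, pseudo_y)       # adapt to it via self-training
    return model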

Colorization Transformer

2 code implementations ICLR 2021 Manoj Kumar, Dirk Weissenborn, Nal Kalchbrenner

We present the Colorization Transformer, a novel approach for diverse high-fidelity image colorization based on self-attention.

Colorization · Image Colorization

A Spectral Energy Distance for Parallel Speech Synthesis

2 code implementations NeurIPS 2020 Alexey A. Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.

Scoring Rule · Speech Synthesis
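
The training signal this paper proposes is a generalized energy distance computed on spectrogram features, which needs two independent generator samples per input to form its repulsive term. Below is a minimal sketch assuming log-magnitude STFT features; the FFT size and feature map are illustrative assumptions, not the paper's exact configuration.

import torch

def spec_features(wave, n_fft=512):
    # Log-magnitude STFT of a batch of waveforms, flattened per example.
    s = torch.stft(wave, n_fft=n_fft, return_complex=True)
    return torch.log(s.abs() + 1e-5).flatten(1)

def spectral_energy_distance(real, gen_a, gen_b):
    # gen_a and gen_b are two independent generator samples for the same
    # conditioning input; the repulsive term makes the score proper.
    fr, fa, fb = spec_features(real), spec_features(gen_a), spec_features(gen_b)
    attract = (fa - fr).norm(dim=1) + (fb - fr).norm(dim=1)
    repulse = (fa - fb).norm(dim=1)
    return (attract - repulse).mean()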

Deep Learning Based Text Classification: A Comprehensive Review

2 code implementations 6 Apr 2020 Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, Jianfeng Gao

Deep learning-based models have surpassed classical machine learning based approaches in various text classification tasks, including sentiment analysis, news categorization, question answering, and natural language inference.

BIG-bench Machine Learning · Deep Learning +6

Axial Attention in Multidimensional Transformers

2 code implementations 20 Dec 2019 Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, Tim Salimans

We propose Axial Transformers, a self-attention-based autoregressive model for images and other data organized as high dimensional tensors.

Ranked #29 on Image Generation on ImageNet 64x64 (Bits per dim metric)

Image Generation
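
The idea of axial attention is to apply full self-attention along one axis of the tensor at a time; on an H x W grid this costs O(HW(H+W)) instead of the O((HW)^2) of attention over the flattened image. A minimal single-head sketch follows, with learned projections and the causal masking needed for autoregressive use omitted for brevity.

import torch

def axial_attention(x, axis):
    # x: (B, H, W, C); fold every other spatial axis into the batch dimension.
    if axis == 1:                          # attend along H instead of W
        x = x.transpose(1, 2)              # (B, W, H, C)
    B, K, L, C = x.shape
    q = k = v = x.reshape(B * K, L, C)     # shared q/k/v for brevity
    att = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)
    out = (att @ v).reshape(B, K, L, C)
    return out.transpose(1, 2) if axis == 1 else out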

Bayesian Inference for Large Scale Image Classification

no code implementations 9 Aug 2019 Jonathan Heek, Nal Kalchbrenner

We show that ATMC is intrinsically robust to overfitting on the training data and that ATMC provides a better calibrated measure of uncertainty compared to the optimization baseline.

Bayesian Inference · Classification +4

Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling

no code implementations ICLR 2019 Jacob Menick, Nal Kalchbrenner

To address the latter challenge, we propose to use Multidimensional Upscaling to grow an image in both size and depth via intermediate stages utilising distinct SPNs.

Decoder · Image Generation +1

Tensor2Tensor for Neural Machine Translation

15 code implementations WS 2018 Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, Jakob Uszkoreit

Tensor2Tensor is a library for deep learning models that is well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model.

Deep Learning · Machine Translation +1

Parallel Multiscale Autoregressive Density Estimation

no code implementations ICML 2017 Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas

Our new PixelCNN model achieves competitive density estimation and an orders-of-magnitude speedup (O(log N) sampling instead of O(N)), enabling the practical generation of 512x512 images.

Conditional Image Generation · Density Estimation +2
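
The speedup comes from generating groups of pixels in parallel across scales rather than one pixel at a time: each upscaling pass roughly doubles the resolution, so the number of network passes grows with the logarithm of the image size. A back-of-the-envelope check (the model's actual pixel grouping is more elaborate):

# One parallel pass per resolution doubling, versus one step per pixel.
size, passes = 4, 0
while size < 512:
    size *= 2
    passes += 1
print(passes)     # 7 doublings from 4x4 to 512x512
print(512 * 512)  # 262144 sequential steps for a fully serial sampler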

Neural Machine Translation in Linear Time

11 code implementations 31 Oct 2016 Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, Koray Kavukcuoglu

The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence.

Decoder · Language Modelling +3
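
Linear time comes from replacing recurrence with stacked dilated 1-D convolutions whose receptive field doubles with each layer, so a whole sequence is processed in one parallel pass. A minimal sketch of such an encoder stack follows; channel count and dilation schedule are illustrative, and the decoder's causal masking, residual blocks, and dynamic unfolding are omitted.

import torch.nn as nn

def dilated_encoder(channels=256, dilations=(1, 2, 4, 8, 16)):
    layers = []
    for d in dilations:
        # padding=d keeps the sequence length fixed for kernel size 3.
        layers += [nn.Conv1d(channels, channels, kernel_size=3,
                             dilation=d, padding=d),
                   nn.ReLU()]
    return nn.Sequential(*layers)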

Video Pixel Networks

1 code implementation ICML 2017 Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, Koray Kavukcuoglu

The VPN approaches the best possible performance on the Moving MNIST benchmark, a leap over the previous state of the art, and the generated videos show only minor deviations from the ground truth.

Ranked #1 on Video Prediction on KTH (Cond metric)

Video Prediction

Associative Long Short-Term Memory

3 code implementations 9 Feb 2016 Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, Alex Graves

We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters.

Memorization · Retrieval
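
The parameter-free extra memory comes from holographic reduced representations: items are bound to complex-valued keys by elementwise multiplication and superposed into a single fixed-size trace, and multiplying by a key's conjugate retrieves a noisy copy of its value. A minimal sketch of the bind/retrieve arithmetic (the LSTM gating around it, and the redundant copies used to reduce noise, are omitted):

import torch

def store(pairs):
    # Superpose key-value bindings; keys are unit-modulus complex vectors.
    return sum(k * v for k, v in pairs)

def retrieve(trace, key):
    # Conjugate-key multiplication recovers the value plus crosstalk noise.
    return torch.conj(key) * trace

k1, k2 = [torch.exp(1j * 2 * torch.pi * torch.rand(8)) for _ in range(2)]
v1, v2 = torch.randn(8, dtype=torch.cfloat), torch.randn(8, dtype=torch.cfloat)
trace = store([(k1, v1), (k2, v2)])
noisy_v1 = retrieve(trace, k1)  # ≈ v1, corrupted by conj(k1) * k2 * v2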

Pixel Recurrent Neural Networks

18 code implementations 25 Jan 2016 Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu

Modeling the distribution of natural images is a landmark problem in unsupervised learning.

Image Generation
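
These models factorize the image distribution autoregressively, p(x) = ∏ p(x_i | x_1, ..., x_{i-1}), and in the convolutional variant the ordering is enforced architecturally by masking the kernel so each output depends only on pixels above and to the left. A minimal sketch of such a masked convolution (the per-channel RGB ordering masks are omitted):

import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    # Mask type 'A' hides the center pixel (first layer); 'B' allows it.
    def __init__(self, *args, mask_type='A', **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones(kH, kW)
        mask[kH // 2, kW // 2 + (mask_type == 'B'):] = 0  # at/after center
        mask[kH // 2 + 1:] = 0                            # rows below center
        self.register_buffer('mask', mask)

    def forward(self, x):
        self.weight.data *= self.mask  # zero out future-pixel weights
        return super().forward(x)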

Grid Long Short-Term Memory

1 code implementation 6 Jul 2015 Nal Kalchbrenner, Ivo Danihelka, Alex Graves

This paper introduces Grid Long Short-Term Memory, a network of LSTM cells arranged in a multidimensional grid that can be applied to vectors, sequences or higher dimensional data such as images.

Language Modelling · Memorization +1
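
Concretely, a grid block holds one LSTM cell per dimension, each with its own hidden and memory vectors; every cell reads the concatenated hidden states and updates its own pair, so memory flows along every axis of the grid, including depth. A minimal 2-D sketch (time and depth axes) with illustrative sizes, not the paper's reference code:

import torch
import torch.nn as nn

class GridLSTM2DBlock(nn.Module):
    def __init__(self, size):
        super().__init__()
        self.time_cell = nn.LSTMCell(2 * size, size)   # sequence axis
        self.depth_cell = nn.LSTMCell(2 * size, size)  # layer axis

    def forward(self, time_state, depth_state):
        # Each state is an (h, c) pair; both cells see both hidden vectors.
        h = torch.cat([time_state[0], depth_state[0]], dim=-1)
        return self.time_cell(h, time_state), self.depth_cell(h, depth_state)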

Resolving Lexical Ambiguity in Tensor Regression Models of Meaning

no code implementations ACL 2014 Dimitri Kartsaklis, Nal Kalchbrenner, Mehrnoosh Sadrzadeh

This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition.

Regression

Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network

no code implementations 15 Jun 2014 Misha Denil, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, Nando de Freitas

Capturing the compositional process which maps the meaning of words to that of documents is a central challenge for researchers in Natural Language Processing and Information Retrieval.

Feature Engineering · Information Retrieval +2
