Search Results for author: Thomas Bird

Found 7 papers, 1 paper with code

A Memory Transformer Network for Incremental Learning

no code implementations • 10 Oct 2022 • Ahmet Iscen, Thomas Bird, Mathilde Caron, Alireza Fathi, Cordelia Schmid

We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
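The class-incremental setup described in this abstract can be sketched with a minimal data-splitting helper. This is an illustrative sketch only; the function name and task structure are assumptions, not taken from the paper:

```python
from collections import defaultdict

def make_class_incremental_tasks(samples, classes_per_task):
    """Split (x, label) pairs into sequential tasks, each introducing new classes."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append((x, y))
    labels = sorted(by_class)
    tasks = []
    for i in range(0, len(labels), classes_per_task):
        chunk = labels[i:i + classes_per_task]
        # each task contains only samples whose classes are new at this step
        tasks.append([s for c in chunk for s in by_class[c]])
    return tasks

# Example: 6 classes arriving 2 at a time yields 3 sequential tasks
data = [(f"img{k}", k % 6) for k in range(12)]
tasks = make_class_incremental_tasks(data, classes_per_task=2)
```

The model is then trained on each task in order, without revisiting earlier tasks' data, which is what makes forgetting a central concern in this setting.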

Class-Incremental Learning • Incremental Learning

3D Scene Compression through Entropy Penalized Neural Representation Functions

no code implementations • 26 Apr 2021 • Thomas Bird, Johannes Ballé, Saurabh Singh, Philip A. Chou

We unify these steps by directly compressing an implicit representation of the scene, a function that maps spatial coordinates to a radiance vector field, which can then be queried to render arbitrary viewpoints.
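A toy sketch of what "a function that maps spatial coordinates to a radiance vector field" means in queryable form. The real model is a trained neural network; this stand-in uses a single hand-weighted linear layer with a sigmoid, and all names and weights here are illustrative assumptions:

```python
import math

def radiance_field(x, y, z, weights):
    """Toy implicit scene function: map a 3D coordinate to an RGB-like vector.

    `weights` is a list of (wx, wy, wz, bias) tuples, one per output channel.
    """
    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))
    return [sigmoid(wx * x + wy * y + wz * z + b) for (wx, wy, wz, b) in weights]

# Query the "scene" at an arbitrary coordinate; compressing the weights
# of such a function is the core idea of compressing the scene itself.
toy_weights = [(0.5, -0.2, 0.1, 0.0), (0.3, 0.3, 0.3, -1.0), (-0.4, 0.2, 0.6, 0.5)]
rgb = radiance_field(0.2, 0.4, 0.6, toy_weights)  # three values in (0, 1)
```

Because the scene is stored as function parameters rather than voxels or meshes, compressing the scene reduces to compressing those parameters, which is the step the paper's entropy penalty targets.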

Reducing the Computational Cost of Deep Generative Models with Binary Neural Networks

no code implementations • ICLR 2021 • Thomas Bird, Friso H. Kingma, David Barber

In this work we show, for the first time, that we can successfully train generative models which utilize binary neural networks.

HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models

1 code implementation • ICLR 2020 • James Townsend, Thomas Bird, Julius Kunze, David Barber

We make the following striking observation: fully convolutional VAE models trained on 32x32 ImageNet can generalize well, not just to 64x64 but also to far larger photographs, with no changes to the model.

Image Compression

Variational f-divergence Minimization

no code implementations • 27 Jul 2019 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific f-divergence between the model and data distribution.
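The correspondence stated in this abstract, written out: maximum likelihood is equivalent to minimizing the KL divergence from the data distribution to the model, which is the f-divergence with $f(t) = t \log t$.

```latex
\hat{\theta}_{\mathrm{ML}}
  = \arg\max_\theta \, \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log p_\theta(x)\right]
  = \arg\min_\theta \, \mathrm{KL}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right),
```

since $\mathrm{KL}(p_{\mathrm{data}} \| p_\theta) = \mathbb{E}_{p_{\mathrm{data}}}[\log p_{\mathrm{data}}(x)] - \mathbb{E}_{p_{\mathrm{data}}}[\log p_\theta(x)]$ and the first term does not depend on $\theta$.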

Image Generation

Training generative latent models by variational f-divergence minimization

no code implementations • 27 Sep 2018 • Mingtian Zhang, Thomas Bird, Raza Habib, Tianlin Xu, David Barber

Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution.

Stochastic Variational Optimization

no code implementations • 13 Sep 2018 • Thomas Bird, Julius Kunze, David Barber

These approaches are of particular interest because they are parallelizable.
