Search Results for author: Lucas Theis

Found 31 papers, 12 papers with code

What makes an image realistic?

no code implementations • 7 Mar 2024 • Lucas Theis

The last decade has seen tremendous progress in our ability to generate realistic-looking data, be it images, text, audio, or video.

C3: High-performance and low-complexity neural compression from a single image or video

1 code implementation • CVPR 2024 • Hyunjik Kim, Matthias Bauer, Lucas Theis, Jonathan Richard Schwarz, Emilien Dupont

On the UVG video benchmark, we match the RD performance of the Video Compression Transformer (Mentzer et al.), a well-established neural video codec, with less than 5k MACs/pixel for decoding.

Video Compression

The Unreasonable Effectiveness of Linear Prediction as a Perceptual Metric

1 code implementation • 6 Oct 2023 • Daniel Severo, Lucas Theis, Johannes Ballé

We show how perceptual embeddings of the visual system can be constructed at inference-time with no training data or deep neural network features.

Full-Reference Image Quality Assessment, MS-SSIM, +1
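The training-free flavor of this idea can be illustrated with a toy 1-D sketch: least-squares linear-prediction coefficients, fitted at inference time from the signal itself with no training data, summarize local structure. This is a hypothetical illustration of linear prediction only, not the paper's actual embedding; `lp_coefficients` and the AR(1) test signal are made up for this sketch.

```python
import numpy as np

def lp_coefficients(x, order=4):
    """Fit least-squares linear-prediction coefficients for a 1-D signal.

    Each sample is predicted from its `order` predecessors; the fitted
    weights act as a training-free summary of local structure.
    """
    X = np.stack([x[i:len(x) - order + i] for i in range(order)], axis=1)
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
# An AR(1) signal: the best linear predictor puts weight ~0.9 on lag 1
# (the last column) and ~0 on the older lags.
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.9 * x[t - 1] + 0.1 * rng.standard_normal()
coef = lp_coefficients(x)
```

Because an AR(1) process is Markov, the recovered weights concentrate on the most recent lag, which is what makes the coefficients informative about signal structure.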

Wasserstein Distortion: Unifying Fidelity and Realism

no code implementations • 5 Oct 2023 • Yang Qiu, Aaron B. Wagner, Johannes Ballé, Lucas Theis

We introduce a distortion measure for images, Wasserstein distortion, that simultaneously generalizes pixel-level fidelity on the one hand and realism or perceptual quality on the other.

Texture Synthesis
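The fidelity-realism dial can be illustrated with a toy 1-D sketch (this is not the paper's Wasserstein distortion, just an analogy): comparing sorted values within a pooling region gives a Wasserstein-type statistic that ignores pixel alignment, while a pooling region of size one reduces to pixel-level error.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two 1-D empirical distributions (sorted samples)."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = rng.permutation(x)          # same "texture" statistics, pixels misaligned

realism_gap = wasserstein_1d(x, y)      # pooled statistics: identical
fidelity_gap = np.mean(np.abs(x - y))   # pixel level: large error
```

A shuffled signal has zero distortion under the pooled statistic but large pixelwise error, which is the tension a unified measure has to interpolate between.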

High-Fidelity Image Compression with Score-based Generative Models

no code implementations • 26 May 2023 • Emiel Hoogeboom, Eirikur Agustsson, Fabian Mentzer, Luca Versari, George Toderici, Lucas Theis

Despite the tremendous success of diffusion generative models in text-to-image generation, replicating this success in the domain of image compression has proven difficult.

Decoder, Image Compression, +1

Lossy Compression with Gaussian Diffusion

no code implementations • 17 Jun 2022 • Lucas Theis, Tim Salimans, Matthew D. Hoffman, Fabian Mentzer

Unlike modern compression schemes which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise.

Quantization
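The transmit-noisy-pixels idea can be sketched in one dimension: send y = x + Gaussian noise and let the receiver denoise it. Here a closed-form MMSE estimator under a Gaussian prior stands in for the diffusion model; this is a toy illustration, not the DiffC scheme itself.

```python
import numpy as np

def mmse_denoise(y, prior_var, noise_var):
    """Posterior mean of x given y = x + noise, both Gaussian.

    In DiffC, a diffusion model would play this denoising role for images.
    """
    return prior_var / (prior_var + noise_var) * y

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)            # "pixels" with unit prior variance
noise_var = 0.5
y = x + np.sqrt(noise_var) * rng.standard_normal(x.shape)

mse_noisy = np.mean((y - x) ** 2)                                   # ~0.5
mse_denoised = np.mean((mmse_denoise(y, 1.0, noise_var) - x) ** 2)  # ~1/3
```

The denoised error (prior_var * noise_var / (prior_var + noise_var) = 1/3 here) is strictly below the raw channel noise, and a stronger prior model shrinks it further.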

An Introduction to Neural Data Compression

3 code implementations • 14 Feb 2022 • Yibo Yang, Stephan Mandt, Lucas Theis

Neural compression is the application of neural networks and other machine learning methods to data compression.

BIG-bench Machine Learning, Data Compression, +1

Optimal Compression of Locally Differentially Private Mechanisms

no code implementations • 29 Oct 2021 • Abhin Shah, Wei-Ning Chen, Johannes Ballé, Peter Kairouz, Lucas Theis

Compressing the output of ε-locally differentially private (LDP) randomizers naively leads to suboptimal utility.
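As a concrete example of an ε-LDP randomizer whose output is compressible in principle, consider textbook binary randomized response. This is an illustration of the setting, not the paper's compression scheme.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit w.p. e^eps / (e^eps + 1), else the flipped bit.

    This satisfies eps-local differential privacy, but the report occupies
    a full bit even though it carries less than one bit of information
    about the input -- the gap that compression can exploit.
    """
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

rng = np.random.default_rng(0)
reports = [randomized_response(1, epsilon=1.0, rng=rng) for _ in range(10_000)]
frac_true = np.mean(reports)   # ~ e / (e + 1) ~= 0.731
```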

Algorithms for the Communication of Samples

no code implementations • 25 Oct 2021 • Lucas Theis, Noureldin Yosri

The efficient communication of noisy data has applications in several areas of machine learning, such as neural compression or differential privacy, and is also known as reverse channel coding or the channel simulation problem.

Quantization
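One family of reverse-channel-coding schemes works by importance sampling from a shared prior: both sides generate the same K candidates from a common seed, the encoder picks an index with probability proportional to the importance weights q/p, and only the index is transmitted. Below is a minimal sketch under a Gaussian prior, in the spirit of minimal-random-coding schemes rather than the paper's exact algorithms; the function names are illustrative.

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def communicate_sample(mu_q, sigma_q, num_candidates, seed):
    """Encoder: draw shared-prior candidates, pick one by importance weight.

    Sender and receiver share the N(0, 1) prior and the seed, so only the
    chosen index needs to be transmitted (roughly KL(q || p) bits plus
    overhead, for large enough num_candidates).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(num_candidates)          # candidates from prior p
    logw = normal_logpdf(z, mu_q, sigma_q) - normal_logpdf(z, 0.0, 1.0)
    w = np.exp(logw - logw.max())
    return rng.choice(num_candidates, p=w / w.sum())

def decode_sample(idx, num_candidates, seed):
    """Decoder: regenerate the same candidates and look up the index."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(num_candidates)
    return z[idx]

idx = communicate_sample(mu_q=1.0, sigma_q=0.5, num_candidates=1024, seed=7)
z_hat = decode_sample(idx, num_candidates=1024, seed=7)
```

The shared seed is what makes the index meaningful: the decoder reconstructs the identical candidate list, so the transmitted integer pins down one (approximate) sample from q.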

A coding theorem for the rate-distortion-perception function

no code implementations • ICLR Workshop Neural_Compression 2021 • Lucas Theis, Aaron B. Wagner

The rate-distortion-perception function (RDPF; Blau and Michaeli, 2019) has emerged as a useful tool for thinking about realism and distortion of reconstructions in lossy compression.
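For reference, the rate-distortion-perception function of Blau and Michaeli (2019), which a coding theorem of this kind gives an operational meaning to, is defined as

```latex
R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
\quad \text{s.t.} \quad
\mathbb{E}\big[\Delta(X, \hat{X})\big] \le D,
\qquad
d\big(p_X, p_{\hat{X}}\big) \le P,
```

where Δ is a per-sample distortion measure and d is a divergence between the source and reconstruction distributions; the constraint on d is what formalizes "realism".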

Importance weighted compression

no code implementations • ICLR Workshop Neural_Compression 2021 • Lucas Theis, Jonathan Ho

The connection between variational autoencoders (VAEs) and compression is well established and they have been used for both lossless and lossy compression.

On the advantages of stochastic encoders

no code implementations • ICLR Workshop Neural_Compression 2021 • Lucas Theis, Eirikur Agustsson

Stochastic encoders have been used in rate-distortion theory and neural compression because they can be easier to handle.

Universally Quantized Neural Compression

no code implementations • NeurIPS 2020 • Eirikur Agustsson, Lucas Theis

A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization.

Quantization
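The training-time relaxation described here can be sketched in a few lines: additive noise drawn from U(-0.5, 0.5) is differentiable in y and perturbs it by at most 0.5, just like hard rounding at test time. This is a minimal illustration of the noise-for-rounding substitution, not the paper's universal quantization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_quantize_train(y, rng):
    """Training-time proxy: add U(-0.5, 0.5) noise instead of rounding.

    Unlike np.round, this is differentiable w.r.t. y (the noise acts as a
    constant offset in the backward pass).
    """
    return y + rng.uniform(-0.5, 0.5, size=y.shape)

def quantize_test(y):
    """Test-time behaviour: hard rounding to the integer grid."""
    return np.round(y)

y = rng.normal(0.0, 4.0, size=100_000)
noisy = soft_quantize_train(y, rng)
rounded = quantize_test(y)
```

Both operations perturb y by at most 0.5, and the noise variance (1/12) matches that of the rounding error for a smooth input distribution, which is what makes the proxy a good stand-in during training.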

Discriminative Topic Modeling with Logistic LDA

1 code implementation • NeurIPS 2019 • Iryna Korshunova, Hanchen Xiong, Mateusz Fedoryszak, Lucas Theis

We propose logistic LDA, a novel discriminative variant of latent Dirichlet allocation which is easy to apply to arbitrary inputs.

Topic Models

Addressing Delayed Feedback for Continuous Training with Neural Networks in CTR prediction

no code implementations • 15 Jul 2019 • Sofia Ira Ktena, Alykhan Tejani, Lucas Theis, Pranay Kumar Myana, Deepak Dilipkumar, Ferenc Huszár, Steven Yoo, Wenzhe Shi

The focus of this paper is to identify the best combination of loss functions and models that enable large-scale learning from a continuous stream of data in the presence of delayed labels.

Click-Through Rate Prediction

Faster gaze prediction with dense networks and Fisher pruning

2 code implementations • Twitter 2018 • Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár

Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition.

Gaze Estimation, Gaze Prediction, +3

Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize

3 code implementations • 10 Jul 2017 • Andrew Aitken, Christian Ledig, Lucas Theis, Jose Caballero, Zehan Wang, Wenzhe Shi

Compared to sub-pixel convolution initialized with schemes designed for standard convolution kernels, it is free from checkerboard artifacts immediately after initialization.
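The artifact-free initialization can be sketched as follows: initialize the convolution that feeds an r x r pixel shuffle so that each group of r² output channels shares one kernel, making the sub-pixel convolution equivalent to nearest-neighbour upsampling of a single convolution right after initialization. This is a sketch of an ICNR-style scheme with illustrative names, not the authors' reference code.

```python
import numpy as np

def icnr_init(out_ch, in_ch, k, r, rng):
    """ICNR-style init for a conv whose output feeds an r x r pixel shuffle.

    Each group of r**2 consecutive output channels shares one random
    kernel, so at initialization the sub-pixel convolution acts like
    nearest-neighbour upsampling of a single conv's output, which is free
    of checkerboard artifacts.
    """
    assert out_ch % (r * r) == 0
    sub = rng.normal(0.0, 0.02, size=(out_ch // (r * r), in_ch, k, k))
    return np.repeat(sub, r * r, axis=0)

rng = np.random.default_rng(0)
w = icnr_init(out_ch=12, in_ch=3, k=3, r=2, rng=rng)
```

After training begins the copies diverge, but starting from an artifact-free operator avoids baking checkerboard patterns into early updates.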

Lossy Image Compression with Compressive Autoencoders

4 code implementations • 1 Mar 2017 • Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár

We propose a new approach to the problem of optimizing autoencoders for lossy image compression.

Image Compression

Fast Face-swap Using Convolutional Neural Networks

no code implementations • ICCV 2017 • Iryna Korshunova, Wenzhe Shi, Joni Dambre, Lucas Theis

We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting.

Face Swapping, Style Transfer

Amortised MAP Inference for Image Super-resolution

no code implementations • 14 Oct 2016 • Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár

We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models.

Denoising, Image Super-Resolution, +1

Is the deconvolution layer the same as a convolutional layer?

6 code implementations • 22 Sep 2016 • Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszár, Andrew Aitken, Christian Ledig, Zehan Wang

In this note, we focus on the two questions people most often asked us at CVPR about the network we presented.

A note on the evaluation of generative models

1 code implementation • 5 Nov 2015 • Lucas Theis, Aäron van den Oord, Matthias Bethge

In particular, we show that three of the most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.

Denoising, Texture Synthesis
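One of the three criteria, the Parzen window estimate, can be sketched directly: fit a Gaussian kernel density to model samples and evaluate test points under it. This is a minimal illustration of the estimator the paper critiques; the function name is made up here.

```python
import numpy as np

def parzen_log_likelihood(test_x, samples, sigma):
    """Log-likelihood of test points under a Gaussian KDE fit to samples.

    test_x: (T, D) points to evaluate; samples: (S, D) model samples.
    """
    d = test_x[:, None, :] - samples[None, :, :]                  # (T, S, D)
    log_k = -0.5 * np.sum(d ** 2, axis=-1) / sigma ** 2
    log_k -= 0.5 * test_x.shape[1] * np.log(2 * np.pi * sigma ** 2)
    m = log_k.max(axis=1, keepdims=True)                          # stable log-mean-exp
    return m[:, 0] + np.log(np.mean(np.exp(log_k - m), axis=1))

rng = np.random.default_rng(0)
samples = rng.standard_normal((20_000, 1))
# A KDE of N(0, 1) samples with bandwidth sigma approximates N(0, 1 + sigma^2).
ll = parzen_log_likelihood(np.array([[0.0]]), samples, sigma=0.2)
```

In high dimensions the kernel bandwidth dominates the score, which is one reason Parzen estimates can disagree sharply with true log-likelihood.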

Generative Image Modeling Using Spatial LSTMs

no code implementations • NeurIPS 2015 • Lucas Theis, Matthias Bethge

Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels.

Ranked #63 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation, Texture Synthesis

A trust-region method for stochastic variational inference with applications to streaming data

no code implementations • 28 May 2015 • Lucas Theis, Matthew D. Hoffman

However, the algorithm is prone to local optima which can make the quality of the posterior approximation sensitive to the choice of hyperparameters and initialization.

Variational Inference

A Generative Model of Natural Texture Surrogates

no code implementations • 28 May 2015 • Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge

To model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel patches from a large database of natural images. Each patch is then described by 655 texture parameters specifying statistics such as variances and covariances of wavelet coefficients, or of coefficient magnitudes, within that patch.

Image Compression

Supervised learning sets benchmark for robust spike detection from calcium imaging signals

no code implementations • 28 Feb 2015 • Lucas Theis, Philipp Berens, Emmanouil Froudarakis, Jacob Reimer, Miroslav Román Rosón, Tom Baden, Thomas Euler, Andreas Tolias, Matthias Bethge

A fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces.

Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

1 code implementation • 4 Nov 2014 • Matthias Kümmerer, Lucas Theis, Matthias Bethge

Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations.

Object Recognition, Point Processes, +1

Training sparse natural image models with a fast Gibbs sampler of an extended state space

no code implementations • NeurIPS 2012 • Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge

We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models.
