no code implementations • 7 Mar 2024 • Lucas Theis
The last decade has seen tremendous progress in our ability to generate realistic-looking data, be it images, text, audio, or video.
1 code implementation • CVPR 2024 • Hyunjik Kim, Matthias Bauer, Lucas Theis, Jonathan Richard Schwarz, Emilien Dupont
On the UVG video benchmark, we match the rate-distortion (RD) performance of the Video Compression Transformer (Mentzer et al., 2022), a well-established neural video codec, with less than 5k MACs/pixel for decoding.
1 code implementation • 6 Oct 2023 • Daniel Severo, Lucas Theis, Johannes Ballé
We show how perceptual embeddings of the visual system can be constructed at inference time with no training data or deep neural network features.
no code implementations • 5 Oct 2023 • Yang Qiu, Aaron B. Wagner, Johannes Ballé, Lucas Theis
We introduce a distortion measure for images, Wasserstein distortion, that simultaneously generalizes pixel-level fidelity on the one hand and realism or perceptual quality on the other.
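As background for the name only, here is a minimal sketch of the squared 2-Wasserstein distance between two one-dimensional empirical distributions; this is just the underlying transport quantity, not the paper's distortion measure itself.

```python
import numpy as np

def wasserstein2_1d(u: np.ndarray, v: np.ndarray) -> float:
    """Squared 2-Wasserstein distance between two empirical 1-D
    distributions with equal sample counts, computed via sorted
    samples (the optimal coupling in 1-D matches order statistics)."""
    assert u.shape == v.shape
    u_sorted, v_sorted = np.sort(u), np.sort(v)
    return float(np.mean((u_sorted - v_sorted) ** 2))
```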
no code implementations • 26 May 2023 • Emiel Hoogeboom, Eirikur Agustsson, Fabian Mentzer, Luca Versari, George Toderici, Lucas Theis
Despite the tremendous success of diffusion generative models in text-to-image generation, replicating this success in the domain of image compression has proven difficult.
no code implementations • 17 Jun 2022 • Lucas Theis, Tim Salimans, Matthew D. Hoffman, Fabian Mentzer
Unlike modern compression schemes, which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise.
3 code implementations • 14 Feb 2022 • Yibo Yang, Stephan Mandt, Lucas Theis
Neural compression is the application of neural networks and other machine learning methods to data compression.
no code implementations • 29 Oct 2021 • Abhin Shah, Wei-Ning Chen, Johannes Ballé, Peter Kairouz, Lucas Theis
Compressing the output of ε-locally differentially private (LDP) randomizers naively leads to suboptimal utility.
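For context, a textbook example of an ε-LDP randomizer whose output such compression schemes would act on is binary randomized response; the sketch below is an illustration, not one of the mechanisms studied in the paper.

```python
import numpy as np

def randomized_response(bit: int, epsilon: float,
                        rng: np.random.Generator) -> int:
    """Classic binary epsilon-LDP randomizer: report the true bit
    with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit
```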
no code implementations • 25 Oct 2021 • Lucas Theis, Noureldin Yosri
The efficient communication of noisy data has applications in several areas of machine learning, such as neural compression or differential privacy, and is also known as reverse channel coding or the channel simulation problem.
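A minimal sketch of one scheme from this literature, minimal random coding (Havasi et al., 2019), assuming sender and receiver share the prior p and a common random seed; the function names and signatures here are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def minimal_random_coding(sample_p, log_q, log_p, n: int, seed: int):
    """Sender and receiver draw the same n candidates from a shared
    prior p using a shared seed; the sender picks one candidate with
    probability proportional to the importance weight q/p and
    transmits only its index, costing roughly KL(q||p) bits."""
    rng = np.random.default_rng(seed)          # shared randomness
    candidates = [sample_p(rng) for _ in range(n)]
    log_w = np.array([log_q(z) - log_p(z) for z in candidates])
    probs = np.exp(log_w - logsumexp(log_w))   # normalized weights
    index = int(rng.choice(n, p=probs))
    return index, candidates[index]
```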
no code implementations • ICLR Workshop Neural Compression 2021 • Lucas Theis, Aaron B. Wagner
The rate-distortion-perception function (RDPF; Blau and Michaeli, 2019) has emerged as a useful tool for thinking about realism and distortion of reconstructions in lossy compression.
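For reference, the rate-distortion-perception function of Blau and Michaeli (2019) augments the classical rate-distortion problem with a constraint on the divergence between the source and reconstruction distributions:

```latex
R(D, P) \;=\; \min_{p_{\hat{X} \mid X}} \; I(X; \hat{X})
\quad \text{s.t.} \quad
\mathbb{E}\big[\Delta(X, \hat{X})\big] \le D,
\qquad
d\big(p_X, p_{\hat{X}}\big) \le P.
```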
no code implementations • ICLR Workshop Neural Compression 2021 • Lucas Theis, Jonathan Ho
The connection between variational autoencoders (VAEs) and compression is well established, and VAEs have been used for both lossless and lossy compression.
no code implementations • ICLR Workshop Neural Compression 2021 • Lucas Theis, Eirikur Agustsson
Stochastic encoders have been used in rate-distortion theory and neural compression because they can be easier to handle.
no code implementations • NeurIPS 2020 • Eirikur Agustsson, Lucas Theis
A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization.
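The training-time trick in question can be stated in a few lines; a minimal PyTorch sketch, assuming a scalar quantization bin width of 1:

```python
import torch

def soft_quantize(y: torch.Tensor, training: bool) -> torch.Tensor:
    """Additive uniform noise in [-0.5, 0.5) as a differentiable
    training-time surrogate for test-time rounding."""
    if training:
        return y + (torch.rand_like(y) - 0.5)
    return torch.round(y)
```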
1 code implementation • NeurIPS 2019 • Iryna Korshunova, Hanchen Xiong, Mateusz Fedoryszak, Lucas Theis
We propose logistic LDA, a novel discriminative variant of latent Dirichlet allocation which is easy to apply to arbitrary inputs.
no code implementations • 15 Jul 2019 • Sofia Ira Ktena, Alykhan Tejani, Lucas Theis, Pranay Kumar Myana, Deepak Dilipkumar, Ferenc Huszár, Steven Yoo, Wenzhe Shi
The focus of this paper is to identify the best combination of loss functions and models that enable large-scale learning from a continuous stream of data in the presence of delayed labels.
3 code implementations • ICCV 2019 • Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang
This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.
2 code implementations • Twitter 2018 • Lucas Theis, Iryna Korshunova, Alykhan Tejani, Ferenc Huszár
Predicting human fixations from images has recently seen large improvements by leveraging deep representations which were pretrained for object recognition.
3 code implementations • 10 Jul 2017 • Andrew Aitken, Christian Ledig, Lucas Theis, Jose Caballero, Zehan Wang, Wenzhe Shi
Compared to sub-pixel convolution initialized with schemes designed for standard convolution kernels, it is free from checkerboard artifacts immediately after initialization.
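A minimal PyTorch sketch of such an initialization (the function name is mine): it makes the sub-pixel convolution equivalent to nearest-neighbour upsampling followed by a standard convolution at initialization, which is why no checkerboard pattern appears.

```python
import torch
import torch.nn as nn

def icnr_init_(weight: torch.Tensor, scale: int = 2,
               init=nn.init.kaiming_normal_) -> None:
    """Initialize a sub-pixel convolution kernel so that, after pixel
    shuffle, all sub-pixel positions of each output channel start out
    identical, avoiding checkerboard artifacts at initialization."""
    out_channels, in_channels, h, w = weight.shape
    assert out_channels % scale ** 2 == 0
    sub = torch.empty(out_channels // scale ** 2, in_channels, h, w)
    init(sub)
    # PixelShuffle maps consecutive blocks of scale^2 input channels
    # to the sub-pixel positions of one output channel, so repeating
    # each base filter scale^2 times makes those positions identical.
    with torch.no_grad():
        weight.copy_(sub.repeat_interleave(scale ** 2, dim=0))
```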
4 code implementations • 1 Mar 2017 • Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár
We propose a new approach to the problem of optimizing autoencoders for lossy image compression.
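One common way to keep the rounding step trainable, in the spirit of this approach, is to replace its derivative with the identity in the backward pass; a minimal PyTorch sketch:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Rounding with a straight-through gradient: quantize in the
    forward pass, pass gradients through unchanged in the backward
    pass, so the autoencoder can be trained end to end."""

    @staticmethod
    def forward(ctx, y):
        return torch.round(y)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output
```

Usage is simply `z_hat = RoundSTE.apply(z)` inside the autoencoder's bottleneck.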
no code implementations • ICCV 2017 • Iryna Korshunova, Wenzhe Shi, Joni Dambre, Lucas Theis
We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting.
no code implementations • 14 Oct 2016 • Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár
We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models.
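Schematically, with q_θ the distribution of the amortised estimator's outputs and p_X the natural image distribution, the objective becomes a cross-entropy:

```latex
\min_\theta \; H\big(q_\theta, p_X\big)
\;=\; \min_\theta \; -\,\mathbb{E}_{\hat{x} \sim q_\theta}\!\big[\log p_X(\hat{x})\big].
```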
6 code implementations • 22 Sep 2016 • Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszár, Andrew Aitken, Christian Ledig, Zehan Wang
In this note, we focus on the two questions we were asked most often at CVPR about the network we presented.
140 code implementations • CVPR 2017 • Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi
The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images.
Ranked #3 on Image Super-Resolution on VggFace2 (8× upscaling)
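A minimal sketch of such an adversarial term, assuming a discriminator that outputs logits; in SRGAN this term is combined with a content loss computed on VGG features.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(discriminator, sr_images: torch.Tensor) -> torch.Tensor:
    """Non-saturating generator loss: push super-resolved images
    toward the natural image manifold by maximizing the
    discriminator's probability that they are real."""
    logits = discriminator(sr_images)
    return F.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
```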
1 code implementation • 5 Nov 2015 • Lucas Theis, Aäron van den Oord, Matthias Bethge
In particular, we show that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
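To make one criterion concrete, here is a minimal sketch of a Parzen window (kernel density) estimate of log-likelihood from model samples, the estimator whose high-dimensional weaknesses the paper highlights:

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(samples: np.ndarray, test: np.ndarray,
                          sigma: float) -> np.ndarray:
    """Log-likelihood of test points under a Gaussian kernel density
    fit to model samples: log (1/n) sum_i N(x; s_i, sigma^2 I)."""
    n, d = samples.shape
    diffs = test[:, None, :] - samples[None, :, :]          # (m, n, d)
    log_kernel = (-0.5 * np.sum(diffs ** 2, axis=-1) / sigma ** 2
                  - 0.5 * d * np.log(2.0 * np.pi * sigma ** 2))
    return logsumexp(log_kernel, axis=1) - np.log(n)        # (m,)
```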
no code implementations • NeurIPS 2015 • Lucas Theis, Matthias Bethge
Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels.
Ranked #63 on Image Generation on CIFAR-10 (bits/dimension metric)
no code implementations • 28 May 2015 • Lucas Theis, Matthew D. Hoffman
However, the algorithm is prone to local optima, which can make the quality of the posterior approximation sensitive to the choice of hyperparameters and initialization.
no code implementations • 28 May 2015 • Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge
In order to model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel image patches from a large database of natural images. Each patch is then described by 655 texture parameters which specify certain statistics, such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch.
no code implementations • 28 Feb 2015 • Lucas Theis, Philipp Berens, Emmanouil Froudarakis, Jacob Reimer, Miroslav Román Rosón, Tom Baden, Thomas Euler, Andreas Tolias, Matthias Bethge
A fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces.
1 code implementation • 4 Nov 2014 • Matthias Kümmerer, Lucas Theis, Matthias Bethge
Recent results suggest that state-of-the-art saliency models are far from optimal in predicting fixations.
no code implementations • 17 Oct 2014 • Reshad Hosseini, Suvrit Sra, Lucas Theis, Matthias Bethge
We study modeling and inference with the Elliptical Gamma Distribution (EGD).
no code implementations • NeurIPS 2012 • Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge
We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models.
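A generic two-block Gibbs skeleton, to fix the structure of the approach; this is not the paper's specific sampler for sparse overcomplete linear models, and the conditional samplers are assumed to be supplied by the model.

```python
import numpy as np

def blocked_gibbs(z1, z2, sample_z1_given_z2, sample_z2_given_z1,
                  n_iter: int, rng: np.random.Generator):
    """Alternate exact draws from the conditional distribution of
    each block of variables given the other block."""
    samples = []
    for _ in range(n_iter):
        z1 = sample_z1_given_z2(z2, rng)   # draw block 1 | block 2
        z2 = sample_z2_given_z1(z1, rng)   # draw block 2 | block 1
        samples.append((z1, z2))
    return samples
```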