no code implementations • 28 Dec 2022 • Eirikur Agustsson, David Minnen, George Toderici, Fabian Mentzer
By optimizing the rate-distortion-realism trade-off, generative compression approaches produce detailed, realistic images, even at low bit rates, instead of the blurry reconstructions produced by rate-distortion optimized models.
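The entry above names a three-way objective; a minimal sketch of what such a weighted trade-off could look like is below. The weights, term names, and numbers are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def rate_distortion_realism_loss(rate_bpp, mse, realism_penalty,
                                 lambda_distortion=0.01, lambda_realism=2.0):
    """Hypothetical weighted objective trading off bit rate, pixel-wise
    distortion, and a realism term (e.g. an adversarial loss)."""
    return rate_bpp + lambda_distortion * mse + lambda_realism * realism_penalty

# Toy usage with made-up numbers: a low-rate operating point where the
# realism term dominates, favoring detailed (if less faithful) textures.
print(rate_distortion_realism_loss(rate_bpp=0.15, mse=42.0, realism_penalty=0.8))
```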
no code implementations • 17 Jun 2022 • Lucas Theis, Tim Salimans, Matthew D. Hoffman, Fabian Mentzer
Unlike modern compression schemes which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise.
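A toy numerical sketch of the primitive named above, communicating pixels corrupted by Gaussian noise and denoising them on the receiver side. The denoiser here is a trivial Wiener-style estimate standing in for the learned model an actual system would rely on; everything else (sizes, noise level) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(0.0, 1.0, size=(8, 8))          # original pixels (stand-in image)
sigma = 0.3                                      # Gaussian channel noise level
y = x + sigma * rng.normal(size=x.shape)         # noisy pixels that get communicated

# Crude receiver-side denoising: posterior-mean estimate under a Gaussian prior
# whose statistics are estimated from the noisy observation itself.
mean_est = y.mean()
var_est = max(y.var() - sigma ** 2, 1e-6)
x_hat = mean_est + var_est / (var_est + sigma ** 2) * (y - mean_est)

print("MSE of noisy pixels:", float(np.mean((y - x) ** 2)))
print("MSE after denoising:", float(np.mean((x_hat - x) ** 2)))
```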
1 code implementation • 15 Jun 2022 • Fabian Mentzer, George Toderici, David Minnen, Sung-Jin Hwang, Sergi Caelles, Mario Lucic, Eirikur Agustsson
The resulting video compression transformer outperforms previous methods on standard video compression data sets.
no code implementations • 26 Jul 2021 • Fabian Mentzer, Eirikur Agustsson, Johannes Ballé, David Minnen, Nick Johnston, George Toderici
Our approach significantly outperforms previous neural and non-neural video compression methods in a user study, setting a new state-of-the-art in visual quality for neural methods.
2 code implementations • 24 Jun 2020 • Ren Yang, Fabian Mentzer, Luc van Gool, Radu Timofte
The experiments show that our approach achieves state-of-the-art learned video compression performance in terms of both PSNR and MS-SSIM.
3 code implementations • NeurIPS 2020 • Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
We extensively study how to combine Generative Adversarial Networks and learned compression to obtain a state-of-the-art generative lossy compression system.
1 code implementation • CVPR 2020 • Fabian Mentzer, Luc van Gool, Michael Tschannen
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.
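One way a lossy codec can anchor a lossless system is to code the residual between the original image and the lossy reconstruction. The sketch below illustrates that general idea only: a coarse quantization stands in for BPG and an empirical entropy stands in for a learned probability model, so this is not the paper's actual pipeline.

```python
import numpy as np

def empirical_entropy_bits_per_symbol(values):
    """Empirical Shannon entropy (bits per symbol) of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(64, 64))     # stand-in for an image
x_lossy = (x // 8) * 8                      # stand-in for the lossy (BPG) reconstruction
residual = x - x_lossy                      # what must additionally be stored losslessly

print("residual entropy (bits/pixel):", empirical_entropy_bits_per_symbol(residual))
```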
3 code implementations • CVPR 2020 • Ren Yang, Fabian Mentzer, Luc van Gool, Radu Timofte
In our HLVC approach, the hierarchical quality benefits coding efficiency, since the high-quality information facilitates the compression and enhancement of low-quality frames at the encoder and decoder sides, respectively.
3 code implementations • CVPR 2019 • Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs PNG, WebP, and JPEG 2000.
Ranked #3 on Image Compression on ImageNet32
1 code implementation • ICCV 2019 • Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, Luc van Gool
We present a learned image compression system based on GANs, operating at extremely low bitrates.
1 code implementation • ICLR 2018 • Robert Torfason, Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods.
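A minimal sketch of the idea of running a task network directly on a compressed representation rather than on decoded pixels. The latent shape, the pooling-plus-linear "classifier", and its random weights are all hypothetical stand-ins, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical compressed representation: a quantized latent feature map
# (channels x height x width) produced by a learned encoder, used directly
# in place of the decoded RGB image.
latent = rng.integers(-8, 8, size=(32, 16, 16)).astype(np.float32)

# Minimal classifier head on the latent: global average pooling followed by
# a linear layer. Weights are random stand-ins for a trained network.
features = latent.mean(axis=(1, 2))        # shape (32,)
W = 0.1 * rng.normal(size=(10, 32))        # 10 hypothetical classes
logits = W @ features
print("predicted class:", int(np.argmax(logits)))
```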
1 code implementation • CVPR 2018 • Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation.
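A minimal sketch of how a context model's predicted probabilities can serve as an entropy (rate) estimate during training: the estimated bit cost of the representation is the negative log-likelihood of its symbols under the model. The alphabet size, symbol counts, and random logits below are illustrative assumptions.

```python
import numpy as np

def estimated_rate_bits(probs, symbols):
    """Rate estimate usable as a training signal: -sum log2 p(symbol | context),
    where each row of `probs` is the context model's predicted distribution
    for the corresponding symbol of the latent representation."""
    p = probs[np.arange(len(symbols)), symbols]
    return float(-np.log2(p + 1e-9).sum())

rng = np.random.default_rng(0)
num_symbols, alphabet_size = 6, 4
logits = rng.normal(size=(num_symbols, alphabet_size))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # softmax
symbols = rng.integers(0, alphabet_size, size=num_symbols)           # quantized latents

print("estimated bits for this representation:", estimated_rate_bits(probs, symbols))
```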
no code implementations • NeurIPS 2017 • Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc van Gool
We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy.
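The entry above does not spell out the mechanism, but one common way to make representations compressible end-to-end is a differentiable (soft) relaxation of quantization. The sketch below illustrates that general idea with hypothetical centers and temperature; it is not necessarily the exact scheme from the paper.

```python
import numpy as np

def soft_quantize(z, centers, temperature=1.0):
    """Soft assignment of each value in z to a set of quantization centers:
    a softmax over negative squared distances, annealed by `temperature`.
    As the temperature goes to zero this approaches hard nearest-center
    assignment while remaining differentiable during training."""
    d = (z[..., None] - centers) ** 2
    w = np.exp(-d / temperature)
    w = w / w.sum(axis=-1, keepdims=True)
    return (w * centers).sum(axis=-1)

z = np.array([0.12, 0.48, 0.93])
centers = np.array([0.0, 0.5, 1.0])
print(soft_quantize(z, centers, temperature=0.05))   # close to [0.0, 0.5, 1.0]
```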
no code implementations • 26 Sep 2016 • Michael Tschannen, Lukas Cavigelli, Fabian Mentzer, Thomas Wiatowski, Luca Benini
We propose a highly structured neural network architecture for semantic segmentation with an extremely small model size, suitable for low-power embedded and mobile platforms.