no code implementations • Neural Information Processing Systems 2024 • Nitesh Bharadwaj Gundavarapu, Luke Friedman, Raghav Goyal, Chaitra Hegde, Eirikur Agustsson, Sagar M. Waghmare, Mikhail Sirotenko, Ming-Hsuan Yang, Tobias Weyand, Boqing Gong, Leonid Sigal
Nevertheless, the majority of prior works that leverage MAE pre-training have focused on relatively short video representations (16/32 frames in length), largely because hardware memory and compute requirements scale poorly with video length owing to dense, memory-intensive self-attention decoding.
Ranked #1 on Action Recognition on Diving-48 (using extra training data)
3 code implementations • 27 Sep 2023 • Fabian Mentzer, David Minnen, Eirikur Agustsson, Michael Tschannen
Each dimension is quantized to a small set of fixed values, leading to an (implicit) codebook given by the product of these sets.
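The quantization idea can be sketched as follows (a toy illustration under our own naming, not the paper's code: each dimension is bounded and rounded to one of a few evenly spaced values, so the implicit codebook size is the product of the per-dimension level counts):

```python
import math

def fsq_quantize(z, levels):
    """Toy finite scalar quantization: bound each dimension of z with tanh,
    then round it to one of `levels[i]` evenly spaced values in [-1, 1].
    The implicit codebook is the Cartesian product of these per-dimension
    sets, so its size is prod(levels) -- no explicit codebook is stored."""
    out = []
    for zi, L in zip(z, levels):
        half = (L - 1) / 2
        out.append(round(math.tanh(zi) * half) / half)
    return out
```

For example, `levels = [3, 5, 7]` yields an implicit codebook of 3 × 5 × 7 = 105 entries. (In training, the paper pairs such rounding with a straight-through gradient; that part is omitted here.)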
no code implementations • 26 May 2023 • Emiel Hoogeboom, Eirikur Agustsson, Fabian Mentzer, Luca Versari, George Toderici, Lucas Theis
Despite the tremendous success of diffusion generative models in text-to-image generation, replicating this success in the domain of image compression has proven difficult.
no code implementations • ICCV 2023 • Fabian Mentzer, Eirikur Agustsson, Michael Tschannen
We show how bidirectional transformers trained for masked token prediction can be applied to neural image compression to achieve state-of-the-art results.
1 code implementation • CVPR 2023 • Eirikur Agustsson, David Minnen, George Toderici, Fabian Mentzer
By optimizing the rate-distortion-realism trade-off, generative compression approaches produce detailed, realistic images, even at low bit rates, instead of the blurry reconstructions produced by rate-distortion optimized models.
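Schematically, such a training objective combines three terms (the weights below are illustrative placeholders, not the paper's values):

```python
def rd_realism_loss(rate, distortion, realism_penalty, lam=0.01, beta=1.0):
    """Schematic rate-distortion-realism objective: trade off the bit rate,
    a distortion term (e.g. MSE), and a realism term (typically an
    adversarial or perceptual penalty). Pure rate-distortion training
    corresponds to beta = 0 and tends to produce blurry reconstructions."""
    return rate + lam * distortion + beta * realism_penalty
```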
1 code implementation • 15 Jun 2022 • Fabian Mentzer, George Toderici, David Minnen, Sung-Jin Hwang, Sergi Caelles, Mario Lucic, Eirikur Agustsson
The resulting video compression transformer outperforms previous methods on standard video compression data sets.
no code implementations • 26 Jul 2021 • Fabian Mentzer, Eirikur Agustsson, Johannes Ballé, David Minnen, Nick Johnston, George Toderici
Our approach significantly outperforms previous neural and non-neural video compression methods in a user study, setting a new state-of-the-art in visual quality for neural methods.
no code implementations • ICLR Workshop Neural_Compression 2021 • Lucas Theis, Eirikur Agustsson
Stochastic encoders have been used in rate-distortion theory and neural compression because they can be easier to handle.
4 code implementations • NeurIPS 2020 • Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
We extensively study how to combine Generative Adversarial Networks and learned compression to obtain a state-of-the-art generative lossy compression system.
no code implementations • NeurIPS 2020 • Eirikur Agustsson, Lucas Theis
A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization.
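That popular trick can be sketched in a few lines (a minimal illustration of the standard approach the paper analyzes, not the paper's own proposal):

```python
import random

def soft_quantize(y, training):
    """During training, add uniform noise U(-0.5, 0.5) as a differentiable
    proxy for quantization; at test time, apply hard rounding. The noisy
    pass lets gradients flow through an otherwise zero-gradient round()."""
    if training:
        return y + random.uniform(-0.5, 0.5)
    return float(round(y))
```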
no code implementations • CVPR 2020 • Eirikur Agustsson, David Minnen, Nick Johnston, Johannes Ballé, Sung-Jin Hwang, George Toderici
Despite considerable progress on end-to-end optimized deep networks for image compression, video coding remains a challenging task.
no code implementations • CVPR 2019 • Eirikur Agustsson, Jasper R. R. Uijlings, Vittorio Ferrari
We propose an interactive, scribble-based annotation framework which operates on the whole image to produce segmentations for all regions.
3 code implementations • CVPR 2019 • Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs, PNG, WebP and JPEG 2000.
Ranked #3 on Image Compression on ImageNet32
no code implementations • 3 Oct 2018 • Andrey Ignatov, Radu Timofte, Thang Van Vu, Tung Minh Luu, Trung X. Pham, Cao Van Nguyen, Yongwoo Kim, Jae-Seok Choi, Munchurl Kim, Jie Huang, Jiewen Ran, Chen Xing, Xingguang Zhou, Pengfei Zhu, Mingrui Geng, Yawei Li, Eirikur Agustsson, Shuhang Gu, Luc van Gool, Etienne de Stoutz, Nikolay Kobyshev, Kehui Nie, Yan Zhao, Gen Li, Tong Tong, Qinquan Gao, Liu Hanwen, Pablo Navarrete Michelini, Zhu Dan, Hu Fengshuo, Zheng Hui, Xiumei Wang, Lirui Deng, Rang Meng, Jinghui Qin, Yukai Shi, Wushao Wen, Liang Lin, Ruicheng Feng, Shixiang Wu, Chao Dong, Yu Qiao, Subeesh Vasu, Nimisha Thekke Madam, Praveen Kandula, A. N. Rajagopalan, Jie Liu, Cheolkon Jung
This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones.
1 code implementation • NeurIPS 2018 • Michael Tschannen, Eirikur Agustsson, Mario Lucic
We propose and study the problem of distribution-preserving lossy compression.
1 code implementation • ICCV 2019 • Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, Luc van Gool
We present a learned image compression system based on GANs, operating at extremely low bitrates.
1 code implementation • ICLR 2018 • Robert Torfason, Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, storage savings, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods.
1 code implementation • CVPR 2018 • Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool
During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation.
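The training-time rate term described here can be sketched as a cross-entropy under the context model (a toy interface of our own, not the paper's code):

```python
import math

def rate_estimate(symbols, context_model):
    """Estimate the bits needed to code a sequence of latent symbols:
    the context model assigns each symbol a probability given the symbols
    before it, and the negative log-likelihood in bits estimates the
    entropy of the representation (the auto-encoder's rate penalty)."""
    bits = 0.0
    for i, s in enumerate(symbols):
        p = context_model(symbols[:i], s)  # P(s | previous symbols)
        bits -= math.log2(p)
    return bits

# Toy context model: a fixed uniform distribution over 4 symbols.
uniform = lambda prev, s: 0.25
```

Under the uniform toy model, each of 4 equiprobable symbols costs exactly 2 bits; a learned context model lowers this cost by exploiting dependencies between symbols.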
1 code implementation • 19 Dec 2017 • Asha Anoosheh, Eirikur Agustsson, Radu Timofte, Luc Van Gool
This year alone has seen unprecedented leaps in the area of learning-based image translation, most notably CycleGAN by Zhu et al.
Ranked #6 on Facial Expression Translation on CelebA
no code implementations • CVPR 2018 • Alexander Sage, Eirikur Agustsson, Radu Timofte, Luc van Gool
We propose the use of synthetic labels obtained through clustering to disentangle and stabilize GAN training.
no code implementations • ICLR 2018 • Eirikur Agustsson, Alexander Sage, Radu Timofte, Luc van Gool
Generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained with a fixed prior distribution in the latent space, such as uniform or Gaussian.
no code implementations • ICCV 2017 • Eirikur Agustsson, Radu Timofte, Luc van Gool
We propose the Anchored Regression Network (ARN), a nonlinear regression network which can be seamlessly integrated into various networks or can be used stand-alone when the features have already been fixed.
no code implementations • 9 Aug 2017 • Wen Li, Li-Min Wang, Wei Li, Eirikur Agustsson, Luc van Gool
Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.
no code implementations • 16 May 2017 • Wen Li, Li-Min Wang, Wei Li, Eirikur Agustsson, Jesse Berent, Abhinav Gupta, Rahul Sukthankar, Luc van Gool
The 2017 WebVision challenge consists of two tracks, the image classification task on WebVision test set, and the transfer learning task on PASCAL VOC 2012 dataset.
no code implementations • NeurIPS 2017 • Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc van Gool
We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy.
no code implementations • 30 May 2016 • Eirikur Agustsson, Radu Timofte, Luc van Gool
k^2-means builds upon standard k-means (Lloyd's algorithm), combining a new convergence-acceleration strategy with a new low-time-complexity divisive initialization.
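The general idea of a divisive initialization can be sketched as follows (a generic 1-D toy for illustration only; the paper's actual k^2-means procedure and its complexity guarantees differ):

```python
def divisive_init(points, k):
    """Generic divisive initialization: start from a single cluster and
    repeatedly split the largest cluster in two (here, at its median)
    until k clusters exist, then return their means as initial centers
    for Lloyd's algorithm."""
    clusters = [sorted(points)]
    while len(clusters) < k:
        clusters.sort(key=len, reverse=True)
        big = clusters.pop(0)
        mid = len(big) // 2
        clusters.append(big[:mid])
        clusters.append(big[mid:])
    return [sum(c) / len(c) for c in clusters]
```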
no code implementations • 13 Mar 2014 • Reinhard Heckel, Eirikur Agustsson, Helmut Bölcskei
Subspace clustering refers to the problem of clustering high-dimensional data points into a union of low-dimensional linear subspaces, where the number of subspaces, their dimensions and orientations are all unknown.