no code implementations • 19 Nov 2021 • Ties van Rozendaal, Johann Brehmer, Yunfan Zhang, Reza Pourreza, Auke Wiggers, Taco S. Cohen
We introduce a video compression algorithm based on instance-adaptive learning.
1 code implementation • CVPR 2021 • Amirhossein Habibian, Davide Abati, Taco S. Cohen, Babak Ehteshami Bejnordi
We reformulate standard convolution to be efficiently computed on residual frames: each layer is coupled with a binary gate deciding whether a residual is important to the model prediction, e.g. foreground regions, or whether it can be safely skipped, e.g. background regions.
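A minimal PyTorch sketch of the gating idea described above (not the authors' implementation): a hard threshold on the frame residual stands in for the learned binary gate, and the module name SkipConv and the threshold value are illustrative only.

```python
import torch
import torch.nn as nn

class SkipConv(nn.Module):
    def __init__(self, in_ch, out_ch, gate_threshold=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gate_threshold = gate_threshold
        self.prev_input = None
        self.prev_output = None

    def forward(self, x):
        if self.prev_input is None:
            # First frame: run a dense convolution and cache the result.
            y = self.conv(x)
        else:
            residual = x - self.prev_input
            # Binary gate per spatial location: recompute where the residual
            # is large, reuse the cached previous output elsewhere.
            gate = (residual.abs().mean(dim=1, keepdim=True) > self.gate_threshold).float()
            y = gate * self.conv(x) + (1.0 - gate) * self.prev_output
        self.prev_input, self.prev_output = x.detach(), y.detach()
        return y
```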
no code implementations • 1 Apr 2021 • Ankitesh K. Singh, Hilmi E. Egilmez, Reza Pourreza, Muhammed Coban, Marta Karczewicz, Taco S. Cohen
Most of the existing deep-learning-based end-to-end video coding (DLEC) architectures are designed specifically for the RGB color format, yet the video coding standards, including H.264/AVC, H.265/HEVC and H.266/VVC developed over the past few decades, have been designed primarily for the YUV 4:2:0 format, where the chrominance (U and V) components are subsampled to achieve superior compression performance considering the human visual system.
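For readers unfamiliar with the 4:2:0 layout mentioned above, a small numpy sketch using BT.601 coefficients and simple 2x2 block averaging; a real codec pipeline also handles value ranges and chroma siting, and this assumes even frame dimensions.

```python
import numpy as np

def rgb_to_yuv420(rgb):  # rgb: (H, W, 3) float array in [0, 1], H and W even
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    # 4:2:0 subsampling: average each 2x2 block of the chroma planes,
    # so U and V are stored at half the luma resolution in both dimensions.
    u420 = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v420 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))
    return y, u420, v420
```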
no code implementations • 27 Feb 2021 • Hilmi E. Egilmez, Ankitesh K. Singh, Muhammed Coban, Marta Karczewicz, Yinhao Zhu, Yang Yang, Amir Said, Taco S. Cohen
Most of the existing deep-learning-based end-to-end image/video coding (DLEC) architectures are designed for the non-subsampled RGB color format.
no code implementations • ICLR 2021 • Ties van Rozendaal, Iris A. M. Huijben, Taco S. Cohen
At a high level, neural compression is based on an autoencoder that tries to reconstruct the input instance from a (quantized) latent representation, coupled with a prior that is used to losslessly compress these latents.
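A schematic of this autoencoder-plus-prior setup, written as a generic neural-codec sketch rather than the paper's model; the NeuralCodec name, the rounding-based quantization, and the use of a torch.distributions-style prior are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NeuralCodec(nn.Module):
    def __init__(self, encoder, decoder, prior, beta=0.01):
        super().__init__()
        self.encoder, self.decoder, self.prior = encoder, decoder, prior
        self.beta = beta  # rate-distortion trade-off weight

    def forward(self, x):
        z = self.encoder(x)
        z_hat = torch.round(z)                  # quantized latent (sketch; no straight-through estimator shown)
        x_hat = self.decoder(z_hat)
        distortion = ((x - x_hat) ** 2).mean()  # reconstruction error
        # Code length (in nats) assigned to the latents by the prior,
        # normalized per input element.
        rate = -self.prior.log_prob(z_hat).sum() / x.numel()
        return distortion + self.beta * rate
```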
no code implementations • 8 May 2020 • Ties van Rozendaal, Guillaume Sautière, Taco S. Cohen
We argue that the constrained optimization method of Rezende and Viola (2018) is more appropriate for training lossy compression models, because it allows us to obtain the best possible rate subject to a distortion constraint.
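A sketch of the constrained objective this refers to, with the Lagrange multiplier parameterized on a log scale; the distortion target and variable names are illustrative, not taken from the paper.

```python
import torch

log_lambda = torch.zeros(1, requires_grad=True)  # Lagrange multiplier, log-parameterized to stay positive
distortion_target = 0.001                        # assumed per-pixel MSE budget

def constrained_loss(rate, distortion):
    lam = log_lambda.exp()
    # Model parameters descend this loss; the multiplier is updated by gradient
    # ascent (e.g. by flipping the sign of its gradient), tightening the
    # constraint whenever distortion exceeds the target.
    return rate + lam * (distortion - distortion_target)
```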
no code implementations • 21 Apr 2020 • Mirgahney Mohamed, Gabriele Cesa, Taco S. Cohen, Max Welling
Thanks to their improved data efficiency, equivariant neural networks have gained increased interest in the deep learning community.
no code implementations • 9 Apr 2020 • Adam Golinski, Reza Pourreza, Yang Yang, Guillaume Sautiere, Taco S. Cohen
Recent advances in deep generative modeling have enabled efficient modeling of high dimensional data distributions and opened up a new horizon for solving data compression problems.
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Taco S. Cohen, Jakub M. Tomczak
Media is generally stored digitally and is therefore discrete.
no code implementations • 11 Nov 2019 • Yang Yang, Guillaume Sautière, J. Jon Ryu, Taco S. Cohen
In this work, we propose a new recurrent autoencoder architecture, termed Feedback Recurrent AutoEncoder (FRAE), for online compression of sequential data with temporal dependency.
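A rough sketch of the feedback idea behind FRAE: the decoder's recurrent state from the previous step is fed back into the encoder, so the encoder only needs to describe what the decoder cannot already predict. Layer types, sizes, and names below are assumptions, and quantization/entropy coding of the code z_t is omitted.

```python
import torch
import torch.nn as nn

class FeedbackRecurrentAE(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.encoder = nn.Linear(x_dim + h_dim, z_dim)
        self.decoder_cell = nn.GRUCell(z_dim, h_dim)
        self.readout = nn.Linear(h_dim, x_dim)

    def forward(self, x_seq):  # x_seq: (T, B, x_dim)
        h = torch.zeros(x_seq.size(1), self.decoder_cell.hidden_size)
        recons = []
        for x_t in x_seq:
            z_t = self.encoder(torch.cat([x_t, h], dim=-1))  # encoder conditions on decoder state
            h = self.decoder_cell(z_t, h)                    # decoder updates its recurrent state
            recons.append(self.readout(h))
        return torch.stack(recons)
```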
no code implementations • ICCV 2019 • Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen
We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding.
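One way to see how an autoregressive prior is used for entropy coding: the cross-entropy of the prior over the discrete latent symbols is, up to arithmetic-coding overhead, the number of bits spent. The helper below is a generic sketch, not the paper's code.

```python
import torch
import torch.nn.functional as F

def estimated_bits(latent_codes, prior_logits):
    # latent_codes: (N,) integer symbols; prior_logits: (N, K) autoregressive predictions.
    log2_probs = F.log_softmax(prior_logits, dim=-1) / torch.log(torch.tensor(2.0))
    # Total code length in bits: negative log2-likelihood of the chosen symbols.
    return -log2_probs.gather(-1, latent_codes.unsqueeze(-1)).sum()
```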
no code implementations • 6 Jun 2019 • Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling
In these proceedings, we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).
2 code implementations • 11 Feb 2019 • Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design.
Ranked #23 on Semantic Segmentation on Stanford2D3D Panoramic
1 code implementation • 12 Jul 2018 • Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen
Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
no code implementations • 2 Jul 2018 • Jasper Linmans, Jim Winkens, Bastiaan S. Veeling, Taco S. Cohen, Max Welling
The group equivariant CNN framework is extended for segmentation by introducing a new equivariant (G->Z2)-convolution that transforms feature maps on a group to planar feature maps.
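A toy illustration of the group-to-plane step such a layer must perform, here simply by pooling a p4 feature map over its four rotation channels; the paper's (G->Z2)-convolution is a learned layer, so this is only meant to show the shape bookkeeping.

```python
import torch

def group_to_planar(fmap_p4):          # fmap_p4: (B, C, 4, H, W) feature map on the group p4
    # Collapse the group (rotation) dimension to obtain a planar map on Z^2.
    return fmap_p4.max(dim=2).values   # (B, C, H, W)
```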
no code implementations • 12 Apr 2018 • Marysia Winkels, Taco S. Cohen
Convolutional Neural Networks (CNNs) require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain.
1 code implementation • 28 Mar 2018 • Taco S. Cohen, Mario Geiger, Maurice Weiler
In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine between the induced representations associated with its input and output space.
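The intertwining condition can be stated compactly; writing $\pi_{\mathrm{in}}$ and $\pi_{\mathrm{out}}$ for the induced representations acting on the input and output feature space of a layer $\Phi$, the constraint is:

```latex
% Equivariance (intertwining) constraint on each layer \Phi of a steerable G-CNN
\Phi \circ \pi_{\mathrm{in}}(g) \;=\; \pi_{\mathrm{out}}(g) \circ \Phi
\qquad \text{for all } g \in G.
```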
1 code implementation • ICLR 2018 • Emiel Hoogeboom, Jorn W. T. Peters, Taco S. Cohen, Max Welling
We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.
3 code implementations • ICLR 2018 • Taco S. Cohen, Mario Geiger, Jonas Koehler, Max Welling
Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images.
1 code implementation • 15 Feb 2017 • Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, Max Welling
This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input.
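A much-simplified sketch of the general idea: measure how the predicted class probability changes when a patch is removed. Here the patch is replaced by the image mean rather than marginalized by conditional sampling as in the paper, and all names, the patch size, and the assumption that H and W are divisible by the patch size are illustrative.

```python
import torch

def relevance_map(model, image, target_class, patch=8):
    # image: (1, C, H, W); model is assumed to return class probabilities.
    base = model(image)[0, target_class]
    _, _, H, W = image.shape
    rel = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.clone()
            occluded[:, :, i:i+patch, j:j+patch] = image.mean()
            # Drop in probability when this patch is removed = its relevance.
            rel[i // patch, j // patch] = base - model(occluded)[0, target_class]
    return rel  # positive values: evidence for the target class at that location
```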
3 code implementations • 27 Dec 2016 • Taco S. Cohen, Max Welling
It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks.
no code implementations • 8 Mar 2016 • Luisa M. Zintgraf, Taco S. Cohen, Max Welling
We present a method for visualising the response of a deep neural network to a specific input.
1 code implementation • 24 Feb 2016 • Taco S. Cohen, Max Welling
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries.
Ranked #6 on Breast Tumour Classification on PCam
Breast Tumour Classification, Colorectal Gland Segmentation, +2 more
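For the G-CNN entry above, a toy sketch of the lifting (first) layer of a p4 rotation-equivariant network: the same filters are applied at four rotations, producing a feature map with an extra group dimension. Deeper G-CNN layers also convolve over that group dimension; this shows only the lifting step and assumes an odd kernel size.

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):  # x: (B, C_in, H, W), weight: (C_out, C_in, k, k) with k odd
    # Apply the filter bank at rotations of 0, 90, 180, and 270 degrees.
    outs = [F.conv2d(x, torch.rot90(weight, r, dims=(2, 3)), padding=weight.shape[-1] // 2)
            for r in range(4)]
    # Stack rotations into a group dimension: feature map on p4, shape (B, C_out, 4, H, W).
    return torch.stack(outs, dim=2)
```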
no code implementations • 17 May 2015 • Taco S. Cohen, Max Welling
In a range of fields including the geosciences, molecular biology, robotics and computer vision, one encounters problems that involve random variables on manifolds.
no code implementations • 24 Dec 2014 • Taco S. Cohen, Max Welling
Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations.
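The decomposition result this statement relies on can be written as follows, where $\rho(g)$ is the linear transformation of the representation under a scene motion $g$ and the $\rho_i$ are irreducible representations of the group $G$:

```latex
% Any representation transforming linearly under scene motions decomposes,
% up to a change of basis Q, into a direct sum of irreducible blocks:
\rho(g) \;=\; Q^{-1} \Big( \bigoplus_i \rho_i(g) \Big) Q,
\qquad g \in G.
```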