Search Results for author: Taco S. Cohen

Found 25 papers, 9 papers with code

Skip-Convolutions for Efficient Video Processing

1 code implementation CVPR 2021 Amirhossein Habibian, Davide Abati, Taco S. Cohen, Babak Ehteshami Bejnordi

We reformulate standard convolution to be computed efficiently on residual frames: each layer is coupled with a binary gate deciding whether a residual is important to the model prediction, e.g. foreground regions, or can be safely skipped, e.g. background regions.
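The gating idea can be illustrated with a minimal NumPy sketch. The magnitude threshold and single-channel shapes below are illustrative stand-ins for the paper's learned binary gates, not the actual architecture:

```python
import numpy as np

def skip_conv_step(prev_frame, curr_frame, kernel, threshold=0.1):
    """Sketch of a skip-convolution step: convolve only where the residual
    between consecutive frames is large. `threshold` is a hypothetical
    hand-set gating rule standing in for the paper's learned gates."""
    residual = curr_frame - prev_frame
    # Binary gate: 1 where the residual magnitude matters, 0 elsewhere.
    gate = (np.abs(residual) > threshold).astype(residual.dtype)
    masked = residual * gate
    # Valid 2-D convolution of the gated residual (single channel).
    h, w = residual.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(masked[i:i+kh, j:j+kw] * kernel)
    return out, gate
```

By linearity of convolution, the output on the full frame equals the cached output on the previous frame plus the (mostly sparse) output on the gated residual, which is where the compute savings come from.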

Model Compression

A Combined Deep Learning based End-to-End Video Coding Architecture for YUV Color Space

no code implementations 1 Apr 2021 Ankitesh K. Singh, Hilmi E. Egilmez, Reza Pourreza, Muhammed Coban, Marta Karczewicz, Taco S. Cohen

Most of the existing deep learning based end-to-end video coding (DLEC) architectures are designed specifically for the RGB color format, yet the video coding standards developed over the past few decades, including H.264/AVC, H.265/HEVC and H.266/VVC, have been designed primarily for the YUV 4:2:0 format, where the chrominance (U and V) components are subsampled to achieve superior compression performance by exploiting the human visual system's lower sensitivity to chrominance.
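The storage saving from 4:2:0 subsampling is easy to quantify. A small sketch (the function name is illustrative; even frame dimensions are assumed):

```python
def yuv420_bytes(height, width):
    """Bytes for one 8-bit YUV 4:2:0 frame: a full-resolution luma plane (Y)
    plus two chroma planes (U, V), each subsampled by 2 in both dimensions."""
    y = height * width
    u = (height // 2) * (width // 2)
    v = (height // 2) * (width // 2)
    return y + u + v
```

For a 1920x1080 frame this gives 1.5 bytes per pixel, half the 3 bytes per pixel of non-subsampled 8-bit RGB.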

Transform Network Architectures for Deep Learning based End-to-End Image/Video Coding in Subsampled Color Spaces

no code implementations27 Feb 2021 Hilmi E. Egilmez, Ankitesh K. Singh, Muhammed Coban, Marta Karczewicz, Yinhao Zhu, Yang Yang, Amir Said, Taco S. Cohen

Most of the existing deep learning based end-to-end image/video coding (DLEC) architectures are designed for non-subsampled RGB color format.

Overfitting for Fun and Profit: Instance-Adaptive Data Compression

no code implementations ICLR 2021 Ties van Rozendaal, Iris A. M. Huijben, Taco S. Cohen

At a high level, neural compression is based on an autoencoder that tries to reconstruct the input instance from a (quantized) latent representation, coupled with a prior that is used to losslessly compress these latents.
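The objective described here (reconstruction error plus the cost of coding the quantized latents under the prior) can be sketched in a few lines. The function names, the rounding quantizer, and the `beta` trade-off weight are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def neural_compression_loss(x, x_hat, latent, prior_log2_prob, beta=1.0):
    """Sketch of a neural-compression objective: distortion (reconstruction
    error) plus rate (bits to losslessly code the quantized latent under the
    prior). `prior_log2_prob` maps a quantized latent to its per-symbol
    log2-probability; `beta` trades rate against distortion."""
    q = np.round(latent)                      # quantize the latent
    rate = -np.sum(prior_log2_prob(q))        # ideal code length in bits
    distortion = np.mean((x - x_hat) ** 2)    # MSE reconstruction error
    return distortion + beta * rate
```

The tighter the prior fits the actual distribution of quantized latents, the fewer bits the entropy coder needs, which is why the prior is trained jointly with the autoencoder.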

Data Compression · Image Compression +1

Lossy Compression with Distortion Constrained Optimization

no code implementations 8 May 2020 Ties van Rozendaal, Guillaume Sautière, Taco S. Cohen

We argue that the constrained optimization method of Rezende and Viola (2018) is far better suited to training lossy compression models, because it lets us obtain the best possible rate subject to a distortion constraint.
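The constrained formulation can be sketched with a toy dual-ascent loop: minimize a rate proxy subject to a distortion cap by updating a Lagrange multiplier. The quadratic rate and distortion functions below are illustrative stand-ins chosen so the inner minimization has a closed form; they are not the paper's models:

```python
def constrained_rd(target_distortion=0.25, lr=0.5, steps=2000):
    """Toy sketch of distortion-constrained optimization via dual ascent.
    Illustrative stand-ins: rate(t) = (t - 1)^2, distortion(t) = t^2.
    The multiplier `lam` rises while the constraint is violated and
    settles once distortion meets the target."""
    lam = 0.0
    for _ in range(steps):
        theta = 1.0 / (1.0 + lam)                      # closed-form inner min
        lam = max(0.0, lam + lr * (theta**2 - target_distortion))
    return theta, lam
```

Here the loop converges to theta = 0.5, the lowest-rate point that exactly satisfies the distortion constraint, rather than whatever trade-off a fixed rate-distortion weight would happen to produce.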

Image Compression · Model Selection

A Data and Compute Efficient Design for Limited-Resources Deep Learning

no code implementations 21 Apr 2020 Mirgahney Mohamed, Gabriele Cesa, Taco S. Cohen, Max Welling

Thanks to their improved data efficiency, equivariant neural networks have gained increased interest in the deep learning community.


Feedback Recurrent Autoencoder for Video Compression

no code implementations 9 Apr 2020 Adam Goliński, Reza Pourreza, Yang Yang, Guillaume Sautière, Taco S. Cohen

Recent advances in deep generative modeling have enabled efficient modeling of high dimensional data distributions and opened up a new horizon for solving data compression problems.

Data Compression · MS-SSIM +2

Feedback Recurrent AutoEncoder

no code implementations 11 Nov 2019 Yang Yang, Guillaume Sautière, J. Jon Ryu, Taco S. Cohen

In this work, we propose a new recurrent autoencoder architecture, termed Feedback Recurrent AutoEncoder (FRAE), for online compression of sequential data with temporal dependency.
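The feedback idea (the encoder conditions on the decoder's recurrent state, so both sides stay synchronized without transmitting that state) can be sketched with random linear maps. The weights and dimensions below are illustrative placeholders, not the FRAE architecture:

```python
import numpy as np

def frae_rollout(xs, dim=4, seed=0):
    """Minimal linear sketch of decoder-to-encoder feedback: at each step
    the encoder sees the current input together with the decoder's previous
    hidden state, which the decoder can reproduce on its own side.
    Weights are random illustrative stand-ins."""
    rng = np.random.default_rng(seed)
    We = rng.standard_normal((dim, 1 + dim)) * 0.1   # encoder weights
    Wd = rng.standard_normal((1 + dim, dim)) * 0.1   # decoder weights
    h = np.zeros(dim)                                # decoder state, fed back
    recon = []
    for x in xs:
        code = np.tanh(We @ np.concatenate(([x], h)))  # encode x given feedback h
        out = Wd @ code                                # decode the code
        x_hat, h = out[0], np.tanh(out[1:])            # reconstruction + new state
        recon.append(x_hat)
    return np.array(recon)
```

Because `h` is computed from previously decoded quantities only, the decoder can rebuild it without extra bits, which is what makes the feedback loop free at transmission time.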

Video Compression With Rate-Distortion Autoencoders

no code implementations ICCV 2019 Amirhossein Habibian, Ties van Rozendaal, Jakub M. Tomczak, Taco S. Cohen

We employ a model that consists of a 3D autoencoder with a discrete latent space and an autoregressive prior used for entropy coding.
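The role of the autoregressive prior in entropy coding is that the ideal code length of a discrete latent sequence is the sum of -log2 p(s_t | s_<t). A small sketch, where the conditional-probability function is an illustrative stand-in for the learned prior:

```python
import numpy as np

def autoregressive_code_length(symbols, cond_prob):
    """Ideal code length (in bits) of a discrete latent sequence under an
    autoregressive prior. `cond_prob(prev, s)` is an illustrative stand-in
    for the learned model p(s_t | s_{t-1}); `prev` is None at t = 0."""
    bits = 0.0
    prev = None
    for s in symbols:
        bits += -np.log2(cond_prob(prev, s))
        prev = s
    return bits
```

A sharper conditional model assigns higher probability to the symbols that actually occur, so the same latents cost fewer bits, which is why the prior is learned jointly with the autoencoder.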

Motion Compensation · Video Compression

Covariance in Physics and Convolutional Neural Networks

no code implementations 6 Jun 2019 Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling

In this proceeding we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).

Gauge Equivariant Convolutional Networks and the Icosahedral CNN

2 code implementations 11 Feb 2019 Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling

The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design.

Semantic Segmentation

Explorations in Homeomorphic Variational Auto-Encoding

1 code implementation 12 Jul 2018 Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen

Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserving the topological structure and learning a well-behaved latent space.

Sample Efficient Semantic Segmentation using Rotation Equivariant Convolutional Networks

no code implementations 2 Jul 2018 Jasper Linmans, Jim Winkens, Bastiaan S. Veeling, Taco S. Cohen, Max Welling

The group equivariant CNN framework is extended for segmentation by introducing a new equivariant (G->Z2)-convolution that transforms feature maps on a group to planar feature maps.
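The simplest way to go from a feature map on the group p4 (four rotation channels over the plane) back to a planar map is to pool over the group axis; the learned (G->Z2)-convolution in the paper generalizes this fixed pooling. A NumPy sketch:

```python
import numpy as np

def group_to_planar_pool(fmap):
    """Reduce a p4 group feature map of shape (4, H, W), one channel per
    90-degree rotation, to a planar map by max-pooling over the group axis.
    A fixed stand-in for the paper's learned (G->Z2)-convolution."""
    assert fmap.shape[0] == 4        # four rotations of p4
    return fmap.max(axis=0)
```

Rotating the input rotates each plane and cyclically permutes the rotation channels; since the max runs over all four channels, the pooled planar map simply rotates with the input, which is the equivariance needed for segmentation.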

Semantic Segmentation

3D G-CNNs for Pulmonary Nodule Detection

no code implementations 12 Apr 2018 Marysia Winkels, Taco S. Cohen

Convolutional Neural Networks (CNNs) require a large amount of annotated data to learn from, which is often difficult to obtain in the medical domain.

Data Augmentation · Translation

Intertwiners between Induced Representations (with Applications to the Theory of Equivariant Neural Networks)

1 code implementation 28 Mar 2018 Taco S. Cohen, Mario Geiger, Maurice Weiler

In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine between the induced representations associated with its input and output space.
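The intertwining requirement mentioned here is the standard equivariance condition, which can be stated compactly: a layer \(\Phi\) between feature spaces carrying representations \(\rho_{\text{in}}\) and \(\rho_{\text{out}}\) of \(G\) must satisfy

```latex
\Phi\bigl(\rho_{\text{in}}(g)\,x\bigr) \;=\; \rho_{\text{out}}(g)\,\Phi(x)
\qquad \text{for all } g \in G \text{ and all inputs } x.
```

In a regular G-CNN the \(\rho\)'s are regular representations; in a steerable G-CNN they are the more general induced representations, and the paper characterizes the maps \(\Phi\) that satisfy this constraint.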


HexaConv

1 code implementation ICLR 2018 Emiel Hoogeboom, Jorn W. T. Peters, Taco S. Cohen, Max Welling

We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.

Aerial Scene Classification · Scene Classification

Spherical CNNs

3 code implementations ICLR 2018 Taco S. Cohen, Mario Geiger, Jonas Koehler, Max Welling

Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images.


Visualizing Deep Neural Network Decisions: Prediction Difference Analysis

1 code implementation 15 Feb 2017 Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, Max Welling

This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input.
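The core of the method is to score each input feature by how much the model's output changes when that feature is marginalized out. A minimal sketch, approximating the marginalization by resampling the feature from an empirical sample set (the sampling scheme and function names are illustrative simplifications of the paper's conditional sampling):

```python
import numpy as np

def prediction_difference(f, x, samples, n=20, seed=0):
    """Sketch of prediction difference analysis: relevance of feature i is
    the average drop in the model score `f(x)` when feature i is resampled
    from `samples` (an array of reference feature vectors). A positive value
    means the feature supports the current prediction."""
    rng = np.random.default_rng(seed)
    base = f(x)
    relevance = np.zeros(len(x))
    for i in range(len(x)):
        vals = []
        for _ in range(n):
            x_mod = x.copy()
            x_mod[i] = rng.choice(samples[:, i])   # marginalize feature i
            vals.append(f(x_mod))
        relevance[i] = base - np.mean(vals)
    return relevance
```

Features whose removal leaves the score unchanged get relevance near zero, so the resulting map highlights exactly the evidence the network relied on for this input.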

Decision Making

Steerable CNNs

2 code implementations 27 Dec 2016 Taco S. Cohen, Max Welling

It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks.

General Classification · Image Classification

A New Method to Visualize Deep Neural Networks

no code implementations 8 Mar 2016 Luisa M. Zintgraf, Taco S. Cohen, Max Welling

We present a method for visualising the response of a deep neural network to a specific input.

Decision Making

Group Equivariant Convolutional Networks

1 code implementation 24 Feb 2016 Taco S. Cohen, Max Welling

We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries.
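The first layer of a G-CNN for the rotation group p4 can be sketched directly: correlate the input with all four 90-degree rotations of a filter, producing a feature map with an extra rotation axis. The single-channel, loop-based implementation below is an illustrative simplification of the paper's layers:

```python
import numpy as np

def corr2d(x, w):
    """Valid 2-D cross-correlation of a single-channel image with a filter."""
    h, wd = x.shape
    kh, kw = w.shape
    out = np.zeros((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w)
    return out

def p4_lifting(x, w):
    """Lifting layer of a p4 G-CNN: correlate the input with the four
    90-degree rotations of one filter, giving a (4, H', W') feature map
    indexed by rotation."""
    return np.stack([corr2d(x, np.rot90(w, k)) for k in range(4)])
```

Rotating the input by 90 degrees rotates each output plane and cyclically permutes the rotation channels; this built-in equivariance is what lets G-CNNs share parameters across symmetries and reduce sample complexity.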

Breast Tumour Classification · Colorectal Gland Segmentation +2

Harmonic Exponential Families on Manifolds

no code implementations 17 May 2015 Taco S. Cohen, Max Welling

In a range of fields including the geosciences, molecular biology, robotics and computer vision, one encounters problems that involve random variables on manifolds.

Motion Estimation

Transformation Properties of Learned Visual Representations

no code implementations 24 Dec 2014 Taco S. Cohen, Max Welling

Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations.
