Search Results for author: Michael Tschannen

Found 36 papers, 16 papers with code

Neural Face Video Compression using Multiple Views

no code implementations 29 Mar 2022 Anna Volokitin, Stefan Brugger, Ali Benlalah, Sebastian Martin, Brian Amberg, Michael Tschannen

Recent advances in deep generative models led to the development of neural face video compression codecs that use an order of magnitude less bandwidth than engineered codecs.

Video Compression

Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck

no code implementations 1 Jan 2021 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes.

Image Generation

High-Fidelity Generative Image Compression

3 code implementations NeurIPS 2020 Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson

We extensively study how to combine Generative Adversarial Networks and learned compression to obtain a state-of-the-art generative lossy compression system.

Image Compression

Disentangling Factors of Variations Using Few Labels

no code implementations ICLR Workshop LLD 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible, and that existing inductive biases and unsupervised methods do not reliably yield disentangled representations.

Disentanglement Model Selection

Learning Better Lossless Compression Using Lossy Compression

1 code implementation CVPR 2020 Fabian Mentzer, Luc van Gool, Michael Tschannen

We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system.

Image Compression
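The core idea of the paper above, building a lossless codec on top of a lossy one, can be illustrated with a minimal sketch: store the lossy bitstream plus the residual between the original and the lossy reconstruction. This is a toy illustration of the general scheme, not the paper's RC system; `lossy_encode`/`lossy_decode` stand in for any lossy codec (BPG in the paper).

```python
def lossless_via_lossy(image, lossy_encode, lossy_decode):
    """Lossless coding via a lossy codec: keep the lossy bitstream plus the
    residual between the original pixels and the lossy reconstruction."""
    bitstream = lossy_encode(image)
    recon = lossy_decode(bitstream)
    residual = [o - r for o, r in zip(image, recon)]
    return bitstream, residual


def reconstruct(bitstream, residual, lossy_decode):
    """Exact reconstruction: lossy decode, then add back the residual."""
    recon = lossy_decode(bitstream)
    return [r + d for r, d in zip(recon, residual)]
```

In the paper, the residual is itself entropy-coded with a learned model conditioned on the lossy reconstruction, which is where the compression gain comes from.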

Automatic Shortcut Removal for Self-Supervised Representation Learning

no code implementations ICML 2020 Matthias Minderer, Olivier Bachem, Neil Houlsby, Michael Tschannen

In self-supervised visual representation learning, a feature extractor is trained on a "pretext task" for which labels can be generated cheaply, without human annotation.

Representation Learning

Weakly-Supervised Disentanglement Without Compromises

2 code implementations ICML 2020 Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen

Third, we perform a large-scale empirical study and show that such pairs of observations are sufficient to reliably learn disentangled representations on several benchmark data sets.

Disentanglement Fairness

Self-Supervised Learning of Video-Induced Visual Invariances

no code implementations CVPR 2020 Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Xiaohua Zhai, Neil Houlsby, Sylvain Gelly, Mario Lucic

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI).

Ranked #14 on Image Classification on VTAB-1k (using extra training data)

Image Classification Self-Supervised Learning +1

Semantic Bottleneck Scene Generation

2 code implementations 26 Nov 2019 Samaneh Azadi, Michael Tschannen, Eric Tzeng, Sylvain Gelly, Trevor Darrell, Mario Lucic

For the former, we use an unconditional progressive segmentation generation network that captures the distribution of realistic semantic scene layouts.

Conditional Image Generation Image-to-Image Translation +1

On Mutual Information Maximization for Representation Learning

2 code implementations ICLR 2020 Michael Tschannen, Josip Djolonga, Paul K. Rubenstein, Sylvain Gelly, Mario Lucic

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

Inductive Bias Representation Learning +1
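The MI estimators analyzed in the paper above are typically variational lower bounds computed from paired "views". As a minimal sketch (assuming L2-normalized embeddings and a generic InfoNCE-style critic, not the paper's exact estimators):

```python
import numpy as np

def infonce_lower_bound(z1, z2, temperature=0.1):
    """InfoNCE lower bound on the MI between two batches of paired views.

    z1, z2: (batch, dim) arrays of L2-normalized embeddings; row i of z1 is
    paired with row i of z2 (positive), all other rows act as negatives.
    """
    n = z1.shape[0]
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Bound: log(n) plus the mean log-probability of the positive pairs.
    return np.log(n) + np.mean(np.diag(log_softmax))
```

Note the bound saturates at log(batch size), one of the reasons the paper argues that MI estimates alone cannot explain representation quality.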

Disentangling Factors of Variation Using Few Labels

no code implementations 3 May 2019 Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem

Recently, Locatello et al. (2019) demonstrated that unsupervised disentanglement learning without inductive biases is theoretically impossible, and that existing inductive biases and unsupervised methods do not reliably yield disentangled representations.

Disentanglement Model Selection

Recent Advances in Autoencoder-Based Representation Learning

no code implementations 12 Dec 2018 Michael Tschannen, Olivier Bachem, Mario Lucic

Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.

Disentanglement

Practical Full Resolution Learned Lossless Image Compression

3 code implementations CVPR 2019 Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool

We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs, PNG, WebP and JPEG 2000.

Image Compression

Born Again Neural Networks

1 code implementation ICML 2018 Tommaso Furlanello, Zachary C. Lipton, Michael Tschannen, Laurent Itti, Anima Anandkumar

Knowledge distillation (KD) consists of transferring knowledge from one machine learning model (the teacher) to another (the student).

Computer Vision Knowledge Distillation
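The soft-target part of the KD objective described above is just a cross-entropy between temperature-softened teacher and student distributions. A minimal sketch (plain Python, temperature `T` and the logit values are illustrative, not the paper's settings):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the softened teacher and student distributions
    (the soft-target term of the knowledge distillation objective)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s))
```

The loss is minimized when the student's distribution matches the teacher's; "born again" training applies it with student and teacher sharing the same architecture.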

Towards Image Understanding from Deep Compression without Decoding

1 code implementation ICLR 2018 Robert Torfason, Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool

Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods.

Classification General Classification +1

Conditional Probability Models for Deep Image Compression

1 code implementation CVPR 2018 Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc van Gool

During training, the auto-encoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation.

Image Compression MS-SSIM +2
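The entropy estimate the context model provides during training corresponds to the ideal code length of the latent symbols under the model's predicted probabilities. A minimal sketch of that rate computation (the per-symbol distributions here are toy stand-ins for the 3D-CNN context model's outputs):

```python
import math

def estimated_rate_bits(symbols, probs):
    """Ideal code length in bits when entropy-coding `symbols` with a
    context model; `probs` holds one predicted distribution (a dict
    mapping symbol -> probability) per symbol position."""
    return sum(-math.log2(p[s]) for s, p in zip(symbols, probs))
```

During training this quantity (in expectation, the cross-entropy) serves as the differentiable rate term that is traded off against distortion.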

StrassenNets: Deep Learning with a Multiplication Budget

1 code implementation ICML 2018 Michael Tschannen, Aran Khanna, Anima Anandkumar

A large fraction of the arithmetic operations required to evaluate deep neural networks (DNNs) consists of matrix multiplications, in both convolution and fully connected layers.

Knowledge Distillation Language Modelling +1

Convolutional Recurrent Neural Networks for Electrocardiogram Classification

1 code implementation 17 Oct 2017 Martin Zihlmann, Dmytro Perekrestenko, Michael Tschannen

We propose two deep neural network architectures for classification of arbitrary-length electrocardiogram (ECG) recordings and evaluate them on the atrial fibrillation (AF) classification data set provided by the PhysioNet/CinC Challenge 2017.

Classification Data Augmentation +1

Greedy Algorithms for Cone Constrained Optimization with Convergence Guarantees

no code implementations NeurIPS 2017 Francesco Locatello, Michael Tschannen, Gunnar Rätsch, Martin Jaggi

Greedy optimization methods such as Matching Pursuit (MP) and Frank-Wolfe (FW) algorithms regained popularity in recent years due to their simplicity, effectiveness and theoretical guarantees.

A Unified Optimization View on Generalized Matching Pursuit and Frank-Wolfe

no code implementations 21 Feb 2017 Francesco Locatello, Rajiv Khanna, Michael Tschannen, Martin Jaggi

Two of the most fundamental prototypes of greedy optimization are the matching pursuit and Frank-Wolfe algorithms.
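The matching pursuit half of that pair is easy to state concretely: greedily pick the dictionary atom most correlated with the current residual and subtract its contribution. A minimal sketch (assuming unit-norm atoms; this is textbook MP, not the paper's generalized variant):

```python
def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy matching pursuit: approximate `signal` (list of floats) as a
    sparse combination of unit-norm `atoms` (each a list of floats)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        # Select the atom most correlated with the current residual.
        scores = [dot(residual, a) for a in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coeffs[k] += scores[k]
        residual = [r - scores[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual
```

Frank-Wolfe differs in that the greedy step is a linear minimization over a constraint set followed by a convex combination with the current iterate, which is the connection the paper formalizes.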

Noisy subspace clustering via matching pursuits

no code implementations 11 Dec 2016 Michael Tschannen, Helmut Bölcskei

The clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for SSC and for the thresholding-based subspace clustering (TSC) algorithm due to Heckel and Bölcskei.

Robust nonparametric nearest neighbor random process clustering

no code implementations 4 Dec 2016 Michael Tschannen, Helmut Bölcskei

We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their generative models without prior knowledge of the model statistics and the number of generative models.

Deep Structured Features for Semantic Segmentation

no code implementations 26 Sep 2016 Michael Tschannen, Lukas Cavigelli, Fabian Mentzer, Thomas Wiatowski, Luca Benini

We propose a highly structured neural network architecture for semantic segmentation with an extremely small model size, suitable for low-power embedded and mobile platforms.

General Classification Semantic Segmentation

Discrete Deep Feature Extraction: A Theory and New Architectures

no code implementations 26 May 2016 Thomas Wiatowski, Michael Tschannen, Aleksandar Stanić, Philipp Grohs, Helmut Bölcskei

First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made, for the continuous-time case, in Mallat (2012) and in Wiatowski and Bölcskei (2015).

Facial Landmark Detection Feature Importance +2

Pursuits in Structured Non-Convex Matrix Factorizations

no code implementations 12 Feb 2016 Rajiv Khanna, Michael Tschannen, Martin Jaggi

Efficiently representing real world data in a succinct and parsimonious manner is of central importance in many fields.

Dimensionality-reduced subspace clustering

no code implementations 25 Jul 2015 Reinhard Heckel, Michael Tschannen, Helmut Bölcskei

Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations, and dimensions are all unknown.

Dimensionality Reduction

Nonparametric Nearest Neighbor Random Process Clustering

no code implementations 20 Apr 2015 Michael Tschannen, Helmut Bölcskei

We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their nonparametric generative models without prior knowledge of the model statistics and the number of generative models.

Subspace clustering of dimensionality-reduced data

no code implementations 27 Apr 2014 Reinhard Heckel, Michael Tschannen, Helmut Bölcskei

Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, assumed unknown.

Dimensionality Reduction
