Search Results for author: Simon Wiedemann

Found 13 papers, 5 papers with code

A Deep Learning Method for Simultaneous Denoising and Missing Wedge Reconstruction in Cryogenic Electron Tomography

1 code implementation • 9 Nov 2023 • Simon Wiedemann, Reinhard Heckel

At the same time, DeepDeWedge is simpler than this two-step approach, as it does denoising and missing wedge reconstruction simultaneously rather than sequentially.

Cryogenic Electron Tomography • Denoising +1

FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4bit-Compact Multilayer Perceptrons

no code implementations • 17 Dec 2020 • Simon Wiedemann, Suhas Shivapakash, Pablo Wiedemann, Daniel Becking, Wojciech Samek, Friedel Gerfers, Thomas Wiegand

With the growing demand for deploying deep learning models to the "edge", it is paramount to develop techniques that allow executing state-of-the-art models within very tight resource constraints.

Quantization

Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)

2 code implementations • 2 Apr 2020 • Arturo Marban, Daniel Becking, Simon Wiedemann, Wojciech Samek

To address this problem, we propose Entropy-Constrained Trained Ternarization (EC2T), a general framework to create sparse and ternary neural networks which are efficient in terms of storage (e.g., at most two binary masks and two full-precision values are required to store a weight matrix) and computation (e.g., MAC operations are reduced to a few accumulations plus two multiplications). A toy sketch of this storage and compute scheme follows this entry.

Image Classification
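The storage and compute idea quoted in the snippet above can be made concrete with a small sketch. This is my own illustration rather than the authors' code; the mask density and the two values w_p, w_n are arbitrary assumptions. A ternary weight matrix whose nonzero entries take only the values +w_p and -w_n is stored as two binary masks plus those two scalars, and a matrix-vector product then reduces to mask-selected accumulations followed by two scalar multiplications.

```python
# Toy illustration of the EC2T storage/compute idea described above
# (not the authors' implementation; shapes and values are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
w_p, w_n = 0.37, 0.52                           # the two full-precision values
mask_p = rng.random((4, 8)) < 0.15              # binary mask of +w_p positions
mask_n = (rng.random((4, 8)) < 0.15) & ~mask_p  # binary mask of -w_n positions
x = rng.standard_normal(8)

# Dense reference: reconstruct the ternary matrix and multiply.
W = w_p * mask_p - w_n * mask_n
y_dense = W @ x

# Mask-based compute: per-row accumulations selected by the masks,
# followed by only two scalar multiplications.
acc_p = mask_p @ x
acc_n = mask_n @ x
y_masks = w_p * acc_p - w_n * acc_n

assert np.allclose(y_dense, y_masks)
```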

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

1 code implementation • 18 Dec 2019 • Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.

Explainable Artificial Intelligence (XAI) • Model Compression +2

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

1 code implementation • 27 Jul 2019 • Simon Wiedemann, Heiner Kirchoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Tung Nguyen, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek

The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information.

Neural Network Compression • Quantization +1

Robust and Communication-Efficient Federated Learning from Non-IID Data

1 code implementation • 7 Mar 2019 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. A generic sketch of one such training round follows this entry.

Federated Learning • Privacy Preserving
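For context on the setting described in this snippet, here is a minimal sketch of plain federated averaging, a generic baseline and not the robust, communication-efficient scheme proposed in the paper; the model, clients, and hyperparameters below are made up for illustration. Each client trains on its own (non-IID) data and sends only model parameters to the server, which aggregates them.

```python
# Generic federated-averaging sketch (an assumed baseline for illustration,
# not the compression scheme proposed in the paper above).
import numpy as np

def local_update(w, X, y, lr=0.01, steps=20):
    """A client's local training: gradient steps on a least-squares loss.
    Only the updated weight vector leaves the client, never (X, y)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """Server aggregates the local models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])
# Non-IID toy data: each client sees differently shifted inputs.
clients = []
for shift in (-2.0, 0.0, 3.0):
    X = rng.standard_normal((50, 3)) + shift
    y = X @ w_true + 0.1 * rng.standard_normal(50)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(30):
    w = fedavg_round(w, clients)
print(w)  # approaches w_true without any client sharing its raw data
```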

Entropy-Constrained Training of Deep Neural Networks

no code implementations • 18 Dec 2018 • Simon Wiedemann, Arturo Marban, Klaus-Robert Müller, Wojciech Samek

We propose a general framework for neural network compression that is motivated by the Minimum Description Length (MDL) principle.

Neural Network Compression

Compact and Computationally Efficient Representations of Deep Neural Networks

no code implementations • NIPS Workshop CDNNRIA 2018 • Simon Wiedemann, Klaus-Robert Mueller, Wojciech Samek

However, most of these common matrix storage formats make strong statistical assumptions about the distribution of the elements in the matrix and therefore cannot efficiently represent the entire set of matrices that exhibit low-entropy statistics (thus, the entire set of compressed neural network weight matrices).

Compact and Computationally Efficient Representation of Deep Neural Networks

no code implementations • 27 May 2018 • Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, implying that they are guaranteed to become more efficient as the entropy of the matrix is reduced.
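To make the entropy bound concrete, the following toy calculation (my own illustration, not code from the paper) draws a matrix whose entries come from a small, highly skewed set of values, computes its empirical entropy, and compares the implied per-element storage cost with dense 32-bit storage.

```python
# Toy illustration of the entropy argument above (not code from the paper):
# a matrix with few, unevenly distributed values has low empirical entropy,
# so an entropy-achieving format needs far fewer bits than dense float32.
import numpy as np

rng = np.random.default_rng(0)
values = np.array([0.0, -0.25, 0.25, 1.0])   # few distinct weight values
probs = np.array([0.85, 0.07, 0.07, 0.01])   # highly skewed distribution
W = rng.choice(values, size=(256, 256), p=probs)

# Empirical entropy in bits per element.
_, counts = np.unique(W, return_counts=True)
p = counts / W.size
H = -(p * np.log2(p)).sum()

print(f"empirical entropy:     {H:.2f} bits/element")        # well below 1 bit
print(f"entropy-bounded size:  {H * W.size / 8 / 1024:.1f} KiB")
print(f"dense float32 size:    {32 * W.size / 8 / 1024:.1f} KiB")
```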

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication

no code implementations • 22 May 2018 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general.

Binarization
