Search Results for author: Helmut Bölcskei

Found 26 papers, 1 paper with code

Constructive universal distribution generation through deep ReLU networks

no code implementations ICML 2020 Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei

We present an explicit deep network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional target distribution of finite differential entropy and Lipschitz-continuous pdf.

Cellular automata, many-valued logic, and deep neural networks

no code implementations 8 Apr 2024 Yani Zhang, Helmut Bölcskei

We develop a theory characterizing the fundamental capability of deep neural networks to learn, from evolution traces, the logical rules governing the behavior of cellular automata (CA).

Extracting Formulae in Many-Valued Logic from Deep Neural Networks

no code implementations 22 Jan 2024 Yani Zhang, Helmut Bölcskei

An algorithm for extracting formulae in MV logic from deep ReLU networks is presented.

High-Dimensional Distribution Generation Through Deep Neural Networks

no code implementations 26 Jul 2021 Dmytro Perekrestenko, Léandre Eberhard, Helmut Bölcskei

We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a $1$-dimensional uniform input distribution.

Quantization
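
One ingredient behind such results can be illustrated concretely: a single uniform sample carries many independent uniform bits, so de-interleaving its binary digits yields two nearly independent uniform coordinates. The following is a simplified numpy sketch of that bit-splitting idea only, not the paper's ReLU-network construction; `split_uniform` and `n_bits` are illustrative names.

```python
import numpy as np

def split_uniform(u, n_bits=26):
    """De-interleave the binary digits of uniform samples u in [0, 1)
    into two nearly independent uniform coordinates (finite precision).

    Simplified sketch of the bit-splitting idea behind generating
    higher-dimensional distributions from 1-D uniform noise; not the
    paper's ReLU-network construction.
    """
    shifts = np.arange(2 * n_bits - 1, -1, -1, dtype=np.uint64)
    ints = (u * 2.0 ** (2 * n_bits)).astype(np.uint64)
    bits = ((ints[:, None] >> shifts) & np.uint64(1)).astype(float)
    w = 0.5 ** np.arange(1, n_bits + 1)          # binary-digit weights
    return bits[:, 0::2] @ w, bits[:, 1::2] @ w  # even / odd digit streams

u = np.random.default_rng(1).uniform(size=10_000)
x, y = split_uniform(u)  # two ~Uniform[0, 1) coordinates from one
```

Composing such a splitter with per-coordinate inverse-CDF transforms is one route from 1-D noise to a d-dimensional target.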

Metric Entropy Limits on Recurrent Neural Network Learning of Linear Dynamical Systems

no code implementations 6 May 2021 Clemens Hutter, Recep Gül, Helmut Bölcskei

One of the most influential results in neural network theory is the universal approximation theorem [1, 2, 3] which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks.

Constructive Universal High-Dimensional Distribution Generation through Deep ReLU Networks

no code implementations 30 Jun 2020 Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei

We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution.

Affine symmetries and neural network identifiability

no code implementations 21 Jun 2020 Verner Vlačić, Helmut Bölcskei

In an effort to answer the identifiability question in greater generality, we consider arbitrary nonlinearities with potentially complicated affine symmetries, and we show that the symmetries can be used to find a rich set of networks giving rise to the same function $f$.

Neural network identifiability for a family of sigmoidal nonlinearities

no code implementations 11 Jun 2019 Verner Vlačić, Helmut Bölcskei

In an effort to answer the identifiability question in greater generality, we derive necessary genericity conditions for the identifiability of neural networks of arbitrary depth and connectivity with an arbitrary nonlinearity.

Deep Neural Network Approximation Theory

no code implementations 8 Jan 2019 Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei

This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.

Handwritten Digit Recognition Image Classification +1

The universal approximation power of finite-width deep ReLU networks

no code implementations ICLR 2019 Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei

We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.

Topology Reduction in Deep Convolutional Feature Extraction Networks

no code implementations 10 Jul 2017 Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei

Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes---for fixed network depth $N$---the average number of operationally significant nodes per layer.

Optimal Approximation with Sparsely Connected Deep Neural Networks

no code implementations 4 May 2017 Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen

Specifically, all function classes that are optimally approximated by a general class of representation systems---so-called \emph{affine systems}---can be approximated by deep neural networks with minimal connectivity and memory requirements.

Energy Propagation in Deep Convolutional Neural Networks

no code implementations 12 Apr 2017 Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei

This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates.

Noisy subspace clustering via matching pursuits

no code implementations 11 Dec 2016 Michael Tschannen, Helmut Bölcskei

The clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for SSC and for the thresholding-based subspace clustering (TSC) algorithm due to Heckel and Bölcskei.

Clustering

Robust nonparametric nearest neighbor random process clustering

no code implementations 4 Dec 2016 Michael Tschannen, Helmut Bölcskei

We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their generative models without prior knowledge of the model statistics and the number of generative models.

Clustering
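
The nonparametric nearest-neighbor idea can be sketched in a few lines of numpy, assuming (as one simple instantiation) smoothed periodograms as spectral estimates and an L1 distance between them; the function names and AR(1) test data below are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def psd_estimate(x, smooth=9):
    """Nonparametric PSD estimate: smoothed, normalized periodogram."""
    p = np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)
    p = np.convolve(p, np.ones(smooth) / smooth, mode="same")
    return p / p.sum()

def distance_matrix(observations):
    """Pairwise L1 distances between the processes' estimated PSDs."""
    P = np.array([psd_estimate(x) for x in observations])
    return np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)

rng = np.random.default_rng(2)

def ar1(a, n=2048):  # sample path of an AR(1) process x_t = a x_{t-1} + noise
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
    return x

# two generative models: lowpass AR(0.9) vs highpass AR(-0.9)
obs = [ar1(0.9) for _ in range(4)] + [ar1(-0.9) for _ in range(4)]
D = distance_matrix(obs)  # nearest neighbors share a generative model
```

Clustering then proceeds on the nearest-neighbor graph induced by D, without fitting a parametric model to either process.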

Discrete Deep Feature Extraction: A Theory and New Architectures

no code implementations 26 May 2016 Thomas Wiatowski, Michael Tschannen, Aleksandar Stanić, Philipp Grohs, Helmut Bölcskei

First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made---for the continuous-time case---in Mallat, 2012, and Wiatowski and Bölcskei, 2015.

Facial Landmark Detection Feature Importance +2

Deep Convolutional Neural Networks on Cartoon Functions

no code implementations 29 Apr 2016 Philipp Grohs, Thomas Wiatowski, Helmut Bölcskei

Wiatowski and Bölcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities.

Translation

A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction

no code implementations 19 Dec 2015 Thomas Wiatowski, Helmut Bölcskei

Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks such as classification of images in the ImageNet data set, control-policy-learning to play Atari games or the board game Go, and image captioning.

Atari Games Image Captioning +1

Dimensionality-reduced subspace clustering

no code implementations 25 Jul 2015 Reinhard Heckel, Michael Tschannen, Helmut Bölcskei

Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations, and dimensions are all unknown.

Clustering Dimensionality Reduction

Deep Convolutional Neural Networks Based on Semi-Discrete Frames

no code implementations 21 Apr 2015 Thomas Wiatowski, Helmut Bölcskei

Our generalized feature extractor is proven to be translation-invariant, and we develop deformation stability results for a larger class of deformations than those considered by Mallat.

Translation

Nonparametric Nearest Neighbor Random Process Clustering

no code implementations 20 Apr 2015 Michael Tschannen, Helmut Bölcskei

We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their nonparametric generative models without prior knowledge of the model statistics and the number of generative models.

Clustering

Subspace clustering of dimensionality-reduced data

no code implementations 27 Apr 2014 Reinhard Heckel, Michael Tschannen, Helmut Bölcskei

Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, assumed unknown.

Clustering Dimensionality Reduction

Neighborhood Selection for Thresholding-based Subspace Clustering

no code implementations 13 Mar 2014 Reinhard Heckel, Eirikur Agustsson, Helmut Bölcskei

Subspace clustering refers to the problem of clustering high-dimensional data points into a union of low-dimensional linear subspaces, where the number of subspaces, their dimensions and orientations are all unknown.

Clustering

Compressive Nonparametric Graphical Model Selection For Time Series

no code implementations 13 Nov 2013 Alexander Jung, Reinhard Heckel, Helmut Bölcskei, Franz Hlawatsch

We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations.

Model Selection Time Series +1

Robust Subspace Clustering via Thresholding

1 code implementation 18 Jul 2013 Reinhard Heckel, Helmut Bölcskei

We propose a simple low-complexity subspace clustering algorithm, which applies spectral clustering to an adjacency matrix obtained by thresholding the correlations between data points.

Clustering
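
The thresholding step described in the abstract fits in a few lines of numpy; spectral clustering is then run on the resulting adjacency matrix. This is a simplified sketch, not the authors' reference implementation; `tsc_adjacency` and the toy data are illustrative names.

```python
import numpy as np

def tsc_adjacency(X, q):
    """Adjacency matrix for thresholding-based subspace clustering:
    for every data point, keep only the q largest absolute correlations
    with the other points (simplified sketch)."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)  # unit-norm columns
    C = np.abs(Xn.T @ Xn)                              # pairwise |correlations|
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for j in range(C.shape[1]):
        keep = np.argsort(C[:, j])[-q:]                # q nearest neighbors
        A[keep, j] = C[keep, j]
    return np.maximum(A, A.T)  # symmetrize; feed to spectral clustering

# toy data: three points on each of two orthogonal lines in R^4
u, v = np.array([1.0, 0, 0, 0]), np.array([0, 0, 0, 1.0])
X = np.column_stack([c * u for c in (1, -2, 3)] + [c * v for c in (2, -1, 4)])
A = tsc_adjacency(X, q=2)  # block-diagonal: no edges across subspaces
```

Because points in the same subspace are maximally correlated while the two subspaces here are orthogonal, the thresholded graph splits into one connected component per subspace, which spectral clustering recovers.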

Noisy Subspace Clustering via Thresholding

no code implementations 15 May 2013 Reinhard Heckel, Helmut Bölcskei

We consider the problem of clustering noisy high-dimensional data points into a union of low-dimensional subspaces and a set of outliers.

Clustering Outlier Detection
