no code implementations • ICML 2020 • Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei
We present an explicit deep network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional target distribution with finite differential entropy and Lipschitz-continuous pdf.
no code implementations • 8 Apr 2024 • Yani Zhang, Helmut Bölcskei
We develop a theory characterizing the fundamental capability of deep neural networks to learn, from evolution traces, the logical rules governing the behavior of cellular automata (CA).
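To make the setting concrete, here is a sketch under my own naming (the helper functions and the choice of Rule 110 are illustrative, not from the paper): evolution traces of an elementary cellular automaton are the kind of data from which a network would have to recover the governing local rule.

```python
# Hypothetical illustration: generating evolution traces of an elementary
# cellular automaton (ECA), the sort of training data from which, per the
# paper's theory, a deep network can learn the governing logical rule.
import numpy as np

def eca_step(state: np.ndarray, rule: int) -> np.ndarray:
    """One synchronous update of a 1-D binary CA under Wolfram rule `rule`."""
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    # Each cell's neighborhood (left, self, right) indexes one bit of `rule`.
    idx = (left << 2) | (state << 1) | right
    return (rule >> idx) & 1

def evolution_trace(rule: int, width: int = 64, steps: int = 32, seed: int = 0):
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=width)
    trace = [state]
    for _ in range(steps):
        state = eca_step(state, rule)
        trace.append(state)
    return np.stack(trace)       # shape (steps + 1, width): one row per time step

trace = evolution_trace(rule=110)   # Rule 110, a Turing-complete ECA
print(trace.shape)                  # (33, 64)
```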
no code implementations • 22 Jan 2024 • Yani Zhang, Helmut Bölcskei
An algorithm for extracting formulae in MV logic from deep ReLU networks is presented.
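For intuition (my illustration of the ReLU/MV-logic connection, not the extraction algorithm from the paper): the basic Łukasiewicz connectives of MV logic on truth values in [0, 1] are exactly one-ReLU expressions, which suggests why formulae in MV logic can be read off ReLU networks.

```python
# Sketch: Lukasiewicz (MV logic) connectives written as single-ReLU maps.
def relu(t):
    return max(0.0, t)

def mv_neg(x):          # negation: 1 - x
    return 1.0 - x

def mv_conj(x, y):      # strong conjunction: max(0, x + y - 1), one ReLU
    return relu(x + y - 1.0)

def mv_disj(x, y):      # strong disjunction: min(1, x + y) = 1 - relu(1 - x - y)
    return 1.0 - relu(1.0 - x - y)

print(mv_disj(0.7, 0.6))   # 1.0  (truncated sum)
print(mv_conj(0.7, 0.6))   # ~0.3 (truncated conjunction)
```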
no code implementations • 26 Jul 2021 • Dmytro Perekrestenko, Léandre Eberhard, Helmut Bölcskei
We show that every $d$-dimensional probability distribution of bounded support can be generated through deep ReLU networks out of a $1$-dimensional uniform input distribution.
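A one-dimensional caricature of this idea (my sketch, not the construction from the paper; the Beta(2, 2) target and all names are illustrative): ReLU networks compute exactly the piecewise-linear functions, and pushing uniform noise through a piecewise-linear approximation of the target's quantile function approximately samples from a bounded-support target.

```python
# Sketch: approximate sampling from a bounded-support target by pushing 1-D
# uniform noise through a piecewise-linear map, i.e., through something a
# ReLU network can represent exactly.
import numpy as np
from scipy.stats import beta

target = beta(2, 2)                        # bounded support [0, 1]
knots_u = np.linspace(0.0, 1.0, 33)        # breakpoints of the piecewise-linear map
knots_q = target.ppf(knots_u)              # quantile function at the breakpoints

u = np.random.default_rng(0).uniform(size=100_000)   # 1-D uniform input noise
samples = np.interp(u, knots_u, knots_q)   # piecewise-linear push-forward

print(samples.mean())                      # ~0.5, the Beta(2, 2) mean
```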
no code implementations • 6 May 2021 • Clemens Hutter, Recep Gül, Helmut Bölcskei
One of the most influential results in neural network theory is the universal approximation theorem [1, 2, 3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks.
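A hedged numerical illustration of this statement (the random-feature construction and all names below are mine, not from the paper): a single hidden ReLU layer with randomly drawn input weights, and only the output weights fit by least squares, already approximates a smooth target well on a compact set.

```python
# Sketch: single-hidden-layer approximation of a continuous function.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 512)[:, None]
y = np.cos(3 * x).ravel()                        # continuous target on a compact set

n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))    # random input weights
b = rng.uniform(-np.pi, np.pi, size=n_hidden)    # random biases
H = np.maximum(0.0, x @ W + b)                   # hidden-layer ReLU activations

a, *_ = np.linalg.lstsq(H, y, rcond=None)        # fit output weights only
print(np.max(np.abs(H @ a - y)))                 # typically a small uniform error
```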
no code implementations • 30 Jun 2020 • Dmytro Perekrestenko, Stephan Müller, Helmut Bölcskei
We present an explicit deep neural network construction that transforms uniformly distributed one-dimensional noise into an arbitrarily close approximation of any two-dimensional Lipschitz-continuous target distribution.
no code implementations • 21 Jun 2020 • Verner Vlačić, Helmut Bölcskei
In an effort to answer the identifiability question in greater generality, we consider arbitrary nonlinearities with potentially complicated affine symmetries, and we show that the symmetries can be used to find a rich set of networks giving rise to the same function $f$.
no code implementations • 11 Jun 2019 • Verner Vlačić, Helmut Bölcskei
In an effort to answer the identifiability question in greater generality, we derive necessary genericity conditions for the identifiability of neural networks of arbitrary depth and connectivity with an arbitrary nonlinearity.
no code implementations • 8 Jan 2019 • Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.
no code implementations • ICLR 2019 • Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei
We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.
no code implementations • 10 Jul 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth $N$, the average number of operationally significant nodes per layer.
no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements.
no code implementations • 12 Apr 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates.
no code implementations • 11 Dec 2016 • Michael Tschannen, Helmut Bölcskei
The clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for SSC and for the thresholding-based subspace clustering (TSC) algorithm due to Heckel and Bölcskei.
no code implementations • 4 Dec 2016 • Michael Tschannen, Helmut Bölcskei
We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their generative models without prior knowledge of the model statistics and the number of generative models.
no code implementations • 26 May 2016 • Thomas Wiatowski, Michael Tschannen, Aleksandar Stanić, Philipp Grohs, Helmut Bölcskei
First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made, for the continuous-time case, in Mallat, 2012, and Wiatowski and Bölcskei, 2015.
no code implementations • 29 Apr 2016 • Philipp Grohs, Thomas Wiatowski, Helmut Bölcskei
Wiatowski and Bölcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities.
no code implementations • 19 Dec 2015 • Thomas Wiatowski, Helmut Bölcskei
Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks, such as classification of images in the ImageNet data set, learning control policies for playing Atari games or the board game Go, and image captioning.
no code implementations • 25 Jul 2015 • Reinhard Heckel, Michael Tschannen, Helmut Bölcskei
Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, whose number, orientations, and dimensions are all unknown.
no code implementations • 21 Apr 2015 • Thomas Wiatowski, Helmut Bölcskei
Our generalized feature extractor is proven to be translation-invariant, and we develop deformation stability results for a larger class of deformations than those considered by Mallat.
no code implementations • 20 Apr 2015 • Michael Tschannen, Helmut Bölcskei
We consider the problem of clustering noisy finite-length observations of stationary ergodic random processes according to their nonparametric generative models without prior knowledge of the model statistics and the number of generative models.
no code implementations • 27 Apr 2014 • Reinhard Heckel, Michael Tschannen, Helmut Bölcskei
Subspace clustering refers to the problem of clustering unlabeled high-dimensional data points into a union of low-dimensional linear subspaces, assumed unknown.
no code implementations • 13 Mar 2014 • Reinhard Heckel, Eirikur Agustsson, Helmut Bölcskei
Subspace clustering refers to the problem of clustering high-dimensional data points into a union of low-dimensional linear subspaces, where the number of subspaces, their dimensions and orientations are all unknown.
no code implementations • 13 Nov 2013 • Alexander Jung, Reinhard Heckel, Helmut Bölcskei, Franz Hlawatsch
We propose a method for inferring the conditional independence graph (CIG) of a high-dimensional discrete-time Gaussian vector random process from finite-length observations.
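For intuition only, a deliberately simplified sketch (i.i.d. Gaussian vectors rather than the vector process treated in the paper; all names below are mine): edges of the CIG of a Gaussian vector correspond to nonzero entries of its precision matrix, so thresholding estimated partial correlations yields a graph estimate.

```python
# Sketch: CIG estimation in the simplified i.i.d. Gaussian setting.
import numpy as np

def cig_edges(samples: np.ndarray, thresh: float = 0.1):
    """samples: (n, p) rows of i.i.d. Gaussian vectors; returns an edge set."""
    precision = np.linalg.inv(np.cov(samples, rowvar=False))
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)   # partial correlations
    p = precision.shape[0]
    return {(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(partial_corr[i, j]) > thresh}
```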
1 code implementation • 18 Jul 2013 • Reinhard Heckel, Helmut Bölcskei
We propose a simple low-complexity subspace clustering algorithm, which applies spectral clustering to an adjacency matrix obtained by thresholding the correlations between data points.
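A minimal sketch of this recipe (function names, the nearest-neighbor parameter q, and the exact edge weights are my assumptions; the paper's adjacency weights may differ): keep, for each point, its q largest absolute correlations with the other points, then spectrally cluster the resulting graph.

```python
# Sketch: thresholded-correlation subspace clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def tsc(X: np.ndarray, n_clusters: int, q: int) -> np.ndarray:
    """X: (n_points, dim) data matrix; returns cluster labels."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto unit sphere
    C = np.abs(Xn @ Xn.T)                              # absolute correlations
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(len(C)):               # keep q strongest correlations per point
        nbrs = np.argsort(C[i])[-q:]
        A[i, nbrs] = C[i, nbrs]
    A = np.maximum(A, A.T)                # symmetrize the adjacency matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)

# usage: labels = tsc(X, n_clusters=3, q=5)
```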
no code implementations • 15 May 2013 • Reinhard Heckel, Helmut Bölcskei
We consider the problem of clustering noisy high-dimensional data points into a union of low-dimensional subspaces and a set of outliers.