2 code implementations • 25 Oct 2022 • Pavol Harar, Dennis Elbrächter, Monika Dörfler, Kory D. Johnson
Given independent and identically distributed samples of some random variable $S$ and the continuous cumulative distribution function of a desired target $T$, the proposed method provably produces a consistent estimator of the transformation $R$ that satisfies $R(S)=T$ in distribution.
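Such a transformation can be thought of as the quantile function of the target composed with the (empirical) distribution function of the source. Below is a minimal sketch of that plug-in construction, assuming a standard-normal target; the function names and the specific estimator are illustrative and not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def estimate_transformation(samples_s, target_ppf=stats.norm.ppf):
    """Plug-in estimator of a map R with R(S) ~ target in distribution.

    Composes the target quantile function (ppf) with the empirical CDF
    of the observed samples of S.  Illustrative sketch only.
    """
    sorted_s = np.sort(samples_s)
    n = len(sorted_s)

    def R(x):
        # Empirical CDF evaluated at x, kept strictly inside (0, 1)
        # so that the quantile function stays finite.
        u = np.searchsorted(sorted_s, x, side="right") / (n + 1)
        u = np.clip(u, 1 / (n + 1), n / (n + 1))
        return target_ppf(u)

    return R

# Usage: map exponentially distributed data towards a standard normal.
rng = np.random.default_rng(0)
s = rng.exponential(scale=2.0, size=10_000)
R = estimate_transformation(s)
transformed = R(s)
print(transformed.mean(), transformed.std())  # roughly 0 and 1
```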
no code implementations • NeurIPS 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs
Approximation capabilities of neural networks can be used to deal with the latter non-convexity, which allows us to establish that, for sufficiently large networks, local minima of a regularized optimization problem on the realization space are almost optimal.
no code implementations • 13 May 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs, Arnulf Jentzen
Although the classical derivative exists almost everywhere for neural networks with locally Lipschitz continuous activation functions, the standard chain rule is in general not applicable.
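A standard illustration of this point (not taken from the paper) is the function $x \mapsto \mathrm{ReLU}(x) - \mathrm{ReLU}(-x)$, which equals the identity and therefore has derivative $1$ everywhere, while formally applying the chain rule with the common convention $\mathrm{ReLU}'(0) = 0$ yields $0$ at the origin. A minimal sketch:

```python
def relu(x):
    return max(x, 0.0)

def relu_prime(x):
    # Convention used by most autodiff frameworks: derivative 0 at the kink.
    return 1.0 if x > 0 else 0.0

def f(x):
    # f(x) = ReLU(x) - ReLU(-x) is just the identity, so f'(x) = 1 for all x.
    return relu(x) - relu(-x)

def f_prime_chain_rule(x):
    # Formal chain rule: d/dx ReLU(x) - d/dx ReLU(-x)
    #                  = ReLU'(x) - ReLU'(-x) * (-1).
    return relu_prime(x) + relu_prime(-x)

print(f_prime_chain_rule(1.0))  # 1.0, correct away from the kink
print(f_prime_chain_rule(0.0))  # 0.0, although the true derivative is 1
```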
no code implementations • 17 Jan 2019 • Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber
We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results.
no code implementations • 8 Jan 2019 • Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.
no code implementations • ICLR 2019 • Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei
We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.
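For reference, the Weierstrass function is commonly written as $W(x) = \sum_{k=0}^{\infty} a^k \cos(b^k \pi x)$ with $0 < a < 1$, which is continuous by uniform convergence and nowhere differentiable under conditions such as $ab \ge 1$ (Hardy). A minimal sketch of a truncated series with illustrative parameters, not the ones used in the paper:

```python
import numpy as np

def weierstrass(x, a=0.5, b=3.0, terms=30):
    """Truncated Weierstrass series  sum_k a^k * cos(b^k * pi * x).

    Illustrative parameters only; the series converges uniformly for
    0 < a < 1, so the truncation approximates the continuous limit.
    """
    x = np.asarray(x, dtype=float)
    w = np.zeros_like(x)
    for k in range(terms):
        w += a**k * np.cos(b**k * np.pi * x)
    return w

xs = np.linspace(-2.0, 2.0, 4001)
ys = weierstrass(xs)
print(ys.min(), ys.max())
```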