no code implementations • 6 May 2021 • Peter Hinz
For fixed training data and fixed network parameters in the other layers, the L1 loss of a ReLU neural network is a piecewise affine function of the first layer's parameters.
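A minimal numerical sketch of this claim, assuming a toy one-hidden-layer network with random weights and data (all names and sizes below are illustrative, not from the paper): fixing everything except a single first-layer weight and scanning it over a grid, the second differences of the L1 loss vanish away from finitely many kinks, exactly as expected of a piecewise affine function.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))    # fixed training inputs
y = rng.normal(size=20)         # fixed training targets
W1 = rng.normal(size=(4, 3))    # first-layer weights; we vary W1[0, 0]
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)         # fixed second-layer weights

def l1_loss(t):
    W = W1.copy()
    W[0, 0] = t                              # vary one first-layer parameter
    hidden = np.maximum(X @ W.T + b1, 0.0)   # ReLU activations, shape (20, 4)
    pred = hidden @ w2
    return np.abs(pred - y).sum()            # L1 training loss

ts = np.linspace(-3.0, 3.0, 2001)
losses = np.array([l1_loss(t) for t in ts])
# Second differences vanish wherever the loss is locally affine in t;
# the finitely many kinks show up as isolated non-zero entries.
kinks = np.flatnonzero(~np.isclose(np.diff(losses, 2), 0.0, atol=1e-9))
print(f"{kinks.size} grid points sit at kinks; the rest are locally affine")
```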
no code implementations • 31 Mar 2021 • Peter Hinz
Several current bounds on the maximal number of affine regions of a ReLU feed-forward neural network are special cases of the framework of [1], which relies on layer-wise activation histogram bounds.
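As a rough illustration of one such special case, the sketch below computes a simple product-form bound in the spirit of the product-of-binomial-sums bound of Serra et al.: each layer of width n acting on an at-most d-dimensional image contributes a factor of at most sum_{j<=d} C(n, j) regions. The framework's histogram-based bounds are tighter; this is only the coarse special case, and the function name is illustrative.

```python
from math import comb

def product_bound(input_dim, widths):
    """Coarse upper bound on the number of affine regions: a product over
    layers of hyperplane-arrangement region counts."""
    bound, d = 1, input_dim
    for n in widths:
        bound *= sum(comb(n, j) for j in range(min(d, n) + 1))
        d = min(d, n)   # the image dimension cannot exceed any layer width
    return bound

print(product_bound(input_dim=2, widths=[8, 8]))  # e.g. a 2-8-8 ReLU network
```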
1 code implementation • 27 Nov 2020 • Peter Hinz, Sara van de Geer
Feed-forward ReLU neural networks partition their input domain into finitely many "affine regions" of constant neuron activation pattern and affine behaviour.
Optimization and Control
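The partition into affine regions can be illustrated empirically. The sketch below, assuming a toy two-hidden-layer ReLU network with random weights (all sizes illustrative), samples the input square on a grid and counts the distinct activation patterns that occur; each pattern indexes one region on which the network is affine. Note that grid sampling only lower-bounds the true number of regions.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 16)), rng.normal(size=16)

xs = np.linspace(-2, 2, 400)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

h1 = grid @ W1.T + b1            # pre-activations, first layer
a1 = np.maximum(h1, 0.0)         # ReLU
h2 = a1 @ W2.T + b2              # pre-activations, second layer

# Concatenate the sign patterns of both layers into one activation pattern;
# points sharing a pattern lie in the same affine region.
pattern = np.concatenate([h1 > 0, h2 > 0], axis=1)
n_patterns = len(np.unique(pattern, axis=0))
print(f"distinct activation patterns on the grid: {n_patterns}")
```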
no code implementations • 17 Jan 2019 • Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber
We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results.
no code implementations • 5 Jun 2018 • Peter Hinz, Sara van de Geer
More precisely, the information about the number of regions per dimensionality is pushed through the layers: starting from a single region with the full input dimension of the network, a recursion tracks how many regions of each output dimensionality a subsequent layer of a given width can induce on an input region of a given dimensionality.
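A hedged sketch of this recursion, with assumed illustrative names: the histogram h maps each dimensionality to a number of regions and starts as one region of full input dimension. The per-layer kernel K(j, n, i) used here — an upper bound, under general-position assumptions, on how many output-dimension-j regions a width-n ReLU layer can induce on an i-dimensional input region — is a simplified stand-in, not the exact, tighter kernel derived in the paper.

```python
from collections import defaultdict
from math import comb

def K(j, n, i):
    """Simplified kernel (illustrative, not the paper's): an upper bound on
    the number of output-dimension-j regions a width-n ReLU layer induces on
    an i-dimensional region, assuming hyperplanes in general position."""
    d = min(i, n)
    if j > d:
        return 0
    if j < d:
        # A region whose image has dimension j < d has exactly j active
        # units; each choice of active units yields at most one region.
        return comb(n, j)
    # Top bin: regions keeping the maximal image dimension d, capped by the
    # total region count of an arrangement of n hyperplanes in R^i.
    return min(sum(comb(n, a) for a in range(d, n + 1)),
               sum(comb(n, k) for k in range(d + 1)))

def region_histogram_bound(input_dim, widths):
    h = {input_dim: 1}               # start: one region of full input dimension
    for n in widths:                 # push the histogram through each layer
        new_h = defaultdict(int)
        for i, count in h.items():
            for j in range(min(i, n) + 1):
                new_h[j] += count * K(j, n, i)
        h = dict(new_h)
    return h, sum(h.values())

hist, total = region_histogram_bound(input_dim=2, widths=[4, 4])
print("regions per output dimensionality:", hist, "| total bound:", total)
```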