Search Results for author: Peter Hinz

Found 5 papers, 1 paper with code

The layer-wise L1 Loss Landscape of Neural Nets is more complex around local minima

no code implementations • 6 May 2021 • Peter Hinz

For fixed training data and fixed network parameters in the other layers, the L1 loss of a ReLU neural network is a piecewise affine function of the first layer's parameters.
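This is easy to see numerically. The sketch below (not the paper's code; the network, data, and swept weight are made-up toys) fixes everything except a single first-layer weight of a one-hidden-layer ReLU network and evaluates the L1 loss along that coordinate; the resulting curve is piecewise affine, with kinks where a hidden neuron switches on or off or a residual changes sign.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))       # fixed training inputs (toy data)
    y = rng.normal(size=20)            # fixed training targets

    W1 = rng.normal(size=(5, 3))       # first layer; we sweep W1[0, 0]
    b1 = rng.normal(size=5)
    W2 = rng.normal(size=(1, 5))       # fixed second layer
    b2 = rng.normal(size=1)

    def l1_loss(w):
        # L1 loss as a function of the single first-layer weight W1[0, 0]
        W = W1.copy()
        W[0, 0] = w
        hidden = np.maximum(X @ W.T + b1, 0.0)   # ReLU activations
        pred = (hidden @ W2.T + b2).ravel()
        return np.abs(pred - y).sum()

    ts = np.linspace(-3.0, 3.0, 601)
    losses = [l1_loss(t) for t in ts]
    # Plotting losses against ts shows straight segments joined at kinks.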

Using activation histograms to bound the number of affine regions in ReLU feed-forward neural networks

no code implementations • 31 Mar 2021 • Peter Hinz

Several current bounds on the maximal number of affine regions of a ReLU feed-forward neural network are special cases of the framework [1], which relies on layer-wise activation histogram bounds.
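Upper bounds of this kind can be sanity-checked against a simple empirical lower bound: distinct hidden-neuron activation patterns observed on sampled inputs correspond to distinct affine regions, so counting distinct patterns bounds the region count from below. A minimal sketch with a toy network (this is an illustration, not the paper's framework):

    import numpy as np

    rng = np.random.default_rng(1)
    widths = [2, 3, 3, 1]                       # toy architecture
    layers = [(rng.normal(size=(m, n)), rng.normal(size=m))
              for n, m in zip(widths[:-1], widths[1:])]

    def pattern(x):
        # concatenated 0/1 activation pattern of the hidden layers at x
        bits, h = [], x
        for W, b in layers[:-1]:                # hidden layers only
            pre = W @ h + b
            bits.append(pre > 0)
            h = np.maximum(pre, 0.0)
        return tuple(np.concatenate(bits))

    samples = rng.uniform(-5.0, 5.0, size=(10000, widths[0]))
    print(len({pattern(x) for x in samples}))   # lower bound on #regions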

Deep ReLU Programming

1 code implementation • 27 Nov 2020 • Peter Hinz, Sara van de Geer

Feed-forward ReLU neural networks partition their input domain into finitely many "affine regions" of constant neuron activation pattern and affine behaviour.

Optimization and Control
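On each such region the network coincides with a single affine map, which can be read off by masking the weights with the region's activation pattern. A minimal one-hidden-layer sketch (toy parameters, hypothetical function names) that recovers the map A x + c at an input and verifies it on a nearby point:

    import numpy as np

    rng = np.random.default_rng(2)
    W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
    W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

    def net(x):
        return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    def local_affine_map(x):
        # A, c such that net(y) == A @ y + c on the region containing x
        D = np.diag((W1 @ x + b1 > 0).astype(float))  # activation pattern
        return W2 @ D @ W1, W2 @ D @ b1 + b2

    x = np.array([0.3, -1.2])
    A, c = local_affine_map(x)
    eps = 1e-4 * rng.normal(size=2)   # small enough to stay in the region
    assert np.allclose(net(x + eps), A @ (x + eps) + c)

That the network is exactly affine on each region is what makes a linear-programming-style traversal from region to region conceivable in the first place.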

A Framework for the construction of upper bounds on the number of affine linear regions of ReLU feed-forward neural networks

no code implementations • 5 Jun 2018 • Peter Hinz, Sara van de Geer

More precisely, information about the number of regions per dimensionality is pushed through the layers: starting from a single region with the input dimension of the neural network, a recursion is applied that is based on an analysis of how many regions of each output dimensionality a subsequent layer of a given width can induce on an input region of a given dimensionality.
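A crude scalar special case of such a recursion (the framework itself tracks a whole histogram of region counts per dimensionality) combines the classical hyperplane-arrangement bound with the fact that a layer of width n cannot raise the region dimension above n. A hedged sketch, not the paper's refined bound:

    from math import comb

    def regions_upper_bound(input_dim, hidden_widths):
        # n hyperplanes cut a d-dimensional region into at most
        # sum_{i=0}^{d} C(n, i) pieces, each of dimension <= min(d, n)
        count, d = 1, input_dim        # one region of the input dimension
        for n in hidden_widths:
            count *= sum(comb(n, i) for i in range(min(d, n) + 1))
            d = min(d, n)
        return count

    print(regions_upper_bound(2, [3, 3]))   # 49 for a toy 2-3-3 network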
