no code implementations • NeurIPS 2023 • Vincent Froese, Christoph Hertrich
We also answer a question posed by Froese et al. [JAIR '22] by proving W[1]-hardness for four ReLUs (or two linear threshold neurons) with zero training error.
no code implementations • 24 Feb 2023 • Christian Haase, Christoph Hertrich, Georg Loho
We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width.
no code implementations • NeurIPS 2023 • Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
We consider the problem of finding weights and biases for a two-layer fully connected neural network to fit a given set of data points as well as possible, also known as Empirical Risk Minimization.
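As a concrete illustration of the Empirical Risk Minimization problem just described, the following is a minimal gradient-descent sketch for fitting a two-layer fully connected ReLU network to data points. This is a generic heuristic for illustration only, not the paper's method (the paper studies the computational complexity of exact minimization); all function and parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def fit_two_layer(X, y, hidden=8, lr=0.05, steps=2000):
    """Heuristic sketch: fit f(x) = sum_j v_j * relu(w_j . x + b_j)
    to (X, y) by gradient descent on the mean squared error."""
    n, d = X.shape
    W = rng.normal(size=(hidden, d))   # first-layer weights
    b = rng.normal(size=hidden)        # first-layer biases
    v = rng.normal(size=hidden)        # output-layer weights
    for _ in range(steps):
        H = relu(X @ W.T + b)          # hidden activations, shape (n, hidden)
        err = H @ v - y                # residuals
        # gradients of the mean squared error
        gv = H.T @ err / n
        gH = np.outer(err, v) * (H > 0)
        gW = gH.T @ X / n
        gb = gH.sum(axis=0) / n
        W -= lr * gW
        b -= lr * gb
        v -= lr * gv
    return W, b, v

# usage: fit a small synthetic data set whose target is itself a tiny ReLU net
X = rng.uniform(-1, 1, size=(50, 2))
y = relu(X[:, 0]) - relu(X[:, 1])
W, b, v = fit_two_layer(X, y)
mse = float(np.mean((relu(X @ W.T + b) @ v - y) ** 2))
```

Note the contrast with the paper's setting: gradient descent only approximates a local optimum, whereas the hardness results concern finding an exact global minimizer of the empirical risk.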
1 code implementation • NeurIPS 2021 • Christoph Hertrich, Amitabh Basu, Marco Di Summa, Martin Skutella
We contribute to a better understanding of the class of functions that can be represented by a neural network with ReLU activations and a given architecture.
no code implementations • 18 May 2021 • Vincent Froese, Christoph Hertrich, Rolf Niedermeier
In particular, we extend a known polynomial-time algorithm for constant $d$ and convex loss functions to a more general class of loss functions, matching our running-time lower bounds in these cases as well.
no code implementations • 12 Feb 2021 • Christoph Hertrich, Leon Sering
This paper studies the expressive power of artificial neural networks with rectified linear units.
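To make the notion of expressive power concrete: ReLU networks compute exactly the continuous piecewise linear functions, and a standard gadget computes max(a, b) with one hidden layer of three ReLU units via the identity max(a, b) = relu(a - b) + relu(b) - relu(-b), since relu(b) - relu(-b) = b for every real b. Composing the gadget pairwise gives a network of depth roughly log2(n) for the maximum of n numbers. The sketch below is this textbook construction, not code from the paper; the helper names are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def max2(a, b):
    # one hidden ReLU layer computing max(a, b) exactly:
    # max(a, b) = relu(a - b) + relu(b) - relu(-b)
    return relu(a - b) + relu(b) - relu(-b)

def maxn(xs):
    # tournament of pairwise maxima: the resulting ReLU network
    # has depth about ceil(log2(len(xs)))
    xs = list(xs)
    while len(xs) > 1:
        nxt = [max2(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # carry the odd element to the next round
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]

# usage: agrees with the exact maximum on arbitrary inputs
print(maxn([3.0, -1.0, 7.5, 2.0]))  # 7.5
```

The logarithmic depth of this construction is no accident: depth lower bounds for computing maxima are exactly the kind of question driving the depth-separation results in these papers.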
1 code implementation • 28 May 2020 • Christoph Hertrich, Martin Skutella
The development of a satisfying and rigorous mathematical understanding of the performance of neural networks is a major challenge in artificial intelligence.