no code implementations • 1 Dec 2021 • Christian Hirsch, Matthias Neumann, Volker Schmidt
A law of large numbers for the empirical distribution of parameters of a one-layer artificial neural network with sparse connectivity is derived as the number of neurons and the number of stochastic gradient descent training iterations increase simultaneously.
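As a rough illustration of this scaling regime, the toy sketch below runs SGD on a sparse one-layer network whose training time grows with the number of neurons, and records the empirical distribution of the trained weights. All model choices (tanh activation, the Bernoulli sparsity mask, the squared loss, the mean-field step-size scaling, and every numeric constant) are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500          # number of neurons (hypothetical choice)
sparsity = 0.1   # fraction of active connections (hypothetical)
lr = 0.05        # base SGD step size
steps = 5 * n    # iterations grow with n, mimicking the simultaneous limit

# One-layer network f(x) = (1/n) * sum_i a_i * m_i * tanh(w_i * x),
# where m is a fixed sparse connectivity mask.
mask = rng.random(n) < sparsity
w = rng.normal(size=n)
a = rng.normal(size=n)

def target(x):
    return np.sin(x)  # hypothetical regression target

for t in range(steps):
    x = rng.uniform(-2.0, 2.0)
    act = np.tanh(w * x)
    err = (a * mask * act).mean() - target(x)  # squared-loss residual
    # Gradients of 0.5 * err**2 w.r.t. the active parameters
    grad_a = err * mask * act / n
    grad_w = err * a * mask * (1.0 - act**2) * x / n
    # Mean-field scaling: multiply the step by n so each parameter
    # receives an O(1) update per iteration.
    a -= lr * n * grad_a
    w -= lr * n * grad_w

# Empirical distribution of the active weights after training
hist, edges = np.histogram(w[mask], bins=20, density=True)
print(hist)
```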
no code implementations • 4 Mar 2021 • Christian Hirsch, Benedikt Jahnel, Elie Cali
We study the effects of mobility on two crucial characteristics in multi-scale dynamic networks: percolation and connection times.
Probability • Physics and Society • Primary 60K35, Secondary 60F10, 82C22
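A minimal simulation sketch of the kind of quantity involved: two tagged nodes in a toy mobile geometric graph on the unit torus, with the fraction of time they lie in the same connected component as a crude proxy for connection times. The Brownian mobility model, the connection radius, and all parameters below are hypothetical illustrations, not the paper's multi-scale model.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, sigma, T = 50, 0.15, 0.02, 200  # nodes, radius, step size, time steps
pos = rng.random((n, 2))              # initial positions on the unit torus

def connected(pos, i, j, r):
    # Adjacency of the geometric graph, using torus (wrap-around) distance
    delta = np.abs(pos[:, None, :] - pos[None, :, :])
    delta = np.minimum(delta, 1.0 - delta)
    adj = np.linalg.norm(delta, axis=2) < r
    # BFS to test whether i and j share a connected component
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        if u == j:
            return True
        for v in np.flatnonzero(adj[u]):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

connection_time = 0
for t in range(T):
    pos = (pos + sigma * rng.normal(size=pos.shape)) % 1.0  # mobility step
    connection_time += connected(pos, 0, 1, r)

print("fraction of time connected:", connection_time / T)
```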
no code implementations • 21 Dec 2016 • Anna Lukina, Lukas Esterle, Christian Hirsch, Ezio Bartocci, Junxing Yang, Ashish Tiwari, Scott A. Smolka, Radu Grosu
Inspired by Importance Splitting, the length of the horizon and the number of particles are chosen such that at least one particle reaches a next-level state, that is, a state where the cost decreases by a required delta from the previous-level state.
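A hedged sketch of the level-based particle search described above: candidate trajectories are simulated over a horizon, and the horizon length and particle count grow until at least one particle reaches a next-level state, i.e. lowers the cost by the required delta. The one-dimensional dynamics, the cost function, and the growth schedule below are hypothetical stand-ins, not the paper's plan-synthesis setting.

```python
import random

def step(state, action):
    # Hypothetical one-dimensional dynamics
    return state + action

def cost(state):
    # Hypothetical cost: distance to a target value
    return abs(state - 10.0)

def next_level(state, delta, h_max=8, max_particles=256):
    """Grow the horizon h and the particle count until at least one
    particle reaches a next-level state (cost lowered by delta)."""
    level, h, particles = cost(state), 1, 16
    while h <= h_max:
        best_state, best_cost = state, level
        for _ in range(particles):
            s = state
            for _ in range(h):
                s = step(s, random.uniform(-1.0, 1.0))
            c = cost(s)
            if c < best_cost:
                best_state, best_cost = s, c
        if level - best_cost >= delta:   # a particle cleared the level
            return best_state, best_cost
        h += 1                           # otherwise lengthen the horizon
        particles = min(2 * particles, max_particles)
    return state, level                  # no next-level state found

random.seed(0)
state, c = 0.0, cost(0.0)
while c > 0.5:
    new_state, new_cost = next_level(state, delta=0.5)
    if new_cost >= c:  # no progress within the search limits
        break
    state, c = new_state, new_cost
    print(f"reached level with cost {c:.2f}")
```

Because each accepted level lowers the cost by at least delta, the outer loop terminates after finitely many levels or stops when the search limits are exhausted.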