1 code implementation • 30 Sep 2023 • Sjoerd Dirksen, Patrick Finke, Martin Genzel
In practice, deep neural networks can often interpolate their training data with ease.
no code implementations • NeurIPS 2023 • Sjoerd Dirksen, Martin Genzel, Laurent Jacques, Alexander Stollenwerk
Neural networks with random weights appear in a variety of machine learning applications, most prominently as the initialization of many deep learning algorithms and as a computationally cheap alternative to fully learned neural networks.
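A random-weight network of the kind described here can be illustrated with a minimal sketch (not the paper's construction): a single hidden ReLU layer whose i.i.d. Gaussian weights are drawn once and then frozen, acting as a fixed random feature map. The dimensions `d` and `m` are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 256                                   # example input dim and hidden width
W = rng.standard_normal((m, d)) / np.sqrt(d)    # i.i.d. Gaussian weights, 1/sqrt(d) scaling
b = rng.standard_normal(m)                      # random biases

def random_features(x):
    """One hidden ReLU layer with frozen random weights (no training)."""
    return np.maximum(W @ x + b, 0.0)

x = rng.standard_normal(d)
z = random_features(x)
print(z.shape)  # (256,)
```

Such a map is computationally cheap because only a linear readout on top of `z` would ever be trained.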
no code implementations • 8 Jul 2020 • Simon Veldkamp, Kirien Whan, Sjoerd Dirksen, Maurice Schmeits
Current statistical post-processing methods for probabilistic weather forecasting are not capable of using full spatial patterns from the numerical weather prediction (NWP) model.
no code implementations • 17 Feb 2014 • Sjoerd Dirksen
We present a theory for Euclidean dimensionality reduction with subgaussian matrices which unifies several restricted isometry property and Johnson-Lindenstrauss type results obtained earlier for specific data sets.
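The flavor of result unified here can be demonstrated with a small numerical sketch (illustrative only, not the paper's theory): a Gaussian matrix, the simplest subgaussian example, scaled by $1/\sqrt{m}$ approximately preserves the Euclidean distances of a point set in the Johnson-Lindenstrauss sense. All sizes below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, num_points = 1000, 200, 20
X = rng.standard_normal((num_points, n))       # data points in R^n
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian (subgaussian) embedding matrix

Y = X @ A.T                                    # embedded points in R^m
# Pairwise distances are approximately preserved with high probability.
d_orig = np.linalg.norm(X[0] - X[1])
d_emb = np.linalg.norm(Y[0] - Y[1])
print(abs(d_emb / d_orig - 1.0))               # small relative distortion
```

The restricted isometry property results mentioned in the abstract concern the same kind of near-isometry statement, but uniformly over all sparse vectors rather than a finite point set.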
no code implementations • 11 Nov 2013 • Jean Bourgain, Sjoerd Dirksen, Jelani Nelson
Let $\Phi\in\mathbb{R}^{m\times n}$ be a sparse Johnson-Lindenstrauss transform [KN14] with $s$ non-zeroes per column.
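A matrix of the stated form can be built in a few lines; the following is a hypothetical sketch in the spirit of sparse JL constructions such as [KN14], where each column of $\Phi$ has exactly $s$ nonzero entries equal to $\pm 1/\sqrt{s}$ at uniformly random rows. The sizes `n`, `m`, `s` are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, s = 500, 64, 8                 # example ambient dim, sketch dim, sparsity

Phi = np.zeros((m, n))
for j in range(n):
    rows = rng.choice(m, size=s, replace=False)   # s distinct rows per column
    signs = rng.choice([-1.0, 1.0], size=s)       # independent random signs
    Phi[rows, j] = signs / np.sqrt(s)             # entries +-1/sqrt(s)

x = rng.standard_normal(n)
y = Phi @ x                           # sketch of x, computable in O(s*n) time
print(np.count_nonzero(Phi[:, 0]))    # 8 nonzeros in the first column
```

By construction every column has unit Euclidean norm, and applying $\Phi$ to a vector costs $O(sn)$ operations rather than $O(mn)$, which is the point of the sparse construction.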