1 code implementation • 16 Nov 2022 • Karl Hajjar, Lénaïc Chizat
We consider the idealized setting of gradient flow on the population risk for infinitely wide two-layer ReLU neural networks (without bias), and study the effect of symmetries on the learned parameters and predictors.
1 code implementation • 29 Oct 2021 • Karl Hajjar, Lénaïc Chizat, Christophe Giraud
For two-layer neural networks, it has been understood via these asymptotics that the nature of the trained model radically changes depending on the scale of the initial random weights, ranging from a kernel regime (for large initial variance) to a feature learning regime (for small initial variance).
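The kernel-vs-feature-learning contrast described above can be sketched numerically. The snippet below is an illustrative toy, not the paper's exact setting: it uses an output-scaling factor `alpha` as a stand-in for the initialization scale (as in lazy-training analyses, where a large output scale plays the same role as a large initial variance), a made-up regression task, and hypothetical names throughout. It measures how far the hidden-layer weights of a two-layer ReLU network move during training relative to their initial norm: large `alpha` should give tiny relative movement (kernel-like "lazy" behavior), small `alpha` noticeably more (feature learning).

```python
import numpy as np

# Illustrative sketch only (toy task and all names are assumptions, not the
# paper's setup): lazy/kernel vs feature-learning behavior, with an output
# scale `alpha` standing in for the initialization scale.

rng = np.random.default_rng(0)
n, d, m = 32, 3, 100                      # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                       # toy target depending on one direction

W0 = rng.standard_normal((m, d))          # hidden-layer weights at init
a0 = rng.standard_normal(m)               # output weights at init

def train(alpha, steps=2000, base_lr=0.5):
    """Gradient descent on 0.5 * mean((alpha * (f - f_init) - y)^2) for a
    two-layer ReLU net f(x) = (1/m) * sum_j a_j * relu(w_j . x) (no bias).

    The predictor is centered at initialization and scaled by alpha, and the
    step size is scaled by 1/alpha^2, the usual lazy-training convention.
    Returns the relative movement of the hidden-layer weights."""
    W, a = W0.copy(), a0.copy()
    H_init = np.maximum(X @ W.T, 0.0)
    f_init = H_init @ a / m
    lr = base_lr / alpha**2
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)              # (n, m) hidden activations
        pred = H @ a / m
        err = alpha * (pred - f_init) - y         # (n,) residuals
        mask = (X @ W.T > 0).astype(float)        # ReLU derivative
        grad_a = alpha * (H.T @ err) / (n * m)
        grad_W = alpha * (a[:, None] * ((err[:, None] * mask).T @ X)) / (n * m)
        a -= lr * grad_a
        W -= lr * grad_W
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

move_small = train(alpha=1.0)     # small scale: weights move noticeably
move_large = train(alpha=100.0)   # large scale: weights barely move (kernel-like)
print(move_small, move_large)
```

With the step size scaled as `1/alpha**2`, the linearized residual dynamics are the same for every `alpha`, so the parameter movement shrinks roughly like `1/alpha`; the large-`alpha` run stays in a near-kernel regime while the small-`alpha` run moves its features.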
no code implementations • 7 Jun 2021 • Andrea Nestler, Nour Karessli, Karl Hajjar, Rodrigo Weffer, Reza Shirvany
Size- and fit-related returns severely impact 1. the customer experience and satisfaction with online shopping, 2. the environment, through an increased carbon footprint, and 3. the profitability of online fashion platforms.
2 code implementations • 4 Jul 2018 • Alexandre Laterre, Yunguan Fu, Mohamed Khalil Jabri, Alain-Sam Cohen, David Kas, Karl Hajjar, Torbjorn S. Dahl, Amine Kerkeni, Karim Beguir
Results from applying the R2 algorithm to two-dimensional and three-dimensional bin packing problems show that it outperforms generic Monte Carlo tree search, heuristic algorithms, and integer programming solvers.