1 code implementation • 6 Dec 2023 • Pierre Wolinski

We consider a gradient-based optimization method applied to a function $\mathcal{L}$ of a vector of variables $\boldsymbol{\theta}$, in the case where $\boldsymbol{\theta}$ is represented as a tuple of tensors $(\mathbf{T}_1, \cdots, \mathbf{T}_S)$.
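As a purely illustrative sketch of this setting (the loss and shapes below are hypothetical, not from the paper), a gradient step can be applied tensor-by-tensor to a tuple of parameters $(\mathbf{T}_1, \cdots, \mathbf{T}_S)$:

```python
import numpy as np

def grad_step(tensors, grads, lr=0.1):
    """One gradient-descent step applied elementwise to a tuple of tensors."""
    return tuple(t - lr * g for t, g in zip(tensors, grads))

# Toy loss L(theta) = ||T1||^2 + ||T2||^2, whose gradient w.r.t. each tensor is 2*T
theta = (np.ones((2, 3)), np.ones(4))
for _ in range(100):
    grads = tuple(2.0 * t for t in theta)
    theta = grad_step(theta, grads)

# All tensors shrink toward zero under these dynamics
```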

no code implementations • 20 Nov 2023 • Minh Tri Lê, Pierre Wolinski, Julyan Arbel

It then explores MEMS-based applications on ultra-low power MCUs, highlighting their potential for enabling TinyML on resource-constrained devices.

no code implementations • NeurIPS 2023 • Michael Arbel, Romain Menegaux, Pierre Wolinski

This work studies the global convergence and implicit bias of the Gauss-Newton (GN) method when optimizing over-parameterized one-hidden-layer networks in the mean-field regime.
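As a reminder of the classical method studied here (a textbook Gauss-Newton iteration on a toy least-squares problem, independent of the paper's mean-field analysis), each step solves the normal equations $(J^\top J)\,\delta = J^\top r$:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta, n_iter=50):
    """Classic Gauss-Newton iterations for nonlinear least squares."""
    for _ in range(n_iter):
        r = residual(theta)          # residual vector r(theta)
        J = jacobian(theta)          # Jacobian dr/dtheta
        # Solve the normal equations (J^T J) delta = J^T r
        delta = np.linalg.solve(J.T @ J, J.T @ r)
        theta = theta - delta
    return theta

# Toy problem: fit y = exp(a * x) with true parameter a = 0.5
x = np.linspace(0.1, 1.0, 10)
y = np.exp(0.5 * x)
residual = lambda a: np.exp(a * x) - y              # r_i(a)
jacobian = lambda a: (x * np.exp(a * x))[:, None]   # dr_i/da, shape (10, 1)

a = gauss_newton(residual, jacobian, np.array([0.0]))
```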

1 code implementation • 24 May 2022 • Pierre Wolinski, Julyan Arbel

The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs.

no code implementations • 1 Feb 2020 • Pierre Wolinski, Guillaume Charpiat, Yann Ollivier

We fully characterize the regularizers that can arise according to this procedure, and provide a systematic way to compute the prior corresponding to a given penalty.
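For intuition only (the standard MAP correspondence, not the paper's full characterization): a penalty $r(\boldsymbol{\theta})$ with weight $\lambda$ corresponds to a prior $p(\boldsymbol{\theta}) \propto \exp(-\lambda\, r(\boldsymbol{\theta}))$, since

```latex
\hat{\boldsymbol{\theta}}
  = \arg\max_{\boldsymbol{\theta}} \,
      \bigl[ \log p(\mathcal{D} \mid \boldsymbol{\theta}) + \log p(\boldsymbol{\theta}) \bigr]
  = \arg\min_{\boldsymbol{\theta}} \,
      \bigl[ \mathcal{L}(\boldsymbol{\theta}) + \lambda\, r(\boldsymbol{\theta}) \bigr].
```

For instance, an $L_2$ penalty corresponds in this sense to a Gaussian prior on $\boldsymbol{\theta}$.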

1 code implementation • 2 Oct 2018 • Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.

no code implementations • 27 Sep 2018 • Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.

Papers With Code is a free resource with all data licensed under CC-BY-SA.