Search Results for author: Pierre Wolinski

Found 7 papers, 3 papers with code

Adapting Newton's Method to Neural Networks through a Summary of Higher-Order Derivatives

1 code implementation • 6 Dec 2023 • Pierre Wolinski

We consider a gradient-based optimization method applied to a function $\mathcal{L}$ of a vector of variables $\boldsymbol{\theta}$, in the case where $\boldsymbol{\theta}$ is represented as a tuple of tensors $(\mathbf{T}_1, \cdots, \mathbf{T}_S)$.

Second-order methods
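To make the per-tensor setting concrete, here is a minimal, hypothetical PyTorch sketch in which each tensor of the tuple $(\mathbf{T}_1, \cdots, \mathbf{T}_S)$ receives its own Newton-like scalar step size from a Hessian-vector product. This illustrates the setting described in the abstract, not the paper's actual summary-of-higher-order-derivatives method; the function name and the damping parameter are assumptions for the example.

```python
import torch

# Hypothetical illustration (NOT the paper's algorithm): theta is stored
# as a tuple of tensors (T_1, ..., T_S); each tensor T_s gets its own
# Newton-like scalar step size alpha_s = <g_s, g_s> / <g_s, (Hg)_s>,
# where (Hg)_s is the T_s-block of one Hessian-vector product H g.
def per_tensor_newton_like_step(loss_fn, tensors, damping=1e-3):
    loss = loss_fn(tensors)
    grads = torch.autograd.grad(loss, tensors, create_graph=True)
    # One Hessian-vector product along the gradient, split per tensor.
    hg = torch.autograd.grad(grads, tensors, grad_outputs=grads)
    with torch.no_grad():
        for t, g, h in zip(tensors, grads, hg):
            curv = (g * h).sum() + damping * (g * g).sum()
            alpha = (g * g).sum() / curv
            t -= alpha * g

# Toy usage: two tensors, separable quadratic loss (one step solves it).
theta = (torch.randn(3, requires_grad=True), torch.randn(2, requires_grad=True))
loss_fn = lambda ts: sum((t ** 2).sum() for t in ts)
per_tensor_newton_like_step(loss_fn, theta)
```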

Efficient Neural Networks for Tiny Machine Learning: A Comprehensive Review

no code implementations • 20 Nov 2023 • Minh Tri Lê, Pierre Wolinski, Julyan Arbel

It then explores MEMS-based applications on ultra-low power MCUs, highlighting their potential for enabling TinyML on resource-constrained devices.

Model Compression • Quantization

Rethinking Gauss-Newton for learning over-parameterized models

no code implementations • NeurIPS 2023 • Michael Arbel, Romain Menegaux, Pierre Wolinski

This work studies the global convergence and implicit bias of the Gauss-Newton (GN) method when optimizing over-parameterized one-hidden-layer networks in the mean-field regime.
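For background, the classical Gauss-Newton update on a least-squares loss reads as follows; this is the standard textbook form with optional damping, not the paper's mean-field formulation.

```latex
% Classical Gauss-Newton step for L(\theta) = \tfrac{1}{2}\|r(\theta)\|^2,
% with Jacobian J_t = \partial r / \partial \theta at \theta_t and
% optional Levenberg-Marquardt damping \lambda \ge 0.
\[
  \theta_{t+1} = \theta_t - \bigl(J_t^\top J_t + \lambda I\bigr)^{-1} J_t^\top r(\theta_t)
\]
```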

Gaussian Pre-Activations in Neural Networks: Myth or Reality?

1 code implementation • 24 May 2022 • Pierre Wolinski, Julyan Arbel

The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs.
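A minimal numerical experiment in the spirit of the title's question, assuming a plain tanh MLP with i.i.d. Gaussian weights (an assumption made for this sketch, not the paper's exact setting): propagate inputs through a freshly initialized network and track how Gaussian the pre-activations stay with depth.

```python
import torch

# Sketch (assumes a plain tanh MLP with i.i.d. Gaussian weights): push
# many inputs through a freshly initialized network and inspect the
# empirical moments of the pre-activations at each depth.
torch.manual_seed(0)
width, depth, n_samples = 512, 10, 4096
x = torch.randn(n_samples, width)
for layer in range(depth):
    w = torch.randn(width, width) / width ** 0.5  # weight variance 1/width
    pre = x @ w                                   # pre-activations
    kurt = (((pre - pre.mean()) ** 4).mean() / pre.var() ** 2).item()
    print(f"layer {layer}: var={pre.var().item():.3f}, kurtosis={kurt:.2f}")
    x = torch.tanh(pre)
# A Gaussian has kurtosis 3; systematic drift away from 3 with depth
# suggests the pre-activations are not exactly Gaussian beyond layer 1.
```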

An Equivalence between Bayesian Priors and Penalties in Variational Inference

no code implementations • 1 Feb 2020 • Pierre Wolinski, Guillaume Charpiat, Yann Ollivier

We fully characterize the regularizers that can arise according to this procedure, and provide a systematic way to compute the prior corresponding to a given penalty.

Variational Inference
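As background for this correspondence (standard variational-inference identities only; the paper's full characterization is more general), the KL term of the negative ELBO acts as a penalty on $q$, and a penalty of the right form can conversely be read as an implicit prior:

```latex
% Negative ELBO with prior \pi: the KL term penalizes q.
\[
  -\mathrm{ELBO}(q) = -\mathbb{E}_{q(\theta)}\bigl[\log p(\mathcal{D} \mid \theta)\bigr]
                      + \mathrm{KL}(q \,\|\, \pi).
\]
% Conversely, if \pi(\theta) \propto e^{-r(\theta)}, then
\[
  \mathrm{KL}(q \,\|\, \pi) = \mathbb{E}_{q}[r(\theta)] - H(q) + \mathrm{const},
\]
% so, heuristically, a penalty of the form E_q[r(theta)] - H(q)
% corresponds to the prior \pi \propto e^{-r}.
```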

Learning with Random Learning Rates

1 code implementation • 2 Oct 2018 • Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.
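A hedged sketch of the idea the title suggests, assuming one log-uniform learning rate per scalar weight (a simplification for this example; the paper's actual procedure may assign rates differently):

```python
import torch

# Sketch (a simplification, not necessarily the paper's procedure):
# sample one learning rate per scalar weight, log-uniformly over many
# orders of magnitude, then run plain SGD with those fixed rates.
def make_random_lrs(params, lr_min=1e-5, lr_max=1e1):
    return [lr_min * (lr_max / lr_min) ** torch.rand_like(p) for p in params]

def sgd_step(params, lrs):
    with torch.no_grad():
        for p, lr in zip(params, lrs):
            p -= lr * p.grad   # elementwise: each weight has its own rate
            p.grad = None

# Toy usage on a quadratic objective.
w = torch.randn(10, requires_grad=True)
lrs = make_random_lrs([w])
(w ** 2).sum().backward()
sgd_step([w], lrs)
```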

Learning with Random Learning Rates.

no code implementations • 27 Sep 2018 • Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.
