Search Results for author: François Ged

Found 4 papers, 1 paper with code

Matryoshka Policy Gradient for Entropy-Regularized RL: Convergence and Global Optimality

no code implementations · 22 Mar 2023 · François Ged, Maria Han Veiga

A novel Policy Gradient (PG) algorithm, called Matryoshka Policy Gradient (MPG), is introduced and studied in the context of max-entropy reinforcement learning, where an agent aims to maximise entropy bonuses in addition to its cumulative rewards.
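As an illustrative sketch (not code from the paper), the max-entropy objective the abstract refers to augments each step's reward with an entropy bonus on the policy; the function name and the temperature `tau` are assumptions for illustration:

```python
import numpy as np

def entropy_regularized_return(rewards, policies, gamma=0.99, tau=0.1):
    """Discounted return with per-step entropy bonuses, as in max-entropy RL.

    rewards:  list of scalar rewards r_t
    policies: list of action distributions pi(.|s_t) as 1-D probability vectors
    tau:      entropy temperature (illustrative name, not from the paper)
    """
    total = 0.0
    for t, (r, pi) in enumerate(zip(rewards, policies)):
        entropy = -np.sum(pi * np.log(pi + 1e-12))  # H(pi(.|s_t))
        total += gamma**t * (r + tau * entropy)
    return total
```

With `tau = 0` this reduces to the ordinary discounted return; a larger `tau` rewards more stochastic policies.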

Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity

no code implementations · 30 Jun 2021 · Arthur Jacot, François Ged, Berfin Şimşek, Clément Hongler, Franck Gabriel

The dynamics of Deep Linear Networks (DLNs) is dramatically affected by the variance $\sigma^2$ of the parameters at initialization $\theta_0$.
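A minimal sketch of why the initialization variance matters in a deep linear network (an illustration of the setting, not the paper's analysis): the end-to-end matrix is a product of the layer matrices, so its entries shrink like $\sigma^L$ with depth $L$, and small $\sigma$ starts training near the saddle at the origin. The function name and dimensions here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dln_product(sigma, depth=4, width=16):
    """End-to-end matrix W_L ... W_1 of a deep linear network whose
    weights are i.i.d. N(0, sigma^2) at initialization."""
    W = np.eye(width)
    for _ in range(depth):
        W = rng.normal(0.0, sigma, size=(width, width)) @ W
    return W

# The scale of the product shrinks like sigma**depth, so a small
# initialization places theta_0 very close to the zero (saddle) point.
for sigma in (1.0, 0.1):
    print(sigma, np.linalg.norm(dln_product(sigma)))
```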


Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances

1 code implementation · 25 May 2021 · Berfin Şimşek, François Ged, Arthur Jacot, Francesco Spadaro, Clément Hongler, Wulfram Gerstner, Johanni Brea

For a two-layer overparameterized network of width $r^* + h =: m$, we explicitly describe the manifold of global minima: it consists of $T(r^*, m)$ affine subspaces of dimension at least $h$ that are connected to one another.

Order and Chaos: NTK views on DNN Normalization, Checkerboard and Boundary Artifacts

no code implementations · 11 Jul 2019 · Arthur Jacot, Franck Gabriel, François Ged, Clément Hongler

Moving the network into the chaotic regime prevents checkerboard patterns; we propose a graph-based parametrization which eliminates border artifacts; finally, we introduce a new layer-dependent learning rate to improve the convergence of DC-NNs.
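For background on the checkerboard patterns the abstract mentions (this illustrates the standard uneven-overlap explanation for deconvolutional layers, not the paper's NTK analysis), one can count how many kernel taps cover each output position of a 1-D transposed convolution; when the kernel size is not divisible by the stride, the counts alternate. All names and sizes here are illustrative:

```python
import numpy as np

def overlap_counts(n_in=8, kernel=3, stride=2):
    """Per-position coverage of a 1-D transposed convolution: each input
    position i writes the kernel onto output positions i*stride .. i*stride+kernel-1."""
    out = np.zeros(n_in * stride + kernel - stride)
    for i in range(n_in):
        out[i * stride : i * stride + kernel] += 1
    return out

# kernel=3 is not divisible by stride=2, so interior counts alternate 2,1,2,1,...
# -- the uneven coverage that shows up as a checkerboard pattern in images.
print(overlap_counts())
```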
