no code implementations • 5 Feb 2024 • Armand Foucault, Franck Mamalet, François Malgouyres
Orthogonal recurrent neural networks (ORNNs) are an appealing option for learning tasks involving time series with long-term dependencies, thanks to their simplicity and computational stability.
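The stability argument behind ORNNs can be illustrated numerically: a minimal sketch (ours, not from the paper) showing that an orthogonal recurrence matrix, built here from a QR decomposition, exactly preserves the hidden-state norm over many steps — the property that keeps gradients from exploding or vanishing through time.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build an orthogonal recurrence matrix W (Q factor of a QR decomposition)
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))

h = rng.standard_normal(64)   # hidden state
norms = []
for _ in range(1000):
    h = W @ h                 # linear part of the recurrence, W orthogonal
    norms.append(np.linalg.norm(h))
# Because W is orthogonal, ||W h|| = ||h||: the norm stays constant.
```

With a generic (non-orthogonal) recurrence matrix, the same loop would typically blow up or decay geometrically.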
no code implementations • 31 Jan 2023 • Mimoun Mohamed, François Malgouyres, Valentin Emiya, Caroline Chaux
We introduce a new sparsity-promoting algorithm, the Support Exploration Algorithm (SEA), and analyze it in the context of support recovery/model selection problems. The algorithm can be interpreted as an instance of the straight-through estimator (STE) applied to the resolution of a sparse linear inverse problem.
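The STE idea in this setting can be sketched as follows (an illustrative toy loop under our own assumptions, not SEA itself): the forward pass uses a hard-thresholded k-sparse iterate, while the gradient of the resulting loss is applied "straight through" to an underlying dense exploration variable.

```python
import numpy as np

def ste_sparse_recovery(A, y, k, lr=0.01, iters=2000):
    """Toy straight-through-estimator loop for the sparse linear
    inverse problem  min ||A x - y||^2  s.t.  ||x||_0 <= k.
    Forward pass: hard-threshold x to its k largest-magnitude entries.
    Backward pass: gradient at the sparse point updates the dense x."""
    m, n = A.shape
    x = np.zeros(n)                        # dense exploration variable
    for _ in range(iters):
        xs = np.zeros(n)                   # forward: k-sparse iterate
        idx = np.argsort(np.abs(x))[-k:]
        xs[idx] = x[idx]
        grad = A.T @ (A @ xs - y)          # gradient at the sparse point
        x -= lr * grad                     # straight-through update
    support = np.argsort(np.abs(x))[-k:]
    return np.sort(support)
```

On a well-conditioned problem this recovers the true support; the point of the abstract's interpretation is that such STE dynamics can be analyzed as a support-exploration scheme.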
no code implementations • 15 Jun 2022 • Joachim Bona-Pellissier, François Malgouyres, François Bachoc
Is a sample rich enough to determine, at least locally, the parameters of a neural network?
no code implementations • 9 Jun 2022 • El Mehdi Achour, Armand Foucault, Sébastien Gerchinovitz, François Malgouyres
Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$.
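The quantity being lower-bounded can be written (in our notation, which may differ from the paper's) as the worst-case best approximation error of $F$ by $G$:

```latex
\sup_{f \in F} \; \inf_{g \in G} \; \|f - g\|_{L^p(\mu)}
= \sup_{f \in F} \; \inf_{g \in G}
  \left( \int |f - g|^p \, d\mu \right)^{1/p},
\qquad p \ge 1 .
```

A lower bound on this quantity says that some function of $F$ resists approximation by every function of $G$, regardless of the probability measure $\mu$ chosen.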
no code implementations • 24 Dec 2021 • Joachim Bona-Pellissier, François Bachoc, François Malgouyres
The possibility for one to recover the parameters (weights and biases) of a neural network thanks to the knowledge of its function on a subset of the input space can be, depending on the situation, a curse or a blessing.
no code implementations • 12 Aug 2021 • El Mehdi Achour, François Malgouyres, Franck Mamalet
Imposing orthogonality on the layers of neural networks is known to facilitate learning by limiting exploding/vanishing gradients, decorrelating the features, and improving robustness.
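One standard way to impose such a constraint (a generic sketch, not necessarily the construction studied in the paper) is to project a weight matrix onto the nearest matrix with orthonormal columns, which is the polar factor of its SVD:

```python
import numpy as np

def project_orthogonal(W):
    """Return the nearest matrix with orthonormal columns to W in
    Frobenius norm: the polar factor U @ Vt of the SVD
    W = U @ diag(S) @ Vt (singular values replaced by ones)."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 4))   # arbitrary layer weights
Q = project_orthogonal(W)          # Q.T @ Q is the 4x4 identity
```

Applying this projection after each optimizer step is one simple way to keep a layer on the orthogonal manifold during training.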
no code implementations • 28 Jul 2021 • El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz
We characterize, among all critical points, which are global minimizers, strict saddle points, and non-strict saddle points.
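For a twice-differentiable loss $L$, the distinction between the three types of critical points uses the standard definitions (recalled here for context, in our notation):

```latex
\nabla L(\theta) = 0 \ \text{and} \
\lambda_{\min}\!\big(\nabla^2 L(\theta)\big) < 0
\;\Longrightarrow\; \theta \ \text{is a strict saddle point};
```

a non-strict saddle point is a critical point that is not a local minimizer yet has $\lambda_{\min}\!\big(\nabla^2 L(\theta)\big) = 0$, so second-order information alone cannot detect the descent direction.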
no code implementations • 26 Jan 2021 • Adrien Gauffriau, François Malgouyres, Mélanie Ducoffe
Experiments on real data show that the method makes it possible to use the surrogate function in embedded systems where an underestimation is critical and computing the reference function requires too many resources.
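The underestimation requirement can be stated simply: the cheap surrogate $g$ must never exceed the expensive reference $f$ on the domain of use. A toy sketch checking this property on a sample grid (both functions are hypothetical stand-ins, not the paper's models):

```python
import numpy as np

def reference(x):
    # Expensive reference function we want to avoid evaluating at run time
    return np.sin(x) + 0.5 * x**2

def surrogate(x):
    # Cheap approximation shifted down by a safety margin, so that
    # surrogate(x) <= reference(x) holds on the sampled domain
    return np.sin(x) + 0.5 * x**2 - 0.05

xs = np.linspace(-3.0, 3.0, 601)
underestimates = bool(np.all(surrogate(xs) <= reference(xs)))
```

In the embedded setting described above, this one-sided guarantee is what makes the surrogate safe to deploy in place of the reference.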
no code implementations • 23 Mar 2017 • François Malgouyres, Joseph Landsberg
In this paper, we provide necessary and sufficient conditions on the network topology under which a stability property holds.