no code implementations • 4 Aug 2021 • Pascal Bianchi, Walid Hachem, Sholom Schechtman
Consequently, generically in the class of definable weakly convex functions, SGD converges to a local minimizer.
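A minimal numerical sketch of this statement (the function, stepsizes, and noise level below are assumptions chosen for illustration): the definable, weakly convex function f(x) = ||x| - 1| has a spurious Clarke-critical point at 0 and local minimizers at +/-1, and noisy subgradient descent escapes the spurious point.

```python
import numpy as np

# f(x) = | |x| - 1 |: x = 0 is Clarke-critical (the subdifferential [-1, 1]
# contains 0) but NOT a local minimizer; the local minimizers are x = +/- 1.
rng = np.random.default_rng(0)

def subgrad(x):
    # A measurable selection of the Clarke subdifferential of f.
    if x == 0.0:
        return 0.0  # 0 lies in the subdifferential at the spurious point
    return np.sign(np.abs(x) - 1.0) * np.sign(x)

x = 0.0
for k in range(1, 20001):
    step = 1.0 / k                        # Robbins-Monro stepsizes
    noise = 0.1 * rng.standard_normal()   # zero-mean gradient noise
    x -= step * (subgrad(x) + noise)

print(f"final iterate: {x:.3f}")  # typically near +1 or -1, not the spurious point 0
```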
no code implementations • 14 Jun 2021 • Anas Barakat, Pascal Bianchi, Julien Lehmann
Actor-critic methods that integrate target networks have achieved remarkable empirical success in deep reinforcement learning.
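As a generic illustration of the target-network mechanism (a sketch under assumed parameters, not the authors' algorithm): the temporal-difference target is computed with a slowly tracking copy of the critic, updated by Polyak averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
critic = rng.standard_normal(8)   # toy linear critic weights
target = critic.copy()            # target network starts as a copy
tau = 0.005                       # Polyak averaging rate (assumed value)

for step in range(1000):
    grad = rng.standard_normal(8)                 # stand-in for a TD gradient
    critic -= 1e-3 * grad                         # online critic update
    target = (1 - tau) * target + tau * critic    # slow target tracking

print(np.linalg.norm(critic - target))  # target lags the online critic
```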
no code implementations • 23 Jun 2020 • Pascal Bianchi, Kevin Elgui, François Portier
This paper introduces the weighted partial copula function for testing conditional independence.
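The weighted partial copula statistic itself is the paper's contribution; the sketch below shows only the generic first step shared by copula-based tests, the rank transform of each margin to pseudo-observations (data and names are hypothetical).

```python
import numpy as np

def pseudo_observations(x):
    # Empirical CDF transform: rank_i / (n + 1), applied column by column.
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    return ranks / (n + 1.0)

rng = np.random.default_rng(0)
xy = rng.standard_normal((500, 2))
u = pseudo_observations(xy)  # margins are now approximately uniform on (0, 1)
```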
no code implementations • 18 Nov 2019 • Anas Barakat, Pascal Bianchi
In this work, we study the ADAM algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate.
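For reference, a minimal standard-form ADAM sketch; the epsilon in the denominator is one way to keep the effective adaptive stepsize bounded, in the spirit of the assumption mentioned above (the toy objective and hyperparameters are assumptions).

```python
import numpy as np

def adam(grad, x0, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8, iters=5000):
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, iters + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g       # first-moment estimate
        v = b2 * v + (1 - b2) * g * g   # second-moment estimate
        m_hat = m / (1 - b1 ** t)       # bias corrections
        v_hat = v / (1 - b2 ** t)
        # effective stepsize alpha / (sqrt(v_hat) + eps) stays bounded
        x = x - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return x

# smooth nonconvex example: f(x) = x^4 - 3 x^2
print(adam(lambda x: 4 * x**3 - 6 * x, x0=[0.3]))  # near sqrt(3/2) ~ 1.22
```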
no code implementations • 23 Jan 2019 • Pascal Bianchi, Walid Hachem, Adil Salim
The proposed algorithm is proven to converge to a saddle point of the Lagrangian function.
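A toy primal-dual sketch in the Arrow-Hurwicz style (not the paper's exact scheme): alternating primal descent and dual ascent on the Lagrangian of an equality-constrained problem drives the iterates to the saddle point.

```python
import numpy as np

# min f(x) s.t. g(x) = 0 with f(x) = ||x||^2 and g(x) = x1 + x2 - 1,
# via gradient steps on L(x, lam) = f(x) + lam * g(x).
x = np.zeros(2)
lam = 0.0
step = 0.05  # assumed stepsize

for _ in range(2000):
    grad_x = 2 * x + lam * np.array([1.0, 1.0])  # dL/dx
    g = x[0] + x[1] - 1.0                        # constraint value
    x = x - step * grad_x                        # primal descent
    lam = lam + step * g                         # dual ascent

print(x, lam)  # approaches the saddle point x = (0.5, 0.5), lam = -1
```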
no code implementations • 4 Oct 2018 • Anas Barakat, Pascal Bianchi
In the constant stepsize regime, assuming that the objective function is differentiable and non-convex, we establish the long-run convergence of the iterates to a stationary point under a stability condition.
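A hedged sketch of the constant-stepsize regime, using a generic heavy-ball/momentum iteration (assumed parameters, not the authors' exact scheme) on a smooth non-convex toy objective; the gradient at the limit is numerically zero, i.e. the iterate is stationary.

```python
import numpy as np

grad = lambda x: 4 * x**3 - 6 * x   # gradient of f(x) = x^4 - 3 x^2
x, v = 0.3, 0.0
gamma, beta = 0.01, 0.9             # constant stepsize and momentum (assumed)

for _ in range(5000):
    v = beta * v - gamma * grad(x)  # momentum update
    x = x + v

print(f"x = {x:.4f}, |grad| = {abs(grad(x)):.2e}")  # gradient ~ 0 at the limit
```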
no code implementations • 3 Apr 2018 • Adil Salim, Pascal Bianchi, Walid Hachem
The Douglas-Rachford algorithm is a splitting method that converges to a minimizer of the sum of two convex functions.
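A minimal Douglas-Rachford sketch on a toy instance (the splitting into f and g, the stepsize, and the data below are assumptions): minimize ||x||_1 + 0.5 ||x - b||^2, where both proximity operators are available in closed form.

```python
import numpy as np

def prox_f(x, t):  # soft-thresholding: prox of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_g(x, t, b):  # prox of t * 0.5 * ||. - b||^2
    return (x + t * b) / (1.0 + t)

b = np.array([2.0, -0.5, 0.2])
t = 1.0                        # stepsize parameter
y = np.zeros_like(b)

for _ in range(200):
    x = prox_f(y, t)                          # backward step on f
    y = y + prox_g(2 * x - y, t, b) - x       # Douglas-Rachford update

print(prox_f(y, t))  # approaches soft_threshold(b, 1) = [1.0, 0.0, 0.0]
```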
no code implementations • 19 Dec 2017 • Adil Salim, Pascal Bianchi, Walid Hachem
When the proximal gradient algorithm is applied to this problem, efficient methods exist to implement the proximity operator (backward step) in the special case where the graph is a simple path without loops.
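One simple (not the fastest) way to see why the path-graph case is affordable: the proximity operator of the total-variation penalty on a path can be computed through its dual, a box-constrained quadratic, here solved by projected gradient. This is a generic sketch, not necessarily the method the paper has in mind.

```python
import numpy as np

def tv_prox_path(y, lam, inner_iters=3000):
    # prox_{lam * TV}(y) with TV(x) = sum_i |x[i+1] - x[i]| on a path graph.
    # Dual view: prox = y - D^T z*, with z* the box-constrained minimizer
    # of 0.5 * ||y - D^T z||^2 over ||z||_inf <= lam.
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n difference operator
    z = np.zeros(n - 1)              # dual variable
    tau = 0.25                       # safe stepsize: 1 / ||D D^T|| <= 1/4
    for _ in range(inner_iters):
        z = np.clip(z + tau * D @ (y - D.T @ z), -lam, lam)
    return y - D.T @ z

# piecewise-constant signal + noise, denoised by the prox alone
rng = np.random.default_rng(0)
y = np.concatenate([np.ones(20), 3 * np.ones(20)]) + 0.3 * rng.standard_normal(40)
x = tv_prox_path(y, lam=2.0)
print(x[:3], x[-3:])  # two nearly flat plateaus; the jump is slightly shrunk
```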