Search Results for author: Pascal Bianchi

Found 9 papers, 0 papers with code

Stochastic Subgradient Descent Escapes Active Strict Saddles on Weakly Convex Functions

no code implementations • 4 Aug 2021 • Pascal Bianchi, Walid Hachem, Sholom Schechtman

Consequently, generically in the class of definable weakly convex functions, SGD converges to a local minimizer; a toy sketch of such an iteration follows below.

Stochastic Optimization
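
A minimal sketch of the stochastic subgradient iteration on a toy weakly convex function with a strict saddle at the origin; the function, step sizes, and noise level are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# f(x, y) = x^2 + |y^2 - 1|: weakly convex, strict saddle at (0, 0),
# minimizers on the nonsmooth set y = +/-1 (illustrative example).
def subgrad(v):
    x, y = v
    return np.array([2.0 * x, 2.0 * y * np.sign(y * y - 1.0)])

v = np.array([0.0, 1e-3])                 # start right next to the saddle
for k in range(1, 50001):
    gamma = 0.1 / np.sqrt(k)              # diminishing steps whose sum diverges
    g = subgrad(v) + 0.1 * rng.standard_normal(2)   # stochastic subgradient
    v = v - gamma * g
print(v)                                  # lands near (0, +/-1), not at the saddle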

Analysis of a Target-Based Actor-Critic Algorithm with Linear Function Approximation

no code implementations • 14 Jun 2021 • Anas Barakat, Pascal Bianchi, Julien Lehmann

Actor-critic methods that integrate target networks have shown remarkable empirical success in deep reinforcement learning.
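
A toy sketch of the target-network idea: a linear (one-hot) critic bootstraps its TD target from a periodically copied set of target weights, while a softmax actor follows the critic's TD error. The random MDP, step sizes, and update period below are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.standard_normal((nS, nA))               # reward table

theta = np.zeros((nS, nA))                      # softmax policy parameters (actor)
w = np.zeros(nS)                                # linear critic over one-hot features
w_tgt = w.copy()                                # target-network copy of the critic

alpha_w, alpha_th, tgt_period = 0.05, 0.01, 50
s = 0
for t in range(20000):
    logits = theta[s]
    pi = np.exp(logits - logits.max()); pi /= pi.sum()
    a = rng.choice(nA, p=pi)
    s2 = rng.choice(nS, p=P[s, a])
    # TD error bootstraps from the *target* weights, not the current critic
    delta = R[s, a] + gamma * w_tgt[s2] - w[s]
    w[s] += alpha_w * delta                     # critic step (one-hot features)
    grad_log = -pi; grad_log[a] += 1.0          # d log pi(a|s) / d theta[s]
    theta[s] += alpha_th * delta * grad_log     # actor step driven by the TD error
    if (t + 1) % tgt_period == 0:
        w_tgt = w.copy()                        # periodic target update
    s = s2
print("critic values:", np.round(w, 2))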

Conditional independence testing via weighted partial copulas and nearest neighbors

no code implementations • 23 Jun 2020 • Pascal Bianchi, Kevin Elgui, François Portier

This paper introduces the weighted partial copula function for testing conditional independence.
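
A simplified scalar sketch of the idea: k-nearest neighbours in Z estimate the conditional CDFs, the resulting pseudo-observations play the role of a partial copula sample, and a naive permutation test stands in for the paper's weighted statistic. The helper names and all tuning constants are hypothetical.

import numpy as np

def knn_conditional_ranks(x, z, k=20):
    # U_i ~ F(x_i | z_i): fraction of the k nearest z-neighbours with x-value <= x_i
    n = len(x)
    u = np.empty(n)
    for i in range(n):
        idx = np.argsort(np.abs(z - z[i]))[1:k + 1]   # nearest neighbours in Z, excluding i
        u[i] = np.mean(x[idx] <= x[i])
    return u

def ci_test(x, y, z, k=20, n_perm=200, seed=0):
    rng = np.random.default_rng(seed)
    u = knn_conditional_ranks(x, z, k)
    v = knn_conditional_ranks(y, z, k)
    stat = abs(np.corrcoef(u, v)[0, 1])               # dependence left after conditioning
    null = [abs(np.corrcoef(u, rng.permutation(v))[0, 1]) for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= stat))     # permutation p-value

rng = np.random.default_rng(1)
z = rng.standard_normal(500)
x = z + 0.5 * rng.standard_normal(500)                # X and Y depend on Z only
y = z + 0.5 * rng.standard_normal(500)
print("p-value (H0: X indep. of Y given Z holds):", ci_test(x, y, z))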

Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Non Convex Optimization

no code implementations • 18 Nov 2019 • Anas Barakat, Pascal Bianchi

In this work, we study the ADAM algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate.
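
A sketch of an ADAM-style iteration in which the per-coordinate adaptive factor is explicitly capped, one crude way to realize a boundedness assumption on the adaptive learning rate; the cap max_scale and the test function are illustrative, not the paper's construction.

import numpy as np

def adam_bounded(grad, x0, steps=2000, lr=0.05, b1=0.9, b2=0.999, eps=1e-8, max_scale=1e3):
    # max_scale caps 1 / (sqrt(v_hat) + eps), keeping the adaptive step bounded
    x = np.asarray(x0, dtype=float).copy()
    m, v = np.zeros_like(x), np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g               # first-moment (momentum) estimate
        v = b2 * v + (1 - b2) * g * g           # second-moment estimate
        m_hat = m / (1 - b1 ** t)               # bias corrections
        v_hat = v / (1 - b2 ** t)
        scale = np.minimum(1.0 / (np.sqrt(v_hat) + eps), max_scale)
        x -= lr * scale * m_hat
    return x

f_grad = lambda x: 2 * x + 3 * np.cos(3 * x)    # gradient of x^2 + sin(3x), smooth nonconvex
print(adam_bounded(f_grad, np.array([2.0, -1.5])))   # ends near a stationary point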

Convergence Analysis of a Momentum Algorithm with Adaptive Step Size for Nonconvex Optimization

no code implementations • 25 Sep 2019 • Anas Barakat, Pascal Bianchi

In this work, we study the ADAM algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate.

A Fully Stochastic Primal-Dual Algorithm

no code implementations • 23 Jan 2019 • Pascal Bianchi, Walid Hachem, Adil Salim

The proposed algorithm is proven to converge to a saddle point of the Lagrangian function.
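
A toy Arrow-Hurwicz-style stochastic primal-dual iteration for an equality-constrained quadratic program, showing the noisy primal-dual pair drifting toward a saddle point of the Lagrangian; this simplified scheme omits the proximal and nonsmooth structure handled in the paper, and the problem data and step sizes are made up.

import numpy as np

rng = np.random.default_rng(0)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])  # constraint: x1 + x2 = 1
Q = np.diag([1.0, 2.0])                         # objective f(x) = 0.5 * x^T Q x

x, lam = np.zeros(2), np.zeros(1)
for k in range(1, 100001):
    gamma = 0.5 / np.sqrt(k)                    # diminishing step size
    gx = Q @ x + A.T @ lam + 0.05 * rng.standard_normal(2)   # noisy grad_x L(x, lam)
    gl = A @ x - b + 0.05 * rng.standard_normal(1)           # noisy grad_lam L(x, lam)
    x, lam = x - gamma * gx, lam + gamma * gl   # descent in x, ascent in lam
print("x =", np.round(x, 3), " (exact solution: [2/3, 1/3])")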

Convergence and Dynamical Behavior of the ADAM Algorithm for Non-Convex Stochastic Optimization

no code implementations • 4 Oct 2018 • Anas Barakat, Pascal Bianchi

In the constant stepsize regime, assuming that the objective function is differentiable and non-convex, we establish the long-run convergence of the iterates to a stationary point under a stability condition.

Stochastic Optimization

A Constant Step Stochastic Douglas-Rachford Algorithm with Application to Non Separable Regularizations

no code implementations • 3 Apr 2018 • Adil Salim, Pascal Bianchi, Walid Hachem

The Douglas-Rachford algorithm converges to a minimizer of a sum of two convex functions.
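
For reference, a deterministic sketch of that basic Douglas-Rachford iteration (the paper's stochastic, constant-step variant replaces the proximity operators with random realizations); here f is a simple quadratic and g the l1 norm, both illustrative choices.

import numpy as np

def prox_quad(y, a, gamma):        # prox of f(x) = 0.5 * ||x - a||^2
    return (y + gamma * a) / (1.0 + gamma)

def prox_l1(y, gamma):             # prox of g(x) = ||x||_1 (soft-thresholding)
    return np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)

a = np.array([3.0, 0.2, -1.0])
gamma, y = 1.0, np.zeros(3)
for _ in range(200):               # Douglas-Rachford: prox_f, reflect, prox_g, average
    x = prox_quad(y, a, gamma)
    z = prox_l1(2 * x - y, gamma)
    y = y + z - x
print(x)                           # minimizer of f + g, i.e. soft-threshold(a, 1) = [2, 0, 0]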

Snake: a Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs

no code implementations • 19 Dec 2017 • Adil Salim, Pascal Bianchi, Walid Hachem

When applying the proximal gradient algorithm to this problem, efficient methods exist for implementing the proximity operator (the backward step) in the special case where the graph is a simple path without loops.
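
A sketch of that backward step on a path graph: the proximity operator of the total variation penalty is computed by projected gradient on the dual (a simple generic method; the affordable exact solvers alluded to above are taut-string/dynamic-programming algorithms), wrapped in a proximal gradient loop on a toy denoising problem. All constants are illustrative.

import numpy as np

def prox_tv_path(y, lam, iters=300, tau=0.25):
    # prox of lam * sum_i |y[i+1] - y[i]| via projected gradient on the dual;
    # tau <= 1/4 since the difference operator D satisfies ||D||^2 <= 4
    p = np.zeros(len(y) - 1)                       # one dual variable per edge of the path
    for _ in range(iters):
        x = y + np.diff(p, prepend=0, append=0)    # x = y - D^T p
        p = np.clip(p + tau * np.diff(x), -lam, lam)
    return y + np.diff(p, prepend=0, append=0)

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, 1.0], 30)             # piecewise-constant signal on a path
b = truth + 0.3 * rng.standard_normal(truth.size)
x, step, lam = np.zeros_like(b), 1.0, 0.5
for _ in range(50):                                # proximal gradient: forward step, then prox
    x = prox_tv_path(x - step * (x - b), step * lam)   # grad of 0.5*||x - b||^2 is x - b
print(np.round(x[:3], 2), np.round(x[30:33], 2), np.round(x[60:63], 2))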
