Search Results for author: Jalal Fadili

Found 15 papers, 2 papers with code

Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems trained with Gradient Descent

no code implementations • 8 Mar 2024 • Nathan Buskulic, Jalal Fadili, Yvain Quéau

Advanced machine learning methods, most prominently neural networks, have become standard tools for solving inverse problems in recent years.
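
As a rough illustration of this line of work, the sketch below trains a small untrained one-hidden-layer generator by plain gradient descent so that its output matches the measurements y = A x*. The architecture, dimensions, and step size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: recover a sparse x* from y = A x* (all sizes assumed)
n, m, k = 64, 32, 256                        # signal dim, measurements, hidden width
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_star = np.zeros(n)
x_star[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_star

# Untrained one-hidden-layer generator g_W(z) = V^T relu(W z) with fixed input z;
# only W is trained, deep-inverse-prior style
z = rng.normal(size=16)
W = rng.normal(size=(k, 16)) / np.sqrt(16)
V = rng.normal(size=(k, n)) / np.sqrt(k)

lr = 0.02                                    # assumed small step size
for _ in range(5000):
    h = np.maximum(W @ z, 0.0)               # hidden activations
    r = A @ (V.T @ h) - y                    # measurement residual
    # gradient of 0.5 * ||A g_W(z) - y||^2 with respect to W
    grad_h = V @ (A.T @ r)
    W -= lr * ((h > 0) * grad_h)[:, None] * z[None, :]

x_hat = V.T @ np.maximum(W @ z, 0.0)
print("measurement misfit:", np.linalg.norm(A @ x_hat - y))
```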

Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems

no code implementations • 21 Sep 2023 • Nathan Buskulic, Jalal Fadili, Yvain Quéau

Neural networks have become a prominent approach to solving inverse problems in recent years.

Convergence Guarantees of Overparametrized Wide Deep Inverse Prior

no code implementations • 20 Mar 2023 • Nathan Buskulic, Yvain Quéau, Jalal Fadili

Neural networks have become a prominent approach to solving inverse problems in recent years.

Provable Phase Retrieval with Mirror Descent

no code implementations • 17 Oct 2022 • Jean-Jacques Godeme, Jalal Fadili, Xavier Buet, Myriam Zerrad, Michel Lequime, Claude Amra

In this paper, we consider the problem of phase retrieval, which consists of recovering an $n$-dimensional real vector from the magnitude of its $m$ linear measurements.

Retrieval
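
For concreteness, here is a hedged sketch of Bregman/mirror descent on the quartic phase-retrieval loss, using the kernel $\psi(x) = \tfrac{1}{4}\|x\|^4 + \tfrac{1}{2}\|x\|^2$ common in the relative-smoothness literature. The spectral initialization, step size, and problem sizes are assumptions; this does not claim to reproduce the paper's exact scheme or guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 256                                # assumed toy sizes
a = rng.normal(size=(m, n))                   # Gaussian measurement vectors
x_star = rng.normal(size=n)
y = (a @ x_star) ** 2                         # magnitude-squared measurements

def grad_f(x):
    # gradient of f(x) = (1/4m) * sum_i (<a_i, x>^2 - y_i)^2
    s = a @ x
    return a.T @ ((s ** 2 - y) * s) / m

def mirror_map_inverse(v):
    # grad psi(x) = (||x||^2 + 1) x; invert via the cubic t^3 + t = ||v||
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return v
    t = nv
    for _ in range(30):                       # Newton's method on the cubic
        t -= (t ** 3 + t - nv) / (3 * t ** 2 + 1)
    return (t / nv) * v

# crude spectral initialization (assumed; the paper's may differ)
M = (a * y[:, None]).T @ a / m
x = np.sqrt(np.mean(y)) * np.linalg.eigh(M)[1][:, -1]

gamma = 1.0 / (3 * np.mean(np.sum(a ** 2, axis=1) ** 2))  # conservative step
for _ in range(10000):
    x = mirror_map_inverse((x @ x + 1.0) * x - gamma * grad_f(x))

print("error up to global sign:",
      min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star)))
```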

A Stochastic Bregman Primal-Dual Splitting Algorithm for Composite Optimization

no code implementations • 22 Dec 2021 • Antonio Silveti-Falls, Cesare Molinari, Jalal Fadili

Under slightly stricter assumptions, we show almost sure weak convergence of the pointwise iterates to a saddle point.

Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms

no code implementations • 24 Dec 2020 • Mahesh Chandra Mukkamala, Jalal Fadili, Peter Ochs

We fix this issue by proposing the MAP property, which generalizes the $L$-smad property and is also valid for a large class of nonconvex nonsmooth composite problems.

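For background, the $L$-smad (smooth adaptable) property referenced above is usually stated as follows in the relative-smoothness literature (quoted as context, not as this paper's exact definition): $f$ is $L$-smad relative to a kernel $h$ if there exists $L > 0$ such that

$$L\,h - f \quad\text{and}\quad L\,h + f \quad \text{are convex on } \operatorname{int}\operatorname{dom} h,$$

which reduces to ordinary $L$-Lipschitz-gradient smoothness when $h = \tfrac{1}{2}\|\cdot\|^2$.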

Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step

no code implementations • 11 May 2020 • Antonio Silveti-Falls, Cesare Molinari, Jalal Fadili

In this paper, we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in the authors' previous paper, denoted ICGALP, which allow for errors in the computation of several important quantities.

Wasserstein Control of Mirror Langevin Monte Carlo

no code implementations • 11 Feb 2020 • Kelvin Shuangjian Zhang, Gabriel Peyré, Jalal Fadili, Marcelo Pereyra

In this paper, we consider Langevin diffusions on a Hessian-type manifold and study a discretization that is closely related to the mirror-descent scheme.
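
As a concrete toy instance, the sketch below runs one common form of the mirror-Langevin discretization with the mirror map $\psi(x) = x \log x - x$ to sample a Gamma target on the positive half-line. The target, mirror map, and step size are assumptions, and this may differ from the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Target: Gamma(a, 1) on (0, inf); potential f(x) = x - (a-1)*log x
a_shape = 3.0
grad_f = lambda x: 1.0 - (a_shape - 1.0) / x

# Mirror map psi(x) = x*log(x) - x on the positive half-line:
# grad psi(x) = log x, (grad psi)^{-1}(y) = exp(y), psi''(x) = 1/x
gamma = 1e-2                                  # assumed step size
x = 3.0 * np.ones(10000)                      # 10k parallel chains
for _ in range(5000):
    xi = rng.normal(size=x.shape)
    # mirror step in dual space, with metric-scaled Gaussian noise
    y = np.log(x) - gamma * grad_f(x) + np.sqrt(2.0 * gamma / x) * xi
    x = np.exp(y)

print("sample mean/var:", x.mean(), x.var(), "  target:", a_shape, a_shape)
```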

Learning CHARME models with neural networks

1 code implementation • 8 Feb 2020 • José G. Gómez García, Jalal Fadili, Christophe Chesneau

In this paper, we consider a model called CHARME (Conditional Heteroscedastic Autoregressive Mixture of Experts), a class of generalized mixtures of nonlinear nonparametric AR-ARCH time series models.

Learning Theory, Time Series +1
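
To make the model class concrete, here is a minimal simulation sketch of a two-expert CHARME-type process; the mixing weights and the AR/volatility experts are assumed toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-expert CHARME-type simulation: each step draws a hidden regime k,
# then Y_t = f_k(Y_{t-1}) + sigma_k(Y_{t-1}) * eps_t  (AR-ARCH-style experts)
T = 1000
pi = np.array([0.6, 0.4])                          # mixing weights (assumed)
f = [lambda y: 0.5 * y,                            # expert means
     lambda y: -0.3 * y + 1.0]
sig = [lambda y: np.sqrt(0.2 + 0.3 * y ** 2),      # ARCH-type volatilities
       lambda y: np.sqrt(0.5 + 0.1 * y ** 2)]

y = np.zeros(T)
for t in range(1, T):
    k = rng.choice(2, p=pi)                        # hidden regime indicator
    y[t] = f[k](y[t - 1]) + sig[k](y[t - 1]) * rng.normal()

print("first samples:", y[:5])
```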

Sensitivity Analysis for Mirror-Stratifiable Convex Functions

1 code implementation • 11 Jul 2017 • Jalal Fadili, Jérôme Malick, Gabriel Peyré

This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms.

Sparse Support Recovery with Non-smooth Loss Functions

no code implementations • NeurIPS 2016 • Kévin Degraux, Gabriel Peyré, Jalal Fadili, Laurent Jacques

More precisely, we focus in detail on the cases of $\ell_1$ and $\ell_\infty$ losses, and contrast them with the usual $\ell_2$ loss. While these losses are routinely used to account for either sparse ($\ell_1$ loss) or uniform ($\ell_\infty$ loss) noise models, a theoretical analysis of their performance is still lacking.
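
To make the contrast concrete, the hedged sketch below solves $\ell_1$-regularized recovery under the three data-fidelity norms using cvxpy (assumed installed); the constrained formulation, dimensions, noise, and constraint levels are arbitrary toy choices, not the paper's experimental setup.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, m, s = 40, 20, 3                            # assumed toy sizes
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = 1.0
y = A @ x0 + 0.05 * rng.uniform(-1.0, 1.0, size=m)   # bounded (uniform) noise

def recover(p, tau):
    # one natural formulation: min ||x||_1  s.t.  ||A x - y||_p <= tau
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x)),
               [cp.norm(A @ x - y, p) <= tau]).solve()
    return x.value

for p, tau in [(1, 0.5), (2, 0.15), ('inf', 0.05)]:  # tau matched to the noise
    x_hat = recover(p, tau)
    print(f"l_{p} fidelity, support:", np.nonzero(np.abs(x_hat) > 1e-4)[0])
```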

A Multi-step Inertial Forward-Backward Splitting Method for Non-convex Optimization

no code implementations • NeurIPS 2016 • Jingwei Liang, Jalal Fadili, Gabriel Peyré

In this paper, we propose a multi-step inertial Forward--Backward splitting algorithm for minimizing the sum of two functions, neither necessarily convex, where one is proper and lower semi-continuous and the other is differentiable with a Lipschitz continuous gradient.

BIG-bench Machine Learning
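
Here is a hedged numerical sketch of such an inertial forward-backward iteration on a nonconvex sparse least-squares toy problem, with $f$ smooth and $g = \lambda\|\cdot\|_0$ handled through its closed-form hard-thresholding prox. The inertial weights and step size are assumed; actual convergence requires the parameter conditions detailed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, lam = 100, 40, 0.01                     # assumed toy sizes
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 2.0 + rng.normal(size=5)
y = A @ x_true

gamma = 0.9 / np.linalg.norm(A, 2) ** 2       # step below 1/L, L = ||A||^2
a1, a2 = 0.3, 0.1                             # two inertial weights (assumed)

def prox_l0(v, t):
    # hard thresholding: closed-form prox of the nonconvex t*||.||_0
    return v * (np.abs(v) > np.sqrt(2.0 * t))

x_prev2 = x_prev = x = np.zeros(n)
for _ in range(500):
    # multi-step inertia: extrapolate using the two previous iterates
    w = x + a1 * (x - x_prev) + a2 * (x_prev - x_prev2)
    x_prev2, x_prev = x_prev, x
    x = prox_l0(w - gamma * (A.T @ (A @ w - y)), gamma * lam)

print("recovered support:", np.nonzero(x)[0])
print("true support:     ", np.nonzero(x_true)[0])
```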

Local Linear Convergence of Forward--Backward under Partial Smoothness

no code implementations • NeurIPS 2014 • Jingwei Liang, Jalal Fadili, Gabriel Peyré

In this paper, we consider the Forward--Backward proximal splitting algorithm to minimize the sum of two proper closed convex functions, one of which has a Lipschitz continuous gradient while the other is partly smooth relative to an active manifold $\mathcal{M}$.
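
To illustrate the phenomenon, the sketch below runs forward-backward (ISTA) on a lasso toy problem, where the $\ell_1$ norm is partly smooth relative to the support manifold: the support is identified after finitely many iterations, after which convergence is locally linear. All problem data are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, lam = 60, 30, 0.1                       # assumed toy sizes
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[:4] = [2.0, -1.5, 1.0, 3.0]
y = A @ x0 + 0.01 * rng.normal(size=m)

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # step below 1/L

x = np.zeros(n)
supports = []
for _ in range(3000):
    x = soft(x - gamma * (A.T @ (A @ x - y)), gamma * lam)
    supports.append(tuple(np.nonzero(x)[0]))

# after identification, the active manifold (the support) stops changing
changes = [k for k in range(1, len(supports)) if supports[k] != supports[k - 1]]
print("support last changed at iteration:", changes[-1] if changes else 0)
print("final support:", supports[-1])
```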

Astronomical Image Denoising Using Dictionary Learning

no code implementations • 12 Apr 2013 • Simon Beckouche, Jean-Luc Starck, Jalal Fadili

Astronomical images suffer from multiple defects arising from the intrinsic properties of the acquisition equipment and from atmospheric conditions.

Dictionary Learning, Image Denoising
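
Here is a brief sketch of patch-based dictionary-learning denoising in the spirit of this approach, assuming scikit-learn's patch utilities and MiniBatchDictionaryLearning are available; the synthetic frame and all parameters are arbitrary illustrative choices, not the paper's method.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(7)

# Synthetic "astronomical" frame: a few Gaussian blobs on a dark background
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
img = sum(np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0)
          for cx, cy in [(16, 20), (40, 45), (50, 12)])
noisy = img + 0.1 * rng.normal(size=img.shape)

# Learn a patch dictionary from the noisy frame itself (unsupervised)
patches = extract_patches_2d(noisy, (6, 6)).reshape(-1, 36)
means = patches.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=2,
                                   random_state=0)
code = dico.fit(patches - means).transform(patches - means)

# Sparse-code each patch, then reassemble by averaging the overlaps
denoised = reconstruct_from_patches_2d(
    (code @ dico.components_ + means).reshape(-1, 6, 6), noisy.shape)
print("noisy MSE:   ", np.mean((noisy - img) ** 2))
print("denoised MSE:", np.mean((denoised - img) ** 2))
```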
