Search Results for author: Mathias Staudigl

Found 11 papers, 2 papers with code

Self-concordant analysis of Frank-Wolfe algorithm

1 code implementation • ICML 2020 • Mathias Staudigl, Pavel Dvurechenskii, Shimrit Shtern, Kamil Safin, Petr Ostroukhov

Projection-free optimization via variants of the Frank-Wolfe (FW) method has become a cornerstone of optimization for machine learning, since in many cases the linear minimization oracle is much cheaper to implement than a projection, and sparsity of the iterates needs to be preserved.

Quantum State Tomography
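The projection-free idea behind this paper can be illustrated with a minimal generic Frank-Wolfe sketch (a textbook version, not the self-concordant analysis of the paper itself); the quadratic objective and probability-simplex domain below are illustrative assumptions:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=100):
    """Generic Frank-Wolfe over the probability simplex.

    The linear minimization oracle (LMO) over the simplex is trivial:
    argmin_{s in simplex} <g, s> is the vertex e_i with i = argmin_i g_i.
    This is far cheaper than a Euclidean projection and keeps iterates
    sparse (each step adds at most one new coordinate).
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: pick the best vertex
        gamma = 2.0 / (k + 2.0)        # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative usage: minimize ||x - c||^2 over the simplex,
# where c itself lies in the simplex, so the minimizer is c.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]), 500)
```

Note that the iterate stays feasible by construction: it is always a convex combination of simplex vertices, so no projection step is ever needed.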

A conditional gradient homotopy method with applications to Semidefinite Programming

no code implementations • 7 Jul 2022 • Pavel Dvurechensky, Shimrit Shtern, Mathias Staudigl

We propose a new homotopy-based conditional gradient method for solving convex optimization problems with a large number of simple conic constraints.

Combinatorial Optimization

A privacy-preserving distributed computational approach for distributed locational marginal prices

no code implementations • 25 Mar 2021 • Olivier Bilenne, Paulin Jacquot, Nadia Oudjane, Mathias Staudigl, Cheng Wan, Barbara Franci

An important issue in today's electricity markets is the management of flexibilities offered by new practices, such as smart home appliances or electric vehicles.

Management • Privacy Preserving

First-Order Methods for Convex Optimization

no code implementations • 4 Jan 2021 • Pavel Dvurechensky, Mathias Staudigl, Shimrit Shtern

In this survey we cover a number of key developments in gradient-based optimization methods.

Computing Dynamic User Equilibrium on Large-Scale Networks Without Knowing Global Parameters

no code implementations • 8 Oct 2020 • Duong Viet Thong, Aviv Gibali, Mathias Staudigl, Phan Tu Vuong

Dynamic user equilibrium (DUE) is a Nash-like solution concept describing an equilibrium in dynamic traffic systems over a fixed planning period.

Generalized Self-concordant Hessian-barrier algorithms

no code implementations • 4 Nov 2019 • Pavel Dvurechensky, Mathias Staudigl, César A. Uribe

Many problems in statistical learning, imaging, and computer vision involve the optimization of a non-convex objective function with singularities at the boundary of the feasible set.

Forward-backward-forward methods with variance reduction for stochastic variational inequalities

no code implementations • 9 Feb 2019 • Radu Ioan Bot, Panayotis Mertikopoulos, Mathias Staudigl, Phan Tu Vuong

We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities.
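The deterministic core of the forward-backward-forward (Tseng-type) scheme underlying this paper can be sketched as follows; the paper's stochastic operator estimates and variance reduction are omitted, and the skew-symmetric test operator is an illustrative assumption, not taken from the paper:

```python
import numpy as np

def fbf(F, proj, x0, step=0.1, n_iters=1000):
    """Tseng's forward-backward-forward method for the variational
    inequality: find x* feasible with <F(x*), x - x*> >= 0 for all
    feasible x.

    Each iteration costs two evaluations of F and one projection.
    Convergence holds for monotone, L-Lipschitz F with step < 1/L,
    even without cocoercivity (where plain forward-backward fails).
    """
    x = x0.copy()
    for _ in range(n_iters):
        y = proj(x - step * F(x))         # forward-backward step
        x = y - step * (F(y) - F(x))      # second forward (correction) step
    return x

# Illustrative usage: skew-symmetric operator F(x) = A x on all of R^2
# (projection = identity). F is monotone but not cocoercive; the unique
# solution is x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = fbf(lambda v: A @ v, lambda v: v, np.array([1.0, 1.0]))
```

On this example, plain projected-gradient iterates spiral outward, while the FBF correction step contracts toward the solution, which is why the two-forward-evaluation structure matters for merely monotone problems.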

Hessian barrier algorithms for linearly constrained optimization problems

no code implementations • 25 Sep 2018 • Immanuel M. Bomze, Panayotis Mertikopoulos, Werner Schachinger, Mathias Staudigl

In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho\in(0, 1]$ that depends only on the choice of kernel function (i.e., not on the problem's primitives).

Multi-agent online learning in time-varying games

no code implementations • 10 Sep 2018 • Benoit Duvocelle, Panayotis Mertikopoulos, Mathias Staudigl, Dries Vermeulen

We examine the long-run behavior of multi-agent online learning in games that evolve over time.

On the convergence of gradient-like flows with noisy gradient input

no code implementations • 21 Nov 2016 • Panayotis Mertikopoulos, Mathias Staudigl

In the vanishing noise limit, we show that the dynamics converge to the solution set of the underlying problem (a.s.).
