Search Results for author: Yura Malitsky

Found 10 papers, 3 papers with code

Adaptive Proximal Gradient Method for Convex Optimization

1 code implementation · 4 Aug 2023 · Yura Malitsky, Konstantin Mishchenko

In this paper, we explore two fundamental first-order algorithms in convex optimization, namely, gradient descent (GD) and proximal gradient method (ProxGD).
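
As a point of reference only, here is a minimal sketch of the fixed-step ProxGD iteration x_{k+1} = prox_{step*g}(x_k - step*grad f(x_k)) that the paper starts from; the fully adaptive step size that is the paper's actual contribution is not reproduced here, and the lasso-style prox below is just an illustrative choice.

    import numpy as np

    def prox_gd(grad_f, prox_g, x0, step, n_iter=500):
        # Fixed-step proximal gradient: x <- prox_{step*g}(x - step*grad_f(x)).
        x = np.array(x0, dtype=float)
        for _ in range(n_iter):
            x = prox_g(x - step * grad_f(x), step)
        return x

    # Illustrative nonsmooth term g(x) = lam*||x||_1, whose prox is soft-thresholding.
    def soft_threshold(v, step, lam=0.1):
        return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)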

Beyond the Golden Ratio for Variational Inequality Algorithms

no code implementations · 28 Dec 2022 · Ahmet Alacaoglu, Axel Böhm, Yura Malitsky

We improve the understanding of the $\textit{golden ratio algorithm}$, which solves monotone variational inequalities (VI) and convex-concave min-max problems via the distinctive feature of adapting the step sizes to the local Lipschitz constants.
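
For orientation, a rough sketch of the fixed-step golden ratio iteration for a VI with operator F over a set with projection proj (the function and parameter names are illustrative); the adaptive step-size behaviour that the paper analyzes is not shown.

    import numpy as np

    PHI = (1 + np.sqrt(5)) / 2  # the golden ratio

    def golden_ratio_vi(F, proj, z0, step, n_iter=500):
        # Fixed-step golden ratio iteration (sketch): average, then forward step + projection.
        z = np.array(z0, dtype=float)
        z_bar = z.copy()
        for _ in range(n_iter):
            z_bar = ((PHI - 1) * z + z_bar) / PHI
            z = proj(z_bar - step * F(z))
        return z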

Convergence of adaptive algorithms for constrained weakly convex optimization

no code implementations · NeurIPS 2021 · Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher

We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective.

Stochastic Optimization
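
A sketch of the projected AMSGrad iteration in the setting the abstract describes (stochastic gradients of a weakly convex objective, projection onto the constraint set); the hyperparameter values and the oracle/projection names are placeholders, not taken from the paper.

    import numpy as np

    def amsgrad_projected(stoch_grad, proj, x0, lr=1e-3, beta1=0.9, beta2=0.999,
                          eps=1e-8, n_iter=10000):
        x = np.array(x0, dtype=float)
        m = np.zeros_like(x)      # first-moment estimate
        v = np.zeros_like(x)      # second-moment estimate
        v_hat = np.zeros_like(x)  # running max of v: the step that defines AMSGrad
        for _ in range(n_iter):
            g = stoch_grad(x)                       # stochastic gradient oracle
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g ** 2
            v_hat = np.maximum(v_hat, v)
            x = proj(x - lr * m / (np.sqrt(v_hat) + eps))   # projected adaptive step
        return x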

A first-order primal-dual method with adaptivity to local smoothness

no code implementations · NeurIPS 2021 · Maria-Luiza Vladarean, Yura Malitsky, Volkan Cevher

We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth.
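
For context, a fixed-step primal-dual sketch for this saddle-point objective (a Condat-Vu / PDHG-style iteration); in this form tau and sigma must satisfy the usual step-size condition, whereas the paper's point is to adapt them to the local smoothness of f.

    import numpy as np

    def primal_dual(grad_f, prox_g_conj, A, x0, y0, tau, sigma, n_iter=500):
        # Fixed-step primal-dual iteration for min_x max_y f(x) + <Ax, y> - g*(y).
        x = np.array(x0, dtype=float)
        y = np.array(y0, dtype=float)
        for _ in range(n_iter):
            x_new = x - tau * (grad_f(x) + A.T @ y)                    # primal gradient step
            y = prox_g_conj(y + sigma * (A @ (2 * x_new - x)), sigma)  # dual prox, extrapolated
            x = x_new
        return x, y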

Stochastic Variance Reduction for Variational Inequality Methods

1 code implementation · 16 Feb 2021 · Ahmet Alacaoglu, Yura Malitsky

We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
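
A generic sketch of an extragradient step with an SVRG-style variance-reduced estimate of a finite-sum operator F(z) = (1/n) * sum_i F_i(z); this only illustrates the idea of combining variance reduction with an extragradient update, and the sampling, anchoring, and step-size choices in the paper's actual algorithms differ.

    import numpy as np

    def vr_extragradient(F_i, n, proj, z0, step, p=0.1, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        z = np.array(z0, dtype=float)
        w = z.copy()                                            # snapshot point
        F_w = np.mean([F_i(w, i) for i in range(n)], axis=0)    # full operator at snapshot
        for _ in range(n_iter):
            i = int(rng.integers(n))
            g = F_i(z, i) - F_i(w, i) + F_w                     # variance-reduced estimate
            z_half = proj(z - step * g)                         # extrapolation step
            g_half = F_i(z_half, i) - F_i(w, i) + F_w           # estimate at extrapolated point
            z = proj(z - step * g_half)                         # update step
            if rng.random() < p:                                # occasionally refresh snapshot
                w = z.copy()
                F_w = np.mean([F_i(w, i) for i in range(n)], axis=0)
        return z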

Convergence of adaptive algorithms for weakly convex constrained optimization

no code implementations · 11 Jun 2020 · Ahmet Alacaoglu, Yura Malitsky, Volkan Cevher

We analyze the adaptive first-order algorithm AMSGrad for solving a constrained stochastic optimization problem with a weakly convex objective.

Stochastic Optimization

Adaptive Gradient Descent without Descent

1 code implementation · ICML 2020 · Yura Malitsky, Konstantin Mishchenko

We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature.
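
A short sketch of how the two rules translate into a step-size formula, to the best of my reading: the new step is capped both by a slowly growing multiple of the previous step (rule 1) and by an estimate of the inverse local Lipschitz constant built from the last two iterates (rule 2). The initial values below are illustrative.

    import numpy as np

    def adaptive_gd(grad_f, x0, lam0=1e-6, n_iter=1000):
        x_prev = np.array(x0, dtype=float)
        g_prev = grad_f(x_prev)
        x = x_prev - lam0 * g_prev          # one plain GD step to initialize
        lam_prev, theta = lam0, 1e12        # large theta: rule 1 inactive at first iteration
        for _ in range(n_iter):
            g = grad_f(x)
            # Rule 2: inverse local Lipschitz estimate from the last two iterates.
            local = np.linalg.norm(x - x_prev) / (2.0 * np.linalg.norm(g - g_prev) + 1e-16)
            # Rule 1: do not let the step size grow too fast.
            lam = min(np.sqrt(1.0 + theta) * lam_prev, local)
            x_prev, g_prev = x, g
            x = x - lam * g
            theta, lam_prev = lam / lam_prev, lam
        return x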

Revisiting Stochastic Extragradient

no code implementations · 27 May 2019 · Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky

We fix a fundamental issue in the stochastic extragradient method by providing a new sampling strategy that is motivated by approximating implicit updates.
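
A sketch of the sampling strategy as I understand it: the same sample is used for both the extrapolation and the update step of stochastic extragradient, mimicking an implicit (proximal-point) update; the step size and oracle names are placeholders.

    import numpy as np

    def same_sample_seg(F_i, n, z0, step, n_iter=2000, seed=0):
        rng = np.random.default_rng(seed)
        z = np.array(z0, dtype=float)
        for _ in range(n_iter):
            i = int(rng.integers(n))        # draw one sample ...
            z_half = z - step * F_i(z, i)   # ... extrapolate with it ...
            z = z - step * F_i(z_half, i)   # ... and update with the SAME sample
        return z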

Model Function Based Conditional Gradient Method with Armijo-like Line Search

no code implementations · 23 Jan 2019 · Yura Malitsky, Peter Ochs

The Conditional Gradient Method is generalized to a class of non-smooth non-convex optimization problems with many applications in machine learning.

BIG-bench Machine Learning
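
A generic sketch of a conditional gradient (Frank-Wolfe) step with an Armijo-like backtracking line search on a smooth objective; the paper's model-function formulation, which is what extends this to non-smooth non-convex problems, is not reproduced here, and the lmo and parameters below are illustrative.

    import numpy as np

    def conditional_gradient_armijo(f, grad_f, lmo, x0, n_iter=200,
                                    gamma0=1.0, shrink=0.5, rho=1e-4):
        x = np.array(x0, dtype=float)
        for _ in range(n_iter):
            g = grad_f(x)
            s = lmo(g)              # linear minimization oracle over the feasible set
            d = s - x               # conditional gradient direction
            gamma = gamma0
            # Armijo-like backtracking: shrink gamma until sufficient decrease holds.
            while f(x + gamma * d) > f(x) + rho * gamma * float(g @ d) and gamma > 1e-12:
                gamma *= shrink
            x = x + gamma * d
        return x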
