Search Results for author: Quoc Tran-Dinh

Found 37 papers, 7 papers with code

Shuffling Momentum Gradient Algorithm for Convex Optimization

no code implementations • 5 Mar 2024 • Trang H. Tran, Quoc Tran-Dinh, Lam M. Nguyen

The stochastic gradient descent method (SGD) and its variants have become the methods of choice for solving finite-sum optimization problems arising in machine learning and data science, thanks to their ability to handle large-scale applications and big datasets.
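As a hedged illustration of the setting (not the paper's SMG update itself), a minimal epoch-wise shuffling SGD loop with a simple momentum buffer might look as follows; `grad_i` is an assumed user-supplied gradient of the i-th component function:

```python
import numpy as np

def shuffling_momentum_sgd(grad_i, x0, n, lr=0.1, beta=0.5, epochs=10):
    """Epoch-wise shuffling SGD with a heavy-ball style momentum buffer.

    grad_i(x, i): gradient of the i-th component f_i at x (assumed API).
    A fresh permutation is drawn each epoch, so every component is
    visited exactly once per pass over the data.
    """
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)                    # momentum buffer
    for _ in range(epochs):
        for i in np.random.permutation(n):  # reshuffle every epoch
            g = grad_i(x, i)
            m = beta * m + (1.0 - beta) * g # damped momentum update
            x = x - lr * m
    return x
```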

Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments

no code implementations • 30 Mar 2023 • Quoc Tran-Dinh

The extragradient method (EG), introduced by G. M. Korpelevich in 1976, is a well-known scheme for approximating solutions of saddle-point problems and their extensions such as variational inequalities and monotone inclusions.
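As a quick sketch of the classical scheme surveyed here (a minimal version, assuming a single-valued monotone operator `F` with `F(x*) = 0` at a solution):

```python
import numpy as np

def extragradient(F, x0, eta=0.1, iters=1000):
    """Korpelevich's extragradient method.

    Each iteration takes a trial (extrapolation) step to y, then
    re-evaluates the operator at y to produce the actual update.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        y = x - eta * F(x)   # extrapolation (trial) step
        x = x - eta * F(y)   # update with the operator evaluated at y
    return x
```

For monotone, L-Lipschitz F, a step size eta below 1/L is the standard choice guaranteeing convergence.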

Extragradient-Type Methods with $\mathcal{O} (1/k)$ Last-Iterate Convergence Rates for Co-Hypomonotone Inclusions

no code implementations • 8 Feb 2023 • Quoc Tran-Dinh

We develop two "Nesterov's accelerated" variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion constituted by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued.

Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems

no code implementations • 8 Jan 2023 • Quoc Tran-Dinh

In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of large-scale nonlinear equations, also known as root-finding problems.
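The single-block building block here is the optimistic (past-extragradient) update; a hedged full-vector sketch for solving G(x) = 0, omitting the paper's random block selection:

```python
import numpy as np

def optimistic_gradient(G, x0, eta=0.1, iters=1000):
    """Optimistic (past-extragradient) update for solving G(x) = 0:
        x_{k+1} = x_k - eta * (2*G(x_k) - G(x_{k-1})).
    The paper applies an update of this type to one randomly chosen
    block of coordinates per iteration; this sketch updates the full
    vector.
    """
    x = np.asarray(x0, dtype=float).copy()
    g_prev = G(x)
    for _ in range(iters):
        g = G(x)
        x = x - eta * (2.0 * g - g_prev)  # reuse the past operator value
        g_prev = g
    return x
```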

Federated Learning

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis

no code implementations • 19 Dec 2022 • Quoc Tran-Dinh, Marten van Dijk

In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants.
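For reference, the two basic components discussed in the chapter, plain gradient descent and Nesterov's accelerated variant, can be sketched as follows (a minimal illustration, not the chapter's unified analysis):

```python
import numpy as np

def gradient_descent(grad, x0, lr, iters=100):
    """Plain gradient descent: x <- x - lr * grad(x)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

def nesterov_agd(grad, x0, lr, iters=100):
    """Nesterov's accelerated variant: a gradient step taken at an
    extrapolated point, with momentum weight (k - 1)/(k + 2)."""
    x = np.asarray(x0, dtype=float).copy()
    x_prev = x.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)  # extrapolation
        x_prev, x = x, y - lr * grad(y)
    return x
```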


Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions

no code implementations • 15 Oct 2021 • Quoc Tran-Dinh, Yang Luo

In this paper, we develop a new type of accelerated algorithm for solving some classes of maximally monotone equations as well as monotone inclusions.
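The Halpern iteration underlying these methods is easy to state; a minimal sketch for a generic fixed-point operator `T` (the paper applies the idea to resolvents and splitting schemes rather than a bare `T`):

```python
import numpy as np

def halpern(T, x0, iters=1000):
    """Halpern's anchored fixed-point iteration:
        x_{k+1} = b_k * x0 + (1 - b_k) * T(x_k),  with b_k = 1/(k + 2).
    The anchor term pulls iterates toward x0 and underlies last-iterate
    O(1/k) rates on the residual ||x - T(x)||.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    for k in range(iters):
        b = 1.0 / (k + 2)
        x = b * x0 + (1.0 - b) * T(x)
    return x
```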


FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization

1 code implementation • 5 Mar 2021 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity.
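FedDR builds on Douglas-Rachford splitting; as a hedged reference point, the classical centralized two-operator DR iteration for min f(x) + g(x) reads as follows (`prox_f` and `prox_g` are assumed user-supplied proximal operators; the paper randomizes updates of this type across clients):

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, x0, gamma=1.0, alpha=1.0, iters=500):
    """Classical Douglas-Rachford splitting for min f(x) + g(x).

    prox_f / prox_g are the proximal operators of f and g with
    parameter gamma; the sequence y converges to a solution.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        y = prox_f(x, gamma)
        z = prox_g(2 * y - x, gamma)  # prox of g at the reflected point
        x = x + alpha * (z - y)       # relaxed update of the governing sequence
    return y
```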

Federated Learning

SMG: A Shuffling Gradient-Based Method with Momentum

no code implementations • 24 Nov 2020 • Trang H. Tran, Lam M. Nguyen, Quoc Tran-Dinh

When the shuffling strategy is fixed, we develop another new algorithm that is similar to existing momentum methods, and prove the same convergence rates for this algorithm under the $L$-smoothness and bounded gradient assumptions.

Convergence Analysis of Homotopy-SGD for non-convex optimization

no code implementations • 20 Nov 2020 • Matilde Gargiani, Andrea Zanelli, Quoc Tran-Dinh, Moritz Diehl, Frank Hutter

In this work, we present a first-order stochastic algorithm based on a combination of homotopy methods and SGD, called Homotopy-Stochastic Gradient Descent (H-SGD), which has interesting connections with heuristics proposed in the literature, e.g., optimization by Gaussian continuation, training by diffusion, and mollifying networks.
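A minimal sketch of the homotopy-plus-SGD idea, assuming we interpolate between an easy surrogate and the target objective (the function names are illustrative; the paper's schedule and analysis are more refined):

```python
import numpy as np

def homotopy_sgd(grad_easy, grad_target, x0, lr=0.05, stages=10, steps=200):
    """Follow solutions of (1 - t)*f_easy + t*f_target as t goes 0 -> 1.

    grad_easy / grad_target are assumed user-supplied gradients; each
    stage warm-starts from the previous stage's solution. H-SGD would
    use stochastic gradients and a principled schedule for t.
    """
    x = np.asarray(x0, dtype=float).copy()
    for s in range(stages + 1):
        t = s / stages                    # homotopy parameter in [0, 1]
        for _ in range(steps):
            g = (1 - t) * grad_easy(x) + t * grad_target(x)
            x = x - lr * g
    return x
```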

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

no code implementations • 27 Oct 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen

We consider big data analysis where training data is distributed among local data sets in a heterogeneous way -- and we wish to move SGD computations to local compute nodes where local data resides.

An Optimal Hybrid Variance-Reduced Algorithm for Stochastic Composite Nonconvex Optimization

no code implementations • 20 Aug 2020 • Deyi Liu, Lam M. Nguyen, Quoc Tran-Dinh

In this note we propose a new variant of the hybrid variance-reduced proximal gradient method in [7] to solve a common stochastic composite nonconvex optimization problem under standard assumptions.

Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems

no code implementations • NeurIPS 2020 • Quoc Tran-Dinh, Deyi Liu, Lam M. Nguyen

This problem class has several computational challenges due to its nonsmoothness, nonconvexity, nonlinearity, and non-separability of the objective functions.

A New Randomized Primal-Dual Algorithm for Convex Optimization with Optimal Last Iterate Rates

no code implementations • 3 Mar 2020 • Quoc Tran-Dinh, Deyi Liu

We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature.

A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

1 code implementation • 1 Mar 2020 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh

We propose a novel hybrid stochastic policy gradient estimator that combines an unbiased policy gradient estimator, the REINFORCE estimator, with a biased one, an adapted SARAH estimator, for policy optimization.
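The unbiased building block is the REINFORCE estimator; a minimal undiscounted sketch (`grad_logp(s, a)` is an assumed API returning the score function of the current policy; the paper's SARAH-style biased term and mixing weight are omitted here):

```python
import numpy as np

def reinforce_grad(grad_logp, states, actions, rewards):
    """Undiscounted REINFORCE estimator: sum_t G_t * grad log pi(a_t | s_t),
    where G_t is the reward-to-go from step t."""
    rewards = np.asarray(rewards, dtype=float)
    G = np.flip(np.cumsum(np.flip(rewards)))   # reward-to-go G_t
    return sum(G[t] * grad_logp(states[t], actions[t])
               for t in range(len(rewards)))
```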

Reinforcement Learning (RL)

A Newton Frank-Wolfe Method for Constrained Self-Concordant Minimization

1 code implementation • 17 Feb 2020 • Deyi Liu, Volkan Cevher, Quoc Tran-Dinh

We demonstrate how to scalably solve a class of constrained self-concordant minimization problems using linear minimization oracles (LMO) over the constraint set.
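As a reference point, the first-order Frank-Wolfe template that the Newton variant builds on uses only the LMO (a minimal sketch; the paper replaces the gradient direction with Newton-type steps and adapts the step size):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Vanilla Frank-Wolfe using only a linear minimization oracle.

    lmo(g) returns argmin_{s in C} <g, s> over the constraint set C
    (assumed user-supplied); the classical open-loop step 2/(k + 2)
    keeps iterates inside C as convex combinations.
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        s = lmo(grad(x))            # call the LMO at the current gradient
        gamma = 2.0 / (k + 2)
        x = (1 - gamma) * x + gamma * s
    return x
```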

Experimental Design

Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization

1 code implementation • ICML 2020 • Quoc Tran-Dinh, Nhan H. Pham, Lam M. Nguyen

In the expectation case, we establish $\mathcal{O}(\varepsilon^{-2})$ iteration-complexity to achieve a stationary point in expectation and estimate the total number of stochastic oracle calls for both function value and its Jacobian, where $\varepsilon$ is a desired accuracy.
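For orientation, the deterministic Gauss-Newton step for the least-squares special case min 0.5*||F(x)||^2 looks as follows (a hedged sketch; the paper handles general compositions phi(F(x)) with stochastic estimates of F and its Jacobian):

```python
import numpy as np

def gauss_newton(F, J, x0, iters=50, damping=1e-8):
    """Deterministic Gauss-Newton for min 0.5*||F(x)||^2.

    J(x) is the Jacobian of F (assumed API). Each step solves the
    damped normal equations (J^T J + damping*I) d = J^T F(x).
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        r, Jx = F(x), J(x)
        d = np.linalg.solve(Jx.T @ Jx + damping * np.eye(x.size), Jx.T @ r)
        x = x - d
    return x
```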

A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization

no code implementations • 8 Jul 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

We introduce a new approach to develop stochastic optimization algorithms for a class of stochastic composite and possibly nonconvex optimization problems.

Stochastic Optimization

Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization

no code implementations • 15 May 2019 • Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen

We introduce a hybrid stochastic estimator to design stochastic gradient algorithms for solving stochastic optimization problems.
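Based on the abstract, the hybrid estimator convexly combines a SARAH-style recursive difference with a plain unbiased gradient; a minimal sketch under assumed sampling APIs (`sample()` draws a data index, `grad(x, xi)` is the corresponding component gradient):

```python
import numpy as np

def hybrid_sgd(sample, grad, x0, lr=0.05, beta=0.9, iters=1000):
    """Hybrid SARAH/SGD estimator, sketched from the abstract:
        v_t = beta*(v_{t-1} + g(x_t) - g(x_{t-1})) + (1 - beta)*g'(x_t),
    mixing a biased low-variance recursive term with an unbiased one.
    Step sizes and beta schedules in the paper differ.
    """
    x_prev = np.asarray(x0, dtype=float).copy()
    v = grad(x_prev, sample())        # plain stochastic gradient to start
    x = x_prev - lr * v
    for _ in range(iters):
        xi1, xi2 = sample(), sample()                   # independent samples
        sarah = v + grad(x, xi1) - grad(x_prev, xi1)    # biased, low variance
        unbiased = grad(x, xi2)                         # unbiased SGD term
        v = beta * sarah + (1 - beta) * unbiased        # hybrid estimator
        x_prev, x = x, x - lr * v
    return x
```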

Stochastic Optimization

Non-Stationary First-Order Primal-Dual Algorithms with Fast NonErgodic Convergence Rates

1 code implementation • 13 Mar 2019 • Quoc Tran-Dinh, Yuzixuan Zhu

By adapting the parameters, we can obtain up to an $o(\frac{1}{k})$ convergence rate on the primal objective residuals in the nonergodic sense.

Optimization and Control 90C25, 90-08

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization

1 code implementation • 15 Feb 2019 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Quoc Tran-Dinh

We also specialize the algorithm to the non-composite case, which covers existing state-of-the-art methods in terms of complexity bounds.

Smooth Primal-Dual Coordinate Descent Algorithms for Nonsmooth Convex Optimization

no code implementations • NeurIPS 2017 • Ahmet Alacaoglu, Quoc Tran-Dinh, Olivier Fercoq, Volkan Cevher

We propose a new randomized coordinate descent method for a convex optimization template with broad applications.
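The underlying template is randomized coordinate descent; a minimal sketch with per-coordinate step sizes (the paper adds smoothing and primal-dual machinery on top of this):

```python
import numpy as np

def coordinate_descent(grad_coord, x0, lr, iters=5000):
    """Randomized coordinate descent: update one random coordinate.

    grad_coord(x, i) returns the i-th partial derivative (assumed API);
    lr[i] is a per-coordinate step size, e.g. 1/L_i for coordinate-wise
    Lipschitz constants L_i.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        i = np.random.randint(x.size)     # pick a coordinate uniformly
        x[i] -= lr[i] * grad_coord(x, i)
    return x
```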

Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs

1 code implementation • 24 Oct 2017 • Yuzixuan Zhu, Gabor Pataki, Quoc Tran-Dinh

We introduce Sieve-SDP, a simple algorithm to preprocess semidefinite programs (SDPs).

Optimization and Control 90-08, 90C22 (primary), 90C25, 90C06 (secondary)

Extended Gauss-Newton and ADMM-Gauss-Newton Algorithms for Low-Rank Matrix Optimization

no code implementations • 10 Jun 2016 • Quoc Tran-Dinh

In this paper, we develop a variant of the well-known Gauss-Newton (GN) method to solve a class of nonconvex optimization problems involving low-rank matrix variables.

Matrix Completion

Convex block-sparse linear regression with expanders -- provably

no code implementations • 21 Mar 2016 • Anastasios Kyrillidis, Bubacarr Bah, Rouzbeh Hasheminezhad, Quoc Tran-Dinh, Luca Baldassarre, Volkan Cevher

Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance.

Regression

A single-phase, proximal path-following framework

no code implementations • 5 Mar 2016 • Quoc Tran-Dinh, Anastasios Kyrillidis, Volkan Cevher

First, it allows handling non-smooth objectives via proximal operators; this avoids lifting the problem dimension in order to accommodate non-smooth components in optimization.

Adaptive Smoothing Algorithms for Nonsmooth Composite Convex Minimization

no code implementations • 1 Sep 2015 • Quoc Tran-Dinh

We propose an adaptive smoothing algorithm based on Nesterov's smoothing technique (Nesterov, 2005) for solving "fully" nonsmooth composite convex optimization problems.
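As a concrete instance of Nesterov smoothing (a hedged illustration; the paper's algorithm adapts mu rather than fixing it): smoothing the absolute value with parameter mu yields the Huber function, whose gradient is a clipped scaling, so one can minimize ||Ax - b||_1 through the smooth surrogate:

```python
import numpy as np

def huber_grad(z, mu):
    """Gradient of the mu-smoothed absolute value (Huber): clip(z/mu, -1, 1)."""
    return np.clip(z / mu, -1.0, 1.0)

def smoothed_l1_descent(A, b, mu=1e-2, iters=2000):
    """Minimize ||Ax - b||_1 through its mu-smoothed surrogate.

    The surrogate's gradient is (||A||^2 / mu)-Lipschitz, so the
    constant step 1/L is valid; here mu stays fixed for simplicity.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 / mu    # gradient Lipschitz constant
    for _ in range(iters):
        x = x - (1.0 / L) * (A.T @ huber_grad(A @ x - b, mu))
    return x
```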

Structured Sparsity: Discrete and Convex approaches

no code implementations • 20 Jul 2015 • Anastasios Kyrillidis, Luca Baldassarre, Marwa El-Halabi, Quoc Tran-Dinh, Volkan Cevher

For each, we present the models in their natural discrete form, discuss how to solve the ensuing discrete problems, and then describe convex relaxations.

Compressive Sensing

Smooth Alternating Direction Methods for Nonsmooth Constrained Convex Optimization

no code implementations • 14 Jul 2015 • Quoc Tran-Dinh, Volkan Cevher

We propose two new alternating direction methods to solve "fully" nonsmooth constrained convex problems.

Composite convex minimization involving self-concordant-like cost functions

no code implementations • 4 Feb 2015 • Quoc Tran-Dinh, Yen-Huan Li, Volkan Cevher

The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the self-concordant notion.

Constrained convex minimization via model-based excessive gap

no code implementations • NeurIPS 2014 • Quoc Tran-Dinh, Volkan Cevher

We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization.

A Primal-Dual Algorithmic Framework for Constrained Convex Minimization

no code implementations • 20 Jun 2014 • Quoc Tran-Dinh, Volkan Cevher

Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods.

Scalable sparse covariance estimation via self-concordance

no code implementations • 13 May 2014 • Anastasios Kyrillidis, Rabeeh Karimi Mahabadi, Quoc Tran-Dinh, Volkan Cevher

We consider the class of convex minimization problems composed of a self-concordant function, such as the $\log\det$ metric, a convex data fidelity term $h(\cdot)$, and a regularizing, possibly non-smooth, function $g(\cdot)$.

Composite Self-Concordant Minimization

no code implementations • 13 Aug 2013 • Quoc Tran-Dinh, Anastasios Kyrillidis, Volkan Cevher

We propose a variable metric framework for minimizing the sum of a self-concordant function and a possibly non-smooth convex function, endowed with an easily computable proximal operator.
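A standard example of an easily computable proximal operator is soft-thresholding for the l1 norm, which plugs into the proximal gradient template this framework generalizes (a minimal Euclidean sketch; the paper uses a variable metric in place of a scalar step):

```python
import numpy as np

def soft_threshold(z, t):
    """prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, lr, iters=1000):
    """Proximal gradient template for min f(x) + g(x): a smooth step on
    f followed by the proximal operator of g (e.g. soft_threshold)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = prox_g(x - lr * grad_f(x), lr)
    return x
```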
