Search Results for author: Parvin Nazari

Found 6 papers, 2 papers with code

A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Programming

no code implementations · 8 Nov 2022 · Parvin Nazari, Ahmad Mousavi, Davoud Ataee Tarzanagh, George Michailidis

A key feature of the proposed algorithm is that it estimates the hyper-gradient of the penalty function via decentralized computation of matrix-vector products and a few vector communications; this estimate is then integrated into an alternating algorithm that admits a finite-time convergence analysis under different convexity assumptions.

Bilevel Optimization, Federated Learning
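
The entry above mentions estimating the hyper-gradient using only matrix-vector products and a few vector communications. The sketch below illustrates that general recipe on a toy quadratic lower level: a truncated Neumann series approximates an inverse-Hessian-vector product with matrix-vector products alone, followed by one consensus-averaging round. It is a hypothetical illustration, not the authors' algorithm; the quadratic setup, mixing matrix, and constants are assumptions.

```python
# Hypothetical sketch: approximate an inverse-Hessian-vector product with
# matrix-vector products only (truncated Neumann series), then exchange the
# resulting d-dimensional vectors in one consensus-averaging round.
# Not the paper's algorithm; the setup and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d = 4, 5

# Each agent i holds a lower-level Hessian A_i (symmetric positive definite),
# so a Hessian-vector product is just the matrix-vector product A_i @ v.
A = []
for _ in range(n_agents):
    B = np.eye(d) + 0.1 * rng.standard_normal((d, d))
    A.append(0.5 * (B + B.T) + d * np.eye(d))   # well-conditioned SPD matrix

def neumann_inv_hvp(A_i, v, alpha=0.1, K=60):
    """Approximate A_i^{-1} v via the truncated Neumann series
    alpha * sum_{k<K} (I - alpha * A_i)^k v, using matrix-vector products only."""
    total, cur = np.zeros_like(v), v.copy()
    for _ in range(K):
        total += cur
        cur = cur - alpha * (A_i @ cur)
    return alpha * total

# One round of vector communication: average the d-dimensional local estimates
# with a doubly stochastic mixing matrix (fully connected here for simplicity).
W = np.full((n_agents, n_agents), 1.0 / n_agents)

v = rng.standard_normal(d)   # stand-in for an upper-level gradient w.r.t. y
local = np.stack([neumann_inv_hvp(A_i, v) for A_i in A])
averaged = W @ local         # consensus step: only d-dimensional vectors are exchanged

print("residual of agent 0's local estimate:", np.linalg.norm(A[0] @ local[0] - v))
```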

Dynamic Regret of Adaptive Gradient Methods for Strongly Convex Problems

no code implementations · 4 Sep 2022 · Parvin Nazari, Esmaile Khorram

Adaptive gradient algorithms such as AdaGrad and its variants have gained popularity in the training of deep neural networks.
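
As background for the adaptive methods mentioned above, here is the diagonal AdaGrad update in its standard textbook form; it is an illustration only, not the specific variant or step-size schedule analyzed in the paper.

```python
# Minimal diagonal AdaGrad update in its standard textbook form
# (illustrative only; not the variant analyzed in the paper).
import numpy as np

def adagrad_step(x, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad step: per-coordinate learning rates shrink with the
    accumulated squared gradients."""
    accum = accum + grad ** 2
    x = x - lr * grad / (np.sqrt(accum) + eps)
    return x, accum

# Toy usage: minimize f(x) = 0.5 * ||x||^2 (a strongly convex objective).
x, accum = np.ones(3), np.zeros(3)
for _ in range(200):
    x, accum = adagrad_step(x, grad=x, accum=accum)
print(x)  # approaches the minimizer at the origin
```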

Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods

1 code implementation · 6 Jul 2022 · Davoud Ataee Tarzanagh, Parvin Nazari, BoJian Hou, Li Shen, Laura Balzano

This paper introduces online bilevel optimization, in which a sequence of time-varying bilevel problems is revealed one after the other.

Bilevel Optimization
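
As a rough illustration of the alternating idea in the title, the sketch below takes one lower-level gradient step and one upper-level gradient step per round on toy time-varying quadratics. The objectives, step sizes, and the one-step chain-rule approximation are assumptions; this is not the paper's algorithm or its analyzed configuration.

```python
# Hypothetical online alternating-gradient loop for a sequence of time-varying
# bilevel problems (toy quadratics; not the paper's exact method).
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 50
x, y = np.zeros(d), np.zeros(d)
alpha, beta = 0.1, 0.05   # lower-level (inner) and upper-level (outer) step sizes

for t in range(T):
    c_t = rng.standard_normal(d)           # the round-t data, revealed online
    # Lower level g_t(x, y) = 0.5 * ||y - x||^2: one inner gradient step on y.
    y = y - alpha * (y - x)
    # Upper level f_t(x, y) = 0.5 * ||y - c_t||^2. For the single inner step above,
    # dy/dx = alpha * I, so a one-step chain-rule gradient w.r.t. x is alpha * (y - c_t).
    x = x - beta * alpha * (y - c_t)

print("final x:", x, "final y:", y)
```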

Dynamic Regret Analysis for Online Meta-Learning

no code implementations · 29 Sep 2021 · Parvin Nazari, Esmaile Khorram

The online meta-learning framework has emerged as a powerful tool for the continual lifelong learning setting.

Meta-Learning
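
Both this entry and the strongly convex one above study dynamic regret; for reference, the standard definition compares the learner's cumulative loss against a time-varying comparator sequence. The notation below is generic background, not necessarily the paper's exact benchmark.

```latex
% Standard dynamic-regret definition (generic background; notation illustrative).
\[
  \mathrm{Reg}^{d}_{T}
  = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^{\star}),
  \qquad
  x_t^{\star} \in \operatorname*{arg\,min}_{x \in \mathcal{X}} f_t(x).
\]
```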

Adaptive First- and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems

no code implementations · 19 May 2020 · Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis

In this paper, we design and analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) stochastic optimization problems.

Stochastic Optimization
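
The title above refers to first- and zeroth-order methods; the sketch below shows a standard two-point zeroth-order gradient estimator, a common building block when only function values are available. It is generic background, not the paper's estimator or its adaptive step-size rule.

```python
# Standard two-point zeroth-order gradient estimator (background sketch;
# not the specific estimator or adaptive rule used in the paper).
import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order estimate of grad f(x) along a random Gaussian direction."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

# Toy usage on a nonsmooth convex test function (weak convexity covers this case).
f = lambda x: np.abs(x).sum()          # f(x) = ||x||_1
x = np.array([1.0, -2.0, 0.5])
rng = np.random.default_rng(3)
for _ in range(500):
    x = x - 0.01 * zo_gradient(f, x, rng=rng)
print(x)   # the iterates drift toward the origin and then hover near it
```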

DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization

1 code implementation · ICLR 2019 · Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis

Adaptive gradient-based optimization methods such as AdaGrad, RMSprop, and Adam are widely used in solving large-scale machine learning problems, including deep learning.

Stochastic Optimization
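
Although an implementation is listed for this paper, the sketch below only illustrates the general shape of a consensus-based adaptive update: one weighted-averaging round over a mixing matrix, followed by a local Adam-style step at each agent. It is a hypothetical toy, not DADAM's exact update rules, mixing matrix, or hyper-parameters.

```python
# Hypothetical consensus-based adaptive update: average over a mixing matrix,
# then a local Adam-style step per agent (illustrative; not DADAM's exact rules).
import numpy as np

rng = np.random.default_rng(2)
n_agents, d = 4, 3
X = rng.standard_normal((n_agents, d))             # one parameter vector per agent
M = np.zeros((n_agents, d))                        # first-moment estimates
V = np.zeros((n_agents, d))                        # second-moment estimates
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic mixing matrix
targets = rng.standard_normal((n_agents, d))       # each agent's local quadratic target

lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 301):
    X = W @ X                                      # consensus: average with neighbors
    G = X - targets                                # gradients of 0.5 * ||x - target_i||^2
    M = b1 * M + (1 - b1) * G
    V = b2 * V + (1 - b2) * G ** 2
    m_hat, v_hat = M / (1 - b1 ** t), V / (1 - b2 ** t)
    X = X - lr * m_hat / (np.sqrt(v_hat) + eps)    # Adam-style local step

print("per-agent iterates (nearly identical):")
print(X)
print("local targets:")
print(targets)
```

With the uniform mixing matrix the agents stay in near agreement and the shared iterate settles at a compromise between the local targets; a sparse, graph-dependent mixing matrix would reach consensus only approximately, which is the regime such decentralized methods are designed for.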
