Search Results for author: Belhal Karimi

Found 13 papers, 0 papers with code

On Distributed Adaptive Optimization with Gradient Compression

no code implementations ICLR 2022 Xiaoyun Li, Belhal Karimi, Ping Li

We study COMP-AMS, a distributed optimization framework based on gradient averaging and the adaptive AMSGrad algorithm.

Distributed Optimization
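The gradient-averaging-plus-AMSGrad idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's COMP-AMS implementation: the worker count, learning rate, noise level, and toy quadratic objective are all assumptions.

```python
import numpy as np

def amsgrad_step(x, g, m, v, vhat, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update: like Adam, but keeps the running max of the
    second-moment estimate so the effective step size never increases."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    vhat = np.maximum(vhat, v)             # AMSGrad's change vs. Adam
    return x - lr * m / (np.sqrt(vhat) + eps), m, v, vhat

def comp_ams_round(x, worker_grads, state):
    """Server averages the workers' gradients, then applies one AMSGrad step."""
    g_avg = np.mean(worker_grads, axis=0)  # gradient averaging across workers
    return amsgrad_step(x, g_avg, *state)

# Toy run: minimize f(x) = ||x||^2 with 4 workers sending noisy local gradients.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
state = (np.zeros(2), np.zeros(2), np.zeros(2))
for _ in range(500):
    grads = [2 * x + 0.01 * rng.normal(size=2) for _ in range(4)]
    x, *state = comp_ams_round(x, grads, state)
```

The server-side structure (average, then one adaptive step) is the part the abstract names; everything node-local is omitted.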

Joint learning of object graph and relation graph for visual question answering

no code implementations 9 May 2022 Hao Li, Xu Li, Belhal Karimi, Jie Chen, Mingming Sun

Modeling visual question answering (VQA) through scene graphs can significantly improve the reasoning accuracy and interpretability.

Question Answering, Visual Question Answering +1

A Class of Two-Timescale Stochastic EM Algorithms for Nonconvex Latent Variable Models

no code implementations 18 Mar 2022 Belhal Karimi, Ping Li

We motivate the choice of a two-timescale dynamic by the variance reduction each stage of the method provides on both sources of noise: the index sampling for the incremental update and the MC approximation.

Fed-LAMB: Layerwise and Dimensionwise Locally Adaptive Optimization Algorithm

no code implementations 1 Oct 2021 Belhal Karimi, Xiaoyun Li, Ping Li

In the emerging paradigm of federated learning (FL), a large number of clients, such as mobile devices, are used to train possibly high-dimensional models on their respective data.

Federated Learning

On the Convergence of Decentralized Adaptive Gradient Methods

no code implementations 7 Sep 2021 Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li

Adaptive gradient methods including Adam, AdaGrad, and their variants have been very successful for training deep learning models, such as neural networks.

Distributed Computing, Distributed Optimization

Convergent Adaptive Gradient Methods in Decentralized Optimization

no code implementations 1 Jan 2021 Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li

Specifically, we propose a general algorithmic framework that can convert existing adaptive gradient methods to their decentralized counterparts.

Distributed Optimization
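The conversion idea, interleaving a consensus (mixing) step with each node's local gradient step, can be sketched as follows. This is a generic decentralized-SGD illustration, not the paper's framework: the mixing matrix, step size, and toy objective are assumptions, and a plain gradient step stands in where an adaptive rule would go.

```python
import numpy as np

n_nodes, dim = 4, 3
W = np.full((n_nodes, n_nodes), 1.0 / n_nodes)  # doubly stochastic mixing matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(n_nodes, dim))             # one parameter vector per node

def decentralized_step(X, lr=0.1):
    """Mix parameters with neighbors via W, then take a local gradient step
    (an adaptive update such as AMSGrad could replace the plain step here)."""
    grads = X                  # gradient of f_i(x) = ||x||^2 / 2 at each node
    return W @ X - lr * grads

for _ in range(100):
    X = decentralized_step(X)
```

With a doubly stochastic W, the mixing step drives the nodes toward consensus while each local step reduces the local objective.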

MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems

no code implementations 1 Jan 2021 Belhal Karimi, Hoi-To Wai, Eric Moulines, Ping Li

Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method, which alternates between constructing a surrogate function that upper-bounds the objective and minimizing this surrogate.

Variational Inference

Towards Better Generalization of Adaptive Gradient Methods

no code implementations NeurIPS 2020 Yingxue Zhou, Belhal Karimi, Jinxing Yu, Zhiqiang Xu, Ping Li

Adaptive gradient methods such as AdaGrad, RMSprop and Adam have been optimizers of choice for deep learning due to their fast training speed.

HWA: Hyperparameters Weight Averaging in Bayesian Neural Networks

no code implementations AABI Symposium 2021 Belhal Karimi, Ping Li

Bayesian neural networks attempt to combine the strong predictive performance of neural networks with formal quantification of uncertainty of the predicted output in the Bayesian framework.

FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching

no code implementations 11 Aug 2020 Farzin Haddadpour, Belhal Karimi, Ping Li, Xiaoyun Li

Communication complexity and privacy are the two key challenges in Federated Learning, where the goal is to perform distributed learning across a large number of devices.

Federated Learning

On the Global Convergence of (Fast) Incremental Expectation Maximization Methods

no code implementations NeurIPS 2019 Belhal Karimi, Hoi-To Wai, Eric Moulines, Marc Lavielle

To alleviate this problem, Neal and Hinton have proposed an incremental version of the EM (iEM) in which at each iteration the conditional expectation of the latent data (E-step) is updated only for a mini-batch of observations.
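The iEM idea the snippet describes, refreshing the E-step statistics only for a mini-batch while the M-step uses the aggregated statistics of all observations, can be sketched on a toy mixture. The two-component setup with known means, the batch size, and estimating only the mixing weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data from a 1-D two-component mixture with known unit-variance components
# at means 0 and 4; only the mixing weight pi is estimated (true value 0.7).
n = 2000
z = rng.random(n) < 0.7
x = np.where(z, rng.normal(4.0, 1.0, n), rng.normal(0.0, 1.0, n))

def resp(x, pi):
    """E-step: posterior responsibility of the mean-4 component."""
    a = pi * np.exp(-0.5 * (x - 4.0) ** 2)
    b = (1 - pi) * np.exp(-0.5 * x ** 2)
    return a / (a + b)

# iEM: store each observation's responsibility; refresh only a mini-batch
# per iteration, but run the M-step on the full aggregated statistics.
r = np.full(n, 0.5)
pi = 0.5
batch = 100
for _ in range(400):
    idx = rng.integers(0, n, batch)  # mini-batch E-step
    r[idx] = resp(x[idx], pi)
    pi = r.mean()                    # M-step from all stored statistics
```

Stale responsibilities for unsampled observations are what distinguish iEM from batch EM; they are gradually refreshed as the batches cycle through the data.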

An Optimistic Acceleration of AMSGrad for Nonconvex Optimization

no code implementations ICLR 2020 Jun-Kun Wang, Xiaoyun Li, Belhal Karimi, Ping Li

We propose a new variant of AMSGrad, a popular adaptive gradient based optimization algorithm widely used for training deep neural networks.

Online Learning
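The "optimistic" ingredient from online learning, stepping along the current gradient plus a prediction of the next one (here the common last-gradient predictor), can be sketched as follows. This is the generic optimistic update on a toy quadratic, not the paper's AMSGrad variant; the step size and objective are assumptions.

```python
def optimistic_step(x, g, g_prev, lr=0.05):
    """Optimistic update with a last-gradient predictor: the predicted next
    gradient is g + (g - g_prev), giving the combined step 2*g - g_prev."""
    return x - lr * (2 * g - g_prev)

# Toy objective f(x) = x^2 / 2, whose gradient is g = x.
x, g_prev = 3.0, 0.0
for _ in range(200):
    g = x
    x = optimistic_step(x, g, g_prev)
    g_prev = g
```

When consecutive gradients change slowly, the prediction is accurate and the optimistic step makes faster progress than the plain one; the paper combines this mechanism with AMSGrad's adaptive scaling.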

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

no code implementations 2 Feb 2019 Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai

We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.

