Search Results for author: César A. Uribe

Found 32 papers, 0 papers with code

A Moreau Envelope Approach for LQR Meta-Policy Estimation

no code implementations · 26 Mar 2024 · Ashwin Aravind, Mohammad Taha Toghani, César A. Uribe

We study the problem of policy estimation for the Linear Quadratic Regulator (LQR) in discrete-time linear time-invariant uncertain dynamical systems.
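
For context, here is a minimal sketch (not the paper's meta-policy method) of the discrete-time LQR policy that the estimation problem targets, computed by fixed-point iteration on the discrete algebraic Riccati equation; the system matrices below are illustrative placeholders.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Compute the LQR feedback gain K (u_t = -K x_t) by fixed-point
    iteration on the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative 2-state, 1-input system (placeholder values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
print("LQR gain K:", lqr_gain(A, B, Q, R))
```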

Meta-Learning

Decentralized and Equitable Optimal Transport

no code implementations · 7 Mar 2024 · Ivan Lau, Shiqian Ma, César A. Uribe

Moreover, we propose the decentralized equitable optimal transport (DE-OT) problem.

PIDformer: Transformer Meets Control Theory

no code implementations · 25 Feb 2024 · Tam Nguyen, César A. Uribe, Tan M. Nguyen, Richard G. Baraniuk

Motivated by this control framework, we derive a novel class of transformers, PID-controlled Transformer (PIDformer), aimed at improving robustness and mitigating the rank-collapse issue inherent in softmax transformers.
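
As background, and not the PIDformer architecture itself, here is a textbook discrete-time PID update of the kind the control framework builds on:

```python
class PID:
    """Textbook discrete PID controller:
    u_t = kp*e_t + ki*sum(e)*dt + kd*(e_t - e_{t-1})/dt."""
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt            # accumulate I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```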

Image Segmentation · Language Modelling +1

Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations

no code implementations · 27 Nov 2023 · Delaram Pirhayatifard, Mohammad Taha Toghani, Guha Balakrishnan, César A. Uribe

In this work, we address the challenge of multi-task image generation with limited data for denoising diffusion probabilistic models (DDPM), a class of generative models that produce high-quality images by reversing a noisy diffusion process.
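
For orientation, a minimal sketch of the standard DDPM forward (noising) step that such models learn to reverse; the linear schedule and toy image are placeholder assumptions, and the paper's shared-representation mechanism is not shown.

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I),
    the closed-form forward noising step of a DDPM."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise, noise

betas = np.linspace(1e-4, 0.02, 1000)   # standard linear schedule
x0 = np.zeros((8, 8))                   # placeholder "image"
xt, eps = ddpm_forward(x0, t=500, betas=betas)
```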

Denoising · Few-Shot Learning +2

Frequentist Guarantees of Distributed (Non)-Bayesian Inference

no code implementations · 14 Nov 2023 · Bohan Wu, César A. Uribe

Motivated by the need to analyze large, decentralized datasets, distributed Bayesian inference has become a critical research area across multiple fields, including statistics, electrical engineering, and economics.

Bayesian Inference · Electrical Engineering +1

Adaptive Federated Learning with Auto-Tuned Clients

no code implementations · 19 Jun 2023 · Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis

Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data.
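
To make the FL loop concrete, here is a minimal FedAvg-style round under simplified assumptions; `client_grads_fn` is a hypothetical local-gradient oracle, and the paper's auto-tuned client step sizes are not reproduced here.

```python
import numpy as np

def fedavg_round(global_w, client_grads_fn, clients, lr=0.1, local_steps=5):
    """One FedAvg-style round: each client runs local SGD from the global
    model, then the server averages the resulting models (no data shared)."""
    updates = []
    for c in clients:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * client_grads_fn(c, w)   # client data never leaves c
        updates.append(w)
    return np.mean(updates, axis=0)           # server-side aggregation
```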

Federated Learning

On First-Order Meta-Reinforcement Learning with Moreau Envelopes

no code implementations · 20 May 2023 · Mohammad Taha Toghani, Sebastian Perez-Salazar, César A. Uribe

We provide a detailed analysis of the MEMRL algorithm, where we show a sublinear convergence rate to a first-order stationary point for non-convex policy gradient optimization.
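
For reference, the (standard) Moreau envelope that gives the method its name, with smoothing parameter $\lambda > 0$:

```latex
M_{\lambda} f(x) \;=\; \min_{y}\; \left\{ f(y) + \frac{1}{2\lambda}\,\lVert x - y \rVert^{2} \right\}
```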

Meta Reinforcement Learning · Reinforcement Learning

An energy management system model with power quality constraints for unbalanced multi-microgrids interacting in a local energy market

no code implementations · 4 Dec 2022 · Johanna Castellanos, Carlos Adrian Correa-Florez, Alejandro Garcés, Gabriel Ordóñez-Plata, César A. Uribe, Diego Patino

This paper proposes a convex optimization model of an energy management system with operational and power quality constraints and interactions in a Local Energy Market (LEM) for unbalanced microgrids (MGs).

Energy Management · Management

On the Performance of Gradient Tracking with Local Updates

no code implementations · 10 Oct 2022 · Edward Duc Hien Nguyen, Sulaiman A. Alghunaim, Kun Yuan, César A. Uribe

We study the decentralized optimization problem where a network of $n$ agents seeks to minimize the average of a set of heterogeneous non-convex cost functions in a distributed manner.
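
As background, the standard gradient-tracking recursion referenced in the title, with mixing matrix $W = [w_{ij}]$ and step size $\alpha$; the paper's variant interleaves additional local updates between communications:

```latex
x_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, x_j^{k} \;-\; \alpha\, y_i^{k},
\qquad
y_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, y_j^{k} \;+\; \nabla f_i\!\left(x_i^{k+1}\right) - \nabla f_i\!\left(x_i^{k}\right)
```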

Federated Learning

A State Feedback Controller for Mitigation of Continuous-Time Networked SIS Epidemics

no code implementations · 9 Oct 2022 · Yuan Wang, Sebin Gracy, César A. Uribe, Hideaki Ishii, Karl Henrik Johansson

The upshot of devising such a strategy is that it allows health administration officials to ensure that there is sufficient capacity in the healthcare system to treat the most severe cases.
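
For context, the standard continuous-time networked SIS dynamics the title refers to, with $x_i(t)$ the infection level of node $i$, $\delta_i$ its healing rate, and $\beta_{ij}$ the infection rates; the paper designs a state feedback controller for a model of this form:

```latex
\dot{x}_i(t) \;=\; -\,\delta_i\, x_i(t) \;+\; \bigl(1 - x_i(t)\bigr) \sum_{j=1}^{n} \beta_{ij}\, x_j(t),
\qquad i = 1, \dots, n
```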

Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation

no code implementations · 3 Oct 2022 · Mohammad Taha Toghani, César A. Uribe

Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases.

Federated Learning

PersA-FL: Personalized Asynchronous Federated Learning

no code implementations · 3 Oct 2022 · Mohammad Taha Toghani, Soomin Lee, César A. Uribe

Our main technical contribution is a unified proof for asynchronous federated learning with bounded staleness that we apply to MAML and ME personalization frameworks.

Meta-Learning · Personalized Federated Learning

Consensus ADMM-Based Distributed Simultaneous Imaging & Communication

no code implementations · 20 Jun 2022 · Nishant Mehrotra, Ashutosh Sabharwal, César A. Uribe

This paper takes the first steps toward enabling wireless networks to perform both imaging and communication in a distributed manner.
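
As background, the generic consensus ADMM iteration named in the title, in its standard scaled-dual form with penalty parameter $\rho$; the paper's imaging and communication objectives $f_i$ are not shown here:

```latex
x_i^{k+1} = \arg\min_{x_i}\; f_i(x_i) + \frac{\rho}{2}\,\lVert x_i - z^{k} + u_i^{k} \rVert^{2}, \qquad
z^{k+1} = \frac{1}{N} \sum_{i=1}^{N} \bigl( x_i^{k+1} + u_i^{k} \bigr), \qquad
u_i^{k+1} = u_i^{k} + x_i^{k+1} - z^{k+1}
```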

On Arbitrary Compression for Decentralized Consensus and Stochastic Optimization over Directed Networks

no code implementations · 18 Apr 2022 · Mohammad Taha Toghani, César A. Uribe

We study the decentralized consensus and stochastic optimization problems with compressed communications over static directed graphs.

Stochastic Optimization

On Acceleration of Gradient-Based Empirical Risk Minimization using Local Polynomial Regression

no code implementations · 16 Apr 2022 · Ekaterina Trimbach, Edward Duc Hien Nguyen, César A. Uribe

We study the acceleration of the Local Polynomial Interpolation-based Gradient Descent method (LPI-GD) recently proposed for the approximate solution of empirical risk minimization (ERM) problems.

Regression

Local Stochastic Factored Gradient Descent for Distributed Quantum State Tomography

no code implementations · 22 Mar 2022 · Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis

We propose a distributed Quantum State Tomography (QST) protocol, named Local Stochastic Factored Gradient Descent (Local SFGD), to learn the low-rank factor of a density matrix over a set of local machines.
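
To illustrate the factored-gradient idea in its simplest form, here is a generic centralized sketch on toy data; the distributed, local-machine aspects of Local SFGD are not reproduced.

```python
import numpy as np

def factored_gd(M, rank, lr=0.05, iters=2000, seed=0):
    """Factored gradient descent on f(U) = 0.25 * ||U U^H - M||_F^2,
    whose gradient is (U U^H - M) U for Hermitian M: learn a low-rank
    factor U instead of the full matrix."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    U = rng.standard_normal((n, rank)) / np.sqrt(n)
    for _ in range(iters):
        U -= lr * (U @ U.conj().T - M) @ U
    return U

# Toy rank-1 "density matrix" (placeholder, not real tomography data).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
M = np.outer(psi, psi)
U = factored_gd(M, rank=1)
print(np.allclose(U @ U.conj().T, M, atol=1e-3))  # True
```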

Quantum State Tomography

The Role of Local Steps in Local SGD

no code implementations · 14 Mar 2022 · Tiancheng Qin, S. Rasoul Etesami, César A. Uribe

Our main contribution is to characterize the convergence rate of Local SGD as a function of $\{H_i\}_{i=1}^R$ under various settings of strongly convex, convex, and nonconvex local functions, where $R$ is the total number of communication rounds.

Stochastic Optimization

Faster Convergence of Local SGD for Over-Parameterized Models

no code implementations · 30 Jan 2022 · Tiancheng Qin, S. Rasoul Etesami, César A. Uribe

For general convex loss functions, we establish an error bound of $\mathcal{O}(1/T)$ under a mild data similarity assumption and an error bound of $\mathcal{O}(K/T)$ otherwise, where $K$ is the number of local steps.

Scalable Average Consensus with Compressed Communications

no code implementations · 14 Sep 2021 · Mohammad Taha Toghani, César A. Uribe

We propose a new decentralized average consensus algorithm with compressed communication that scales linearly with the network size $n$. We prove that the proposed method converges to the average of the initial values held locally by the agents of a network when agents are allowed to communicate with compressed messages.
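
For orientation, the uncompressed average-consensus baseline that the proposed method builds on; the compression mechanism responsible for the linear scaling in $n$ is in the paper and omitted here.

```python
import numpy as np

def average_consensus(x0, W, iters=200):
    """Plain average consensus x_{k+1} = W x_k with a doubly stochastic
    mixing matrix W; every agent's value converges to mean(x0).
    (The paper layers compressed messages on top of this iteration.)"""
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

# 4-agent ring with symmetric, doubly stochastic weights (illustrative).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x0 = np.array([1.0, 2.0, 3.0, 4.0])
print(average_consensus(x0, W))   # ~[2.5, 2.5, 2.5, 2.5]
```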

Communication-efficient Distributed Cooperative Learning with Compressed Beliefs

no code implementations · 14 Feb 2021 · Mohammad Taha Toghani, César A. Uribe

We study the problem of distributed cooperative learning, where a group of agents seeks to agree on a set of hypotheses that best describes a sequence of private observations.

Communication-efficient Decentralized Local SGD over Undirected Networks

no code implementations · 6 Nov 2020 · Tiancheng Qin, S. Rasoul Etesami, César A. Uribe

Agents have access to $F$ through noisy gradients, and they can locally communicate with their neighbors over a network.

Robust Asynchronous and Network-Independent Cooperative Learning

no code implementations · 20 Oct 2020 · Eduardo Mojica-Nava, David Yanguas-Rojas, César A. Uribe

We consider the model of cooperative learning via distributed non-Bayesian learning, where a network of agents tries to jointly agree on a hypothesis that best describes a sequence of locally available observations.

A Distributed Cubic-Regularized Newton Method for Smooth Convex Optimization over Networks

no code implementations · 7 Jul 2020 · César A. Uribe, Ali Jadbabaie

We propose a distributed, cubic-regularized Newton method for large-scale convex optimization over networks.
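
For reference, the centralized cubic-regularized Newton step (in the standard Nesterov-Polyak form, with cubic parameter $M$) that the distributed method adapts to networks:

```latex
x^{k+1} = \arg\min_{y}\; \nabla f(x^{k})^{\top} (y - x^{k})
+ \frac{1}{2}\, (y - x^{k})^{\top} \nabla^{2} f(x^{k})\, (y - x^{k})
+ \frac{M}{6}\, \lVert y - x^{k} \rVert^{3}
```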

Federated Learning

Generalized Self-concordant Hessian-barrier algorithms

no code implementations · 4 Nov 2019 · Pavel Dvurechensky, Mathias Staudigl, César A. Uribe

Many problems in statistical learning, imaging, and computer vision involve the optimization of a non-convex objective function with singularities at the boundary of the feasible set.

A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks

no code implementations · 3 Sep 2018 · César A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedić

Then, we study distributed optimization algorithms for non-dual friendly functions, as well as a method to improve the dependency on the parameters of the functions involved.

Distributed Optimization

Distributed Computation of Wasserstein Barycenters over Networks

no code implementations · 8 Mar 2018 · César A. Uribe, Darina Dvinskikh, Pavel Dvurechensky, Alexander Gasnikov, Angelia Nedić

We propose a new class-optimal algorithm for the distributed computation of Wasserstein Barycenters over networks.
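
For reference, the Wasserstein barycenter of measures $\mu_1, \dots, \mu_m$ with weights $w_i$, where $\mathcal{W}$ denotes the Wasserstein distance; this is the object the agents compute jointly over the network:

```latex
\hat{\mu} \;=\; \arg\min_{\mu}\; \sum_{i=1}^{m} w_i\, \mathcal{W}\bigl(\mu, \mu_i\bigr),
\qquad w_i \ge 0,\quad \sum_{i=1}^{m} w_i = 1
```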

Optimal Algorithms for Distributed Optimization

no code implementations · 1 Dec 2017 · César A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedić

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks.

Distributed Optimization

Distributed Learning for Cooperative Inference

no code implementations · 10 Apr 2017 · Angelia Nedić, Alex Olshevsky, César A. Uribe

We study the problem of cooperative inference where a group of agents interact over a network and seek to estimate a joint parameter that best explains a set of observations.

Distributed Gaussian Learning over Time-varying Directed Graphs

no code implementations · 6 Dec 2016 · Angelia Nedić, Alex Olshevsky, César A. Uribe

We show a convergence rate of $O(1/k)$ with the constant term depending on the number of agents and the topology of the network.

A Tutorial on Distributed (Non-Bayesian) Learning: Problem, Algorithms and Results

no code implementations · 23 Sep 2016 · Angelia Nedić, Alex Olshevsky, César A. Uribe

We overview some results on distributed learning with a focus on a family of recently proposed algorithms known as non-Bayesian social learning.
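
A representative update from this family: each agent geometrically averages its neighbors' beliefs $\mu_{j,k}$ with weights $W_{ij}$ and then reweights by the local likelihood of its new observation $s_{i,k+1}$:

```latex
\mu_{i,k+1}(\theta) \;\propto\; \ell_i\bigl(s_{i,k+1} \mid \theta\bigr) \prod_{j=1}^{n} \mu_{j,k}(\theta)^{W_{ij}}
```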

Geometrically Convergent Distributed Optimization with Uncoordinated Step-Sizes

no code implementations · 19 Sep 2016 · Angelia Nedić, Alex Olshevsky, Wei Shi, César A. Uribe

DIGing, a recently proposed algorithmic family for distributed optimization, has been shown to achieve geometric convergence over time-varying undirected/directed graphs.

Distributed Optimization
