no code implementations • 9 Sep 2024 • Mehdi Zafari, Divyanshu Pandey, Rahman Doost-Mohammady, César A. Uribe
On the other hand, distributed approaches either require data exchange over the network that scales with the number of antennas or solve the problem for cellular systems where every user is served by only one AP.
no code implementations • 5 Sep 2024 • Miguel F. Arevalo-Castiblanco, Eduardo Mojica-Nava, César A. Uribe
The leader and followers may differ from the reference model in which the RL control policy was trained.
no code implementations • 18 Jun 2024 • Alex G. Zalles, Kai M. Hung, Ann E. Finneran, Lydia Beaudrot, César A. Uribe
We study the problem of network regression, where one is interested in how the topology of a network changes as a function of Euclidean covariates.
no code implementations • 26 Mar 2024 • Ashwin Aravind, Mohammad Taha Toghani, César A. Uribe
We study the problem of policy estimation for the Linear Quadratic Regulator (LQR) in discrete-time linear time-invariant uncertain dynamical systems.
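For context, the certainty-equivalent baseline that uncertain-LQR methods are compared against solves the discrete-time algebraic Riccati equation for a nominal model. A minimal sketch with scipy (the system matrices below are illustrative, not from the paper):

```python
# Certainty-equivalent LQR baseline (illustrative, not the paper's
# estimation method): solve the discrete-time algebraic Riccati equation
# for a nominal (A, B) and form the optimal feedback gain K.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # nominal dynamics (assumed)
B = np.array([[0.0], [0.1]])             # nominal input matrix (assumed)
Q = np.eye(2)                            # state cost
R = np.eye(1)                            # input cost

P = solve_discrete_are(A, B, Q, R)                    # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal gain
# Closed-loop control: u_t = -K @ x_t
```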
no code implementations • 7 Mar 2024 • Ivan Lau, Shiqian Ma, César A. Uribe
Moreover, we propose the decentralized equitable optimal transport (DE-OT) problem.
no code implementations • 25 Feb 2024 • Tam Nguyen, César A. Uribe, Tan M. Nguyen, Richard G. Baraniuk
Motivated by this control framework, we derive a novel class of transformers, PID-controlled Transformer (PIDformer), aimed at improving robustness and mitigating the rank-collapse issue inherent in softmax transformers.
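The control loop being imported is the textbook proportional-integral-derivative law; a minimal discrete-time sketch of that feedback (gains, plant, and signal names are illustrative, not the paper's internal formulation):

```python
import numpy as np

def pid_step(error, integral, prev_error, kp=1.0, ki=0.1, kd=0.01, dt=1.0):
    """One discrete-time PID update: u = kp*e + ki*sum(e) + kd*de/dt.
    Gains kp, ki, kd are illustrative defaults, not values from the paper."""
    integral = integral + error * dt
    derivative = (error - prev_error) / dt
    control = kp * error + ki * integral + kd * derivative
    return control, integral

# Drive a scalar state toward a reference signal of 1.0.
state, integral, prev_err = 0.0, 0.0, 0.0
for _ in range(50):
    err = 1.0 - state
    u, integral = pid_step(err, integral, prev_err)
    prev_err = err
    state += 0.1 * u                      # simple first-order plant
```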
no code implementations • 27 Nov 2023 • Delaram Pirhayatifard, Mohammad Taha Toghani, Guha Balakrishnan, César A. Uribe
In this work, we address the challenge of multi-task image generation with limited data for denoising diffusion probabilistic models (DDPM), a class of generative models that produce high-quality images by reversing a noisy diffusion process.
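The "noisy diffusion process" being reversed is the standard DDPM forward corruption, which has a closed form; a small numpy sketch (the linear beta schedule is a common default, not necessarily the paper's choice):

```python
import numpy as np

# Standard DDPM forward process: x_t = sqrt(abar_t)*x_0 + sqrt(1-abar_t)*eps,
# where abar_t is the cumulative product of (1 - beta_s).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.zeros((8, 8))          # toy "image"
xt = q_sample(x0, t=500)       # heavily noised sample
```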
no code implementations • 14 Nov 2023 • Bohan Wu, César A. Uribe
Motivated by the need to analyze large, decentralized datasets, distributed Bayesian inference has become a critical research area across multiple fields, including statistics, electrical engineering, and economics.
1 code implementation • 19 Jun 2023 • Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis
Federated learning (FL) is a distributed machine learning framework where the global model of a central server is trained via multiple collaborative steps by participating clients without sharing their data.
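A minimal sketch of one such collaborative loop in the FedAvg style, with quadratic losses standing in for client models (illustrative, not the paper's method):

```python
import numpy as np

def client_update(w, data, lr=0.1, steps=5):
    """Local SGD on a client's quadratic loss ||w - data||^2 (stand-in model)."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * 2.0 * (w - data)        # gradient of the local loss
    return w

# Server rounds: broadcast, local training, then average the client models.
rng = np.random.default_rng(0)
clients = [rng.standard_normal(3) for _ in range(10)]   # private optima
w_global = np.zeros(3)
for _ in range(20):
    local = [client_update(w_global, d) for d in clients]
    w_global = np.mean(local, axis=0)     # aggregation without sharing data
```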
no code implementations • 20 May 2023 • Mohammad Taha Toghani, Sebastian Perez-Salazar, César A. Uribe
We provide a detailed analysis of the MEMRL algorithm, where we show a sublinear convergence rate to a first-order stationary point for non-convex policy gradient optimization.
no code implementations • 4 Dec 2022 • Johanna Castellanos, Carlos Adrian Correa-Florez, Alejandro Garcés, Gabriel Ordóñez-Plata, César A. Uribe, Diego Patino
This paper proposes a convex optimization model of an energy management system with operational and power quality constraints and interactions in a Local Energy Market (LEM) for unbalanced microgrids (MGs).
no code implementations • 10 Oct 2022 • Edward Duc Hien Nguyen, Sulaiman A. Alghunaim, Kun Yuan, César A. Uribe
We study the decentralized optimization problem where a network of $n$ agents seeks to minimize the average of a set of heterogeneous non-convex cost functions in a distributed manner.
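The standard baseline for this setting is decentralized gradient descent, where each agent averages with its neighbors through a doubly stochastic mixing matrix and then takes a local gradient step; a toy sketch over a ring graph (the quadratic losses are illustrative):

```python
import numpy as np

n, d = 5, 2
rng = np.random.default_rng(1)
targets = rng.standard_normal((n, d))     # heterogeneous local minimizers
X = np.zeros((n, d))                      # each row: one agent's iterate

W = np.zeros((n, n))                      # doubly stochastic ring weights
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

alpha = 0.05
for _ in range(500):
    grads = 2.0 * (X - targets)           # gradient of f_i(x) = ||x - t_i||^2
    X = W @ X - alpha * grads             # mix with neighbors, then descend
# Rows of X cluster near the average-cost minimizer targets.mean(axis=0)
# (a constant step size leaves a small residual bias).
```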
no code implementations • 9 Oct 2022 • Yuan Wang, Sebin Gracy, César A. Uribe, Hideaki Ishii, Karl Henrik Johansson
The upshot of devising such a strategy is that it allows health administration officials to ensure that there is sufficient capacity in the healthcare system to treat the most severe cases.
no code implementations • 3 Oct 2022 • Mohammad Taha Toghani, Soomin Lee, César A. Uribe
Our main technical contribution is a unified proof for asynchronous federated learning with bounded staleness that we apply to MAML and ME personalization frameworks.
no code implementations • 3 Oct 2022 • Mohammad Taha Toghani, César A. Uribe
Synchronous updates may compromise the efficiency of cross-device federated learning once the number of active clients increases.
no code implementations • 20 Jun 2022 • Nishant Mehrotra, Ashutosh Sabharwal, César A. Uribe
This paper takes the first steps toward enabling wireless networks to perform both imaging and communication in a distributed manner.
no code implementations • 18 Apr 2022 • Mohammad Taha Toghani, César A. Uribe
We study the decentralized consensus and stochastic optimization problems with compressed communications over static directed graphs.
no code implementations • 16 Apr 2022 • Ekaterina Trimbach, Edward Duc Hien Nguyen, César A. Uribe
We study the acceleration of the Local Polynomial Interpolation-based Gradient Descent method (LPI-GD), recently proposed for the approximate solution of empirical risk minimization (ERM) problems.
no code implementations • 22 Mar 2022 • Junhyung Lyle Kim, Mohammad Taha Toghani, César A. Uribe, Anastasios Kyrillidis
We propose a distributed Quantum State Tomography (QST) protocol, named Local Stochastic Factored Gradient Descent (Local SFGD), to learn the low-rank factor of a density matrix over a set of local machines.
no code implementations • 14 Mar 2022 • Tiancheng Qin, S. Rasoul Etesami, César A. Uribe
Our main contribution is to characterize the convergence rate of Local SGD as a function of $\{H_i\}_{i=1}^R$ under various settings of strongly convex, convex, and nonconvex local functions, where $R$ is the total number of communication rounds.
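A toy sketch of Local SGD with a round-dependent number of local steps, in the spirit of the $\{H_i\}_{i=1}^R$ schedule analyzed here (clients, losses, step sizes, and the schedule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
optima = rng.standard_normal((4, 2))       # clients' local minimizers
w = np.zeros(2)
H = [2, 4, 8, 16]                          # local steps per round, H_1..H_R

for H_i in H:                              # R communication rounds
    locals_ = []
    for opt in optima:
        v = w.copy()
        for _ in range(H_i):               # H_i local noisy gradient steps
            noise = 0.1 * rng.standard_normal(2)
            v -= 0.05 * (2.0 * (v - opt) + noise)
        locals_.append(v)
    w = np.mean(locals_, axis=0)           # synchronize by averaging
```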
no code implementations • 30 Jan 2022 • Tiancheng Qin, S. Rasoul Etesami, César A. Uribe
For general convex loss functions, we establish an error bound of $\mathcal{O}(1/T)$ under a mild data similarity assumption and an error bound of $\mathcal{O}(K/T)$ otherwise, where $K$ is the number of local steps and $T$ is the total number of iterations.
no code implementations • 14 Sep 2021 • Mohammad Taha Toghani, César A. Uribe
We propose a new decentralized average consensus algorithm with compressed communication that scales linearly with the network size $n$. We prove that the proposed method converges to the average of the initial values held locally by the agents of a network when agents are allowed to communicate with compressed messages.
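A sketch in the spirit of compressed gossip schemes such as CHOCO-GOSSIP, where each agent broadcasts a sparsified difference and all agents gossip on publicly tracked estimates; illustrative, not the paper's exact update:

```python
import numpy as np

def top1(v):
    """Keep only the largest-magnitude coordinate (a crude sparsifier)."""
    out = np.zeros_like(v)
    k = int(np.argmax(np.abs(v)))
    out[k] = v[k]
    return out

n, d = 5, 3
rng = np.random.default_rng(2)
x = rng.standard_normal((n, d))          # private initial values
h = np.zeros((n, d))                     # publicly tracked estimates of x
avg = x.mean(axis=0)                     # the target consensus value

W = np.zeros((n, n))                     # doubly stochastic ring weights
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

gamma = 0.05                             # small consensus step size
for _ in range(5000):
    q = np.array([top1(x[i] - h[i]) for i in range(n)])  # compressed messages
    h = h + q                            # everyone updates the estimates
    x = x + gamma * (W @ h - h)          # gossip on the public estimates only
# Rows of x approach `avg`, with only one coordinate sent per round.
```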
no code implementations • 16 Feb 2021 • Pavel Dvurechensky, Dmitry Kamzolov, Aleksandr Lukashevich, Soomin Lee, Erik Ordentlich, César A. Uribe, Alexander Gasnikov
Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems.
no code implementations • 14 Feb 2021 • Mohammad Taha Toghani, César A. Uribe
We study the problem of distributed cooperative learning, where a group of agents seeks to agree on a set of hypotheses that best describes a sequence of private observations.
no code implementations • 6 Nov 2020 • Tiancheng Qin, S. Rasoul Etesami, César A. Uribe
Agents have access to $F$ through noisy gradients, and they can locally communicate with their neighbors over a network.
no code implementations • 20 Oct 2020 • Eduardo Mojica-Nava, David Yanguas-Rojas, César A. Uribe
We consider the model of cooperative learning via distributed non-Bayesian learning, where a network of agents tries to jointly agree on a hypothesis that best describes a sequence of locally available observations.
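The canonical non-Bayesian update behind this model geometrically averages neighbors' beliefs and then applies a local Bayesian correction; a two-hypothesis toy sketch (graph, likelihoods, and signal model are illustrative):

```python
import numpy as np

n, m = 4, 2                              # agents, hypotheses
rng = np.random.default_rng(0)
beliefs = np.full((n, m), 1.0 / m)       # uniform initial beliefs

W = np.zeros((n, n))                     # doubly stochastic ring weights
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

# lik[s, k] = probability of binary signal s under hypothesis k (illustrative).
lik = np.array([[0.7, 0.3],
                [0.3, 0.7]])

for _ in range(100):
    obs = (rng.random(n) < 0.3).astype(int)   # signals drawn under hypothesis 0
    geo = np.exp(W @ np.log(beliefs))         # log-linear neighbor aggregation
    beliefs = geo * lik[obs]                  # local Bayesian update
    beliefs /= beliefs.sum(axis=1, keepdims=True)
# All agents' beliefs concentrate on hypothesis 0, the data-generating one.
```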
no code implementations • 7 Jul 2020 • César A. Uribe, Ali Jadbabaie
We propose a distributed, cubic-regularized Newton method for large-scale convex optimization over networks.
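Each cubic-regularized Newton step minimizes a cubic upper model of the objective, which reduces to a one-dimensional root find; a centralized single-step sketch (the distributed mechanics are in the paper; the test function and constant M are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def cubic_newton_step(grad, hess, M):
    """Solve min_s g's + 0.5 s'Hs + (M/6)||s||^3 via its optimality condition
    s = -(H + (M/2)||s|| I)^{-1} g, reduced to a root find on r = ||s||."""
    I = np.eye(len(grad))
    def phi(r):
        s = np.linalg.solve(hess + 0.5 * M * r * I, -grad)
        return np.linalg.norm(s) - r
    r = brentq(phi, 1e-12, 1e6)          # assumes hess + (M/2) r I is PD
    return np.linalg.solve(hess + 0.5 * M * r * I, -grad)

# One step on f(x) = 0.25 * ||x||^4 (illustrative test function).
x = np.array([2.0, -1.0])
g = np.dot(x, x) * x                     # gradient: ||x||^2 x
H = np.dot(x, x) * np.eye(2) + 2.0 * np.outer(x, x)   # Hessian
x_next = x + cubic_newton_step(g, H, M=6.0)
```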
no code implementations • 4 Nov 2019 • Pavel Dvurechensky, Mathias Staudigl, César A. Uribe
Many problems in statistical learning, imaging, and computer vision involve the optimization of a non-convex objective function with singularities at the boundary of the feasible set.
no code implementations • 3 Sep 2018 • César A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedić
Then, we study distributed optimization algorithms for non-dual friendly functions, as well as a method to improve the dependency on the parameters of the functions involved.
no code implementations • 8 Mar 2018 • César A. Uribe, Darina Dvinskikh, Pavel Dvurechensky, Alexander Gasnikov, Angelia Nedić
We propose a new class-optimal algorithm for the distributed computation of Wasserstein Barycenters over networks.
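As a centralized reference point, the entropically regularized Wasserstein barycenter can be computed by iterative Bregman projections; a toy 1-D sketch (this is the classical Sinkhorn-style baseline, not the paper's decentralized algorithm):

```python
import numpy as np

# Entropic Wasserstein barycenter of two histograms on a shared 1-D grid,
# via iterative Bregman projections (centralized baseline, illustrative).
grid = np.linspace(0, 1, 50)
C = (grid[:, None] - grid[None, :]) ** 2          # squared-distance cost
K_mat = np.exp(-C / 0.01)                         # Gibbs kernel, reg = 0.01

def normalize(h):
    return h / h.sum()

mus = [normalize(np.exp(-(grid - c) ** 2 / 0.005)) for c in (0.2, 0.8)]
u = [np.ones_like(grid) for _ in mus]
for _ in range(200):
    v = [m / (K_mat.T @ ui) for m, ui in zip(mus, u)]      # marginal fit
    logs = [np.log(ui * (K_mat @ vi)) for ui, vi in zip(u, v)]
    bary = np.exp(np.mean(logs, axis=0))                   # geometric mean
    u = [bary / (K_mat @ vi) for vi in v]                  # re-scale
# `bary` approximates the entropic barycenter of the two Gaussian bumps.
```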
no code implementations • 1 Dec 2017 • César A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedić
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks.
no code implementations • 10 Apr 2017 • Angelia Nedić, Alex Olshevsky, César A. Uribe
We study the problem of cooperative inference where a group of agents interact over a network and seek to estimate a joint parameter that best explains a set of observations.
no code implementations • 6 Dec 2016 • Angelia Nedić, Alex Olshevsky, César A. Uribe
We show a convergence rate of $O(1/k)$ with the constant term depending on the number of agents and the topology of the network.
no code implementations • 23 Sep 2016 • Angelia Nedić, Alex Olshevsky, César A. Uribe
We overview some results on distributed learning with focus on a family of recently proposed algorithms known as non-Bayesian social learning.
no code implementations • 19 Sep 2016 • Angelia Nedić, Alex Olshevsky, Wei Shi, César A. Uribe
A recent algorithmic family for distributed optimization, DIGing, has been shown to achieve geometric convergence over time-varying undirected/directed graphs.
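DIGing combines consensus mixing with gradient tracking: an auxiliary variable follows the network-average gradient while the iterates mix with neighbors. A static-graph sketch with quadratic local losses (the paper's results cover time-varying graphs; the losses here are illustrative):

```python
import numpy as np

n, d = 5, 2
rng = np.random.default_rng(3)
targets = rng.standard_normal((n, d))       # f_i(x) = ||x - t_i||^2

W = np.zeros((n, n))                        # doubly stochastic ring weights
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

grad = lambda X: 2.0 * (X - targets)
X = np.zeros((n, d))
Y = grad(X)                                 # tracker initialized at gradients
alpha = 0.1
for _ in range(300):
    X_new = W @ X - alpha * Y               # DIGing primal update
    Y = W @ Y + grad(X_new) - grad(X)       # gradient-tracking update
    X = X_new
# All rows of X converge geometrically to targets.mean(axis=0).
```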