no code implementations • 8 May 2024 • Xiaomeng Chen, Wei Huo, Kemi Ding, Subhrakanti Dey, Ling Shi
Due to the distributed nature of such systems, privacy and communication efficiency are two critical concerns.
no code implementations • 6 May 2024 • Wei Huo, Xiaomeng Chen, Kemi Ding, Subhrakanti Dey, Ling Shi
To jointly address these issues, we propose an algorithm that uses stochastic compression to save communication resources and conceal information through random errors induced by compression.
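A minimal sketch of the stochastic-compression idea referenced above, assuming a simple dithered uniform quantizer with step size `delta`; the actual compressor and noise model in the paper may differ:

```python
import numpy as np

def stochastic_quantize(x, delta=0.1, rng=None):
    """Dithered uniform quantizer: rounds x/delta up or down at random so
    that the output is an unbiased estimate of x. The random rounding error
    both reduces the message size and perturbs the value that is shared."""
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(x, dtype=float) / delta
    low = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiased).
    up = rng.random(np.shape(scaled)) < (scaled - low)
    return delta * (low + up)

# Example: the quantized vector equals x in expectation.
x = np.array([0.23, -1.07, 0.56])
print(stochastic_quantize(x))
```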
no code implementations • 25 Mar 2024 • Arunava Naha, Subhrakanti Dey
In this paper, we investigate a model-free optimal control design that minimizes an infinite-horizon average expected quadratic cost of states and control actions, subject to a probabilistic risk or chance constraint, using input-output data.
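For intuition, a chance (probabilistic risk) constraint requires that a violation event occur with probability at most some tolerance epsilon; in a data-driven setting this is often checked empirically from sampled trajectories. The check below is a generic illustration under assumed bound/shape conventions, not the paper's specific formulation:

```python
import numpy as np

def empirical_chance_constraint(trajectories, bound, epsilon):
    """Return True if the fraction of sampled trajectories whose state ever
    exceeds `bound` is at most epsilon (an empirical chance-constraint check).
    `trajectories` has shape (num_samples, horizon)."""
    violations = np.any(np.abs(trajectories) > bound, axis=1)
    return violations.mean() <= epsilon

# Toy example with synthetic state samples.
rng = np.random.default_rng(2)
samples = rng.normal(0.0, 0.5, size=(1000, 20))
print(empirical_chance_constraint(samples, bound=1.5, epsilon=0.05))
```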
no code implementations • 11 Mar 2024 • Huiwen Yang, Xiaomeng Chen, Lingying Huang, Subhrakanti Dey, Ling Shi
Over-the-air aggregation has attracted widespread attention for its potential advantages in task-oriented applications, such as distributed sensing, learning, and consensus.
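For readers unfamiliar with the idea, over-the-air aggregation exploits the superposition property of a wireless multiple-access channel: when all agents transmit simultaneously, the receiver observes a noisy sum of their signals, so an average is obtained in a single channel use. A toy illustration with made-up dimensions and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, dim, noise_std = 10, 5, 0.05

# Each agent holds a local vector (e.g., a gradient or a measurement).
local_values = rng.normal(size=(num_agents, dim))

# All agents transmit at once; the channel adds their signals plus noise.
received = local_values.sum(axis=0) + rng.normal(scale=noise_std, size=dim)

# The fusion center rescales to recover a noisy average from one channel use.
ota_average = received / num_agents
print(np.linalg.norm(ota_average - local_values.mean(axis=0)))
```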
no code implementations • 23 Nov 2023 • Xiaomeng Chen, Wei Huo, Yuchi Wu, Subhrakanti Dey, Ling Shi
We demonstrate that SETC-DNES guarantees linear convergence to the NE while achieving even greater reductions in communication costs compared to ETC-DNES.
no code implementations • 25 Oct 2023 • Huiwen Yang, Lingying Huang, Subhrakanti Dey, Ling Shi
In recent years, over-the-air aggregation has been widely considered in large-scale distributed learning, optimization, and sensing.
no code implementations • 29 Sep 2023 • Alessio Maritan, Subhrakanti Dey, Luca Schenato
Federated learning is a distributed learning framework that allows a set of clients to collaboratively train a model under the orchestration of a central server, without sharing raw data samples.
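As a point of reference (not the specific algorithm studied in this paper), the canonical federated averaging loop looks roughly as follows; the `QuadraticClient` and its `gradient` interface are hypothetical stand-ins for clients training on private data:

```python
import numpy as np

class QuadraticClient:
    """Toy client whose private data defines a quadratic loss ||w - target||^2."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)
    def gradient(self, w):
        return 2.0 * (w - self.target)

def federated_averaging(global_model, clients, rounds=20, local_steps=5, lr=0.1):
    """Skeleton of FedAvg: clients train locally on their own data and only
    model parameters (never raw samples) are sent back for server-side averaging."""
    for _ in range(rounds):
        client_models = []
        for client in clients:
            w = global_model.copy()
            for _ in range(local_steps):
                w -= lr * client.gradient(w)           # local SGD on private data
            client_models.append(w)
        global_model = np.mean(client_models, axis=0)  # server orchestration step
    return global_model

# Two clients with different local optima; the global model moves towards their average.
clients = [QuadraticClient([1.0, 0.0]), QuadraticClient([-1.0, 2.0])]
print(federated_averaging(np.zeros(2), clients))
```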
no code implementations • 25 May 2023 • Arunava Naha, Subhrakanti Dey
Two strategies are employed to handle the probabilistic constraint, covering the cases of known and unknown system models.
no code implementations • 18 May 2023 • Nicolò Dal Fabbro, Michele Rossi, Luca Schenato, Subhrakanti Dey
Edge networks call for communication-efficient (low-overhead) and robust distributed optimization (DO) algorithms.
no code implementations • 13 May 2023 • Alessio Maritan, Ganesh Sharma, Luca Schenato, Subhrakanti Dey
This paper considers the problem of distributed multi-agent learning, where the global aim is to minimize a sum of local objective (empirical loss) functions through local optimization and information exchange between neighbouring nodes.
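A bare-bones illustration of this setup is decentralized gradient descent over a doubly stochastic mixing matrix W matching the communication graph; the paper's actual method is more sophisticated, so treat this only as a sketch of the problem structure:

```python
import numpy as np

def decentralized_gd(W, grads, x0, step=0.05, iters=200):
    """Each node i mixes its estimate with its neighbours' estimates (row i of
    the doubly stochastic matrix W) and takes a gradient step on its own local
    loss f_i, so that the sum of the local losses is minimized cooperatively."""
    x = np.array(x0, dtype=float)                     # shape: (num_nodes, dim)
    for _ in range(iters):
        local_grads = np.stack([g(xi) for g, xi in zip(grads, x)])
        x = W @ x - step * local_grads                # exchange, then local step
    return x

# Toy example: two nodes minimizing (x-1)^2 + (x+1)^2; consensus settles near 0.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
grads = [lambda x: 2 * (x - 1), lambda x: 2 * (x + 1)]
print(decentralized_gd(W, grads, x0=np.zeros((2, 1))))
```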
no code implementations • 27 Mar 2023 • Huiwen Yang, Lingying Huang, Yuzhe Li, Subhrakanti Dey, Ling Shi
In this paper, we consider using simultaneous wireless information and power transfer (SWIPT) to recharge the sensor in an LQG control system, which provides a new approach to prolonging the network lifetime.
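To make the energy-harvesting side concrete, a simple battery recurrence is often used; the specific harvesting and transmission-energy values below are assumptions, not taken from the paper:

```python
def battery_update(level, harvested, tx_energy, capacity):
    """Sensor battery dynamics under SWIPT: energy harvested from the received
    RF signal is added, transmission energy is subtracted, and the level is
    clipped to [0, capacity]."""
    return min(capacity, max(0.0, level + harvested - tx_energy))

# Example: harvest 0.3 units, spend 0.5 on a transmission, capacity 2.0.
print(battery_update(level=1.0, harvested=0.3, tx_energy=0.5, capacity=2.0))
```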
no code implementations • 11 Feb 2022 • Nicolò Dal Fabbro, Subhrakanti Dey, Michele Rossi, Luca Schenato
There is a growing interest in the distributed optimization framework that goes under the name of Federated Learning (FL).
no code implementations • 9 Jul 2021 • Xiaomeng Chen, Lingying Huang, Lidong He, Subhrakanti Dey, Ling Shi
For privacy preservation, we propose a novel state-decomposition based gradient tracking approach (SD-Push-Pull) for distributed optimization over directed networks that preserves differential privacy, a strong notion that protects agents' privacy against an adversary with arbitrary auxiliary information.
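The state-decomposition idea can be illustrated in a few lines: each agent splits its true state into two substates whose average equals the original value, shares only one of them with its neighbours, and keeps the other private. The sketch below shows only the splitting step (with an arbitrary Gaussian mask as an assumption), not the full SD-Push-Pull update:

```python
import numpy as np

def decompose_state(x, rng=None):
    """Split the true state x into a visible substate x_a and a hidden
    substate x_b with (x_a + x_b) / 2 == x. Neighbours (and any eavesdropper)
    only ever see x_a, so x cannot be inferred from the exchanged value alone."""
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.normal(scale=1.0, size=np.shape(x))  # arbitrary random mask
    x_visible = x + offset
    x_hidden = x - offset
    return x_visible, x_hidden

x = np.array([3.0, -1.5])
x_a, x_b = decompose_state(x)
print((x_a + x_b) / 2)   # recovers x, while x_a alone reveals nothing about x
```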
no code implementations • 29 Jan 2021 • Matthias Pezzutto, Luca Schenato, Subhrakanti Dey
In this paper we consider the problem of transmission power allocation for remote estimation of a dynamical system in the case where the estimator can simultaneously receive packets from multiple interfering sensors, as is possible, e.g., with the latest wireless technologies such as 5G and WiFi.
no code implementations • 5 Jan 2021 • Arunava Naha, Andre Teixeira, Anders Ahlen, Subhrakanti Dey
An independent and identically distributed watermarking signal is added to the optimal linear quadratic Gaussian (LQG) control inputs, and a cumulative sum (CUSUM) test is carried out using the joint distribution of the innovation signal and the watermarking signal for quickest attack detection.
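A rough sketch of the detection mechanism described above: the watermark is an i.i.d. signal added to the LQG input, and a CUSUM statistic accumulates log-likelihood-ratio evidence computed from the observed innovation and watermark signals. The likelihood values and threshold below are illustrative assumptions, not the paper's test statistics:

```python
import numpy as np

def cusum_detect(log_likelihood_ratios, threshold=5.0):
    """One-sided CUSUM test: accumulate evidence for 'attack' over 'no attack'
    and flag the first time the statistic crosses the threshold."""
    s = 0.0
    for k, llr in enumerate(log_likelihood_ratios):
        s = max(0.0, s + llr)     # reset at zero, accumulate positive drift
        if s > threshold:
            return k              # quickest-detection alarm time
    return None                   # no alarm

# Toy example with made-up statistics: before sample 50 the innovations are
# consistent with the watermark (negative-drift LLRs), afterwards they are not.
rng = np.random.default_rng(1)
llrs = np.concatenate([rng.normal(-0.2, 1.0, 50), rng.normal(0.5, 1.0, 50)])
print(cusum_detect(llrs))
```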
no code implementations • 25 Sep 2020 • Xiaomeng Chen, Lingying Huang, Kemi Ding, Subhrakanti Dey, Ling Shi
That is to say, only the exchanged substate is visible to an adversary, preventing the initial state information from being leaked.