Search Results for author: Michal Yemini

Found 12 papers, 0 papers with code

Clipped SGD Algorithms for Privacy Preserving Performative Prediction: Bias Amplification and Remedies

no code implementations17 Apr 2024 Qiang Li, Michal Yemini, Hoi-To Wai

This paper studies the convergence properties of clipped SGD algorithms in a performative prediction setting, where the data distribution may shift due to the deployed prediction model.
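A minimal sketch of the kind of update the title refers to (per-sample gradient clipping plus Gaussian privacy noise, run while the data distribution reacts to the deployed model) follows; the quadratic loss, the shift model, and all constants are illustrative assumptions rather than the paper's setup.

```python
# Minimal sketch of clipped SGD with Gaussian privacy noise under a
# performative (model-dependent) data distribution. The quadratic loss,
# the shift strength `eps`, and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, steps, lr = 5, 2000, 0.05
clip_norm, noise_std = 1.0, 0.1   # per-sample clipping threshold and privacy noise scale
eps = 0.3                         # how strongly the deployed model shifts the data
theta_star = np.ones(dim)         # optimum of the un-shifted distribution
theta = np.zeros(dim)

def sample_data(theta_deployed, n=32):
    # Performative shift: the data mean drifts with the deployed model.
    mean = theta_star + eps * theta_deployed
    return mean + rng.normal(scale=0.5, size=(n, dim))

for t in range(steps):
    batch = sample_data(theta)
    grads = theta - batch                                 # per-sample gradients of 0.5*||theta - x||^2
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / norms)    # clip each sample's gradient
    noisy_grad = grads.mean(axis=0) + rng.normal(scale=noise_std, size=dim) / len(batch)
    theta -= lr * noisy_grad

print("deployed model:", np.round(theta, 3))
```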

Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy

no code implementations28 Feb 2023 Rajarshi Saha, Mohamed Seif, Michal Yemini, Andrea J. Goldsmith, H. Vincent Poor

This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity, where the goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server.
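As a rough illustration of mean estimation under intermittent connectivity (not the collaborative scheme analyzed in the paper), the sketch below has each node report to the server only with some probability, and the server reweights received samples by the inverse connection probability to keep the estimate unbiased on average; all probabilities and data are hypothetical.

```python
# Minimal sketch of distributed mean estimation when each node reaches the
# server only with some probability. The connectivity probabilities and the
# inverse-probability reweighting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 20, 4
local_data = rng.normal(size=(n_nodes, dim))          # one sample per node
p_connect = rng.uniform(0.3, 0.9, size=n_nodes)       # per-node link probabilities

true_mean = local_data.mean(axis=0)

# One communication round: only connected nodes report; the server reweights
# each received sample by 1/p_i so the estimate stays unbiased on average.
connected = rng.random(n_nodes) < p_connect
estimate = (local_data[connected] / p_connect[connected, None]).sum(axis=0) / n_nodes

print("true mean     :", np.round(true_mean, 3))
print("round estimate:", np.round(estimate, 3))
```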

Multi-Armed Bandits with Self-Information Rewards

no code implementations6 Sep 2022 Nir Weinberger, Michal Yemini

Additionally, under the assumption that the exact alphabet size is unknown and the player only knows a loose upper bound on it, a UCB-based algorithm is proposed in which the player aims to reduce the regret caused by the unknown alphabet size in a finite-time regime.

Multi-Armed Bandits
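For readers unfamiliar with UCB-style index policies, a standard UCB1 loop on Bernoulli arms is sketched below; it is a generic illustration only and does not model self-information rewards or an unknown alphabet size.

```python
# Standard UCB1 on Bernoulli arms, as a generic illustration of UCB-style
# index policies. The arm means and horizon are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
arm_means = [0.3, 0.5, 0.7]       # hypothetical Bernoulli arms
T = 5000
counts = np.zeros(len(arm_means))
sums = np.zeros(len(arm_means))

for t in range(1, T + 1):
    if t <= len(arm_means):
        arm = t - 1                                       # pull each arm once first
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < arm_means[arm])
    counts[arm] += 1
    sums[arm] += reward

regret = T * max(arm_means) - sums.sum()
print("empirical regret:", round(regret, 1))
```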

Semi-Decentralized Federated Learning with Collaborative Relaying

no code implementations23 May 2022 Michal Yemini, Rajarshi Saha, Emre Ozfatura, Deniz Gündüz, Andrea J. Goldsmith

We present a semi-decentralized federated learning algorithm wherein clients collaborate by relaying their neighbors' local updates to a central parameter server (PS).

Federated Learning
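A toy version of one such round is sketched below: each client that reaches the parameter server also forwards the updates it heard from its neighbors, and the server averages whatever it receives. The ring topology, link probability, and plain averaging rule are illustrative assumptions, not the paper's optimized relaying weights.

```python
# Minimal sketch of one round of semi-decentralized averaging with
# collaborative relaying of neighbors' updates. Topology and probabilities
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim = 6, 3
updates = rng.normal(size=(n_clients, dim))            # local model updates
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}
p_to_server = 0.5                                      # chance a client reaches the PS

received = {}                                          # client id -> update seen by the PS
for i in range(n_clients):
    if rng.random() < p_to_server:
        received[i] = updates[i]                       # own update
        for j in neighbors[i]:
            received.setdefault(j, updates[j])         # relay neighbors' updates too

aggregate = np.mean(list(received.values()), axis=0) if received else np.zeros(dim)
print(f"PS aggregated {len(received)} of {n_clients} updates:", np.round(aggregate, 3))
```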

Restless Multi-Armed Bandits under Exogenous Global Markov Process

no code implementations28 Feb 2022 Tomer Gafni, Michal Yemini, Kobi Cohen

Motivated by recent studies on related RMAB settings, the regret is defined as the reward loss with respect to a player that knows the dynamics of the problem, and plays at each time t the arm that maximizes the expected immediate value.

Multi-Armed Bandits
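The regret notion quoted above can be illustrated with a toy computation in which an oracle that observes the exogenous global state plays the arm with the largest expected immediate reward at every step; the two-state chain and the reward table below are invented for illustration.

```python
# Toy illustration of the regret definition: an oracle that knows the
# exogenous global Markov state plays the arm with the highest expected
# immediate reward each step; regret is the cumulative reward gap to an
# agent that always plays arm 0. The chain and reward table are invented.
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.9, 0.1],             # transition matrix of the global state
              [0.2, 0.8]])
mean_reward = np.array([[0.8, 0.2],   # rows: arms, cols: global state
                        [0.3, 0.7]])
T, state = 10_000, 0
oracle_total, naive_total = 0.0, 0.0

for t in range(T):
    oracle_arm = int(np.argmax(mean_reward[:, state]))   # best arm given the state
    oracle_total += mean_reward[oracle_arm, state]
    naive_total += mean_reward[0, state]                  # agent that ignores the state
    state = rng.choice(2, p=P[state])                     # exogenous global transition

print("regret of the naive agent:", round(oracle_total - naive_total, 1))
```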

Learning in Restless Bandits under Exogenous Global Markov Process

no code implementations17 Dec 2021 Tomer Gafni, Michal Yemini, Kobi Cohen

Motivated by recent studies on related RMAB settings, the regret is defined as the reward loss with respect to a player that knows the dynamics of the problem, and plays at each time $t$ the arm that maximizes the expected immediate value.

Cloud-Cluster Architecture for Detection in Intermittently Connected Sensor Networks

no code implementations3 Oct 2021 Michal Yemini, Stephanie Gil, Andrea J. Goldsmith

The connectivity of each sensor cluster is intermittent and depends on the communication opportunities available between the sensors and the fusion center.
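A minimal sketch of detection with intermittently connected clusters is given below: in each round the fusion center adds up only the log-likelihood ratios it actually receives. The Gaussian observation model and per-cluster connection probabilities are illustrative assumptions, not the paper's cloud-cluster architecture.

```python
# Minimal sketch of binary hypothesis testing with intermittently connected
# sensor clusters. The Gaussian observation model and per-cluster connection
# probabilities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_clusters, sensors_per_cluster = 4, 10
mu = 1.0                                   # signal mean under H1 (H0: zero mean)
truth = 1                                  # ground-truth hypothesis for this run
p_connect = [0.9, 0.6, 0.4, 0.8]           # per-cluster connectivity to the fusion center

llr_sum = 0.0
for c in range(n_clusters):
    obs = rng.normal(loc=mu * truth, scale=1.0, size=sensors_per_cluster)
    if rng.random() < p_connect[c]:        # cluster reaches the fusion center this round
        # Log-likelihood ratio of N(mu,1) vs N(0,1), summed over the cluster's observations
        llr_sum += np.sum(mu * obs - 0.5 * mu**2)

decision = int(llr_sum > 0)
print("fusion center decides H%d (truth H%d)" % (decision, truth))
```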

Characterizing Trust and Resilience in Distributed Consensus for Cyberphysical Systems

no code implementations9 Mar 2021 Michal Yemini, Angelia Nedić, Andrea Goldsmith, Stephanie Gil

Further, the expected convergence rate decays exponentially with the quality of the trust observations between agents.

Optimization and Control, Robotics, Systems and Control, Signal Processing
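A toy consensus loop in which each agent keeps only neighbors whose trust observation clears a threshold is sketched below; the trust values, threshold, and ring topology are illustrative assumptions, not the protocol analyzed in the paper.

```python
# Minimal sketch of consensus where each agent down-weights neighbors whose
# trust score falls below a threshold. Trust values, threshold, and topology
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_agents, rounds = 8, 50
x = rng.normal(size=n_agents)                              # initial agent values
trust = rng.uniform(0.0, 1.0, size=(n_agents, n_agents))   # pairwise trust observations
threshold = 0.4

for _ in range(rounds):
    x_new = x.copy()
    for i in range(n_agents):
        nbrs = [(i - 1) % n_agents, (i + 1) % n_agents]
        trusted = [j for j in nbrs if trust[i, j] >= threshold]   # keep only trusted neighbors
        if trusted:
            x_new[i] = 0.5 * x[i] + 0.5 * np.mean(x[trusted])
    x = x_new

print("values after %d rounds:" % rounds, np.round(x, 3))
```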

Interference Reduction in Virtual Cell Optimization

no code implementations30 Oct 2020 Michal Yemini, Elza Erkip, Andrea J. Goldsmith

Our numerical results show that our scheme decreases the number of users in the system whose rate falls below the guaranteed rate, set to 128 kbps, 256 kbps, or 512 kbps, when compared with our previously proposed optimization methods.

The Restless Hidden Markov Bandit with Linear Rewards and Side Information

no code implementations22 Oct 2019 Michal Yemini, Amir Leshem, Anelia Somekh-Baruch

Furthermore, we assume structural side information where the decision maker knows in advance that there are two types of hidden states; one is common to all arms and evolves according to a Markovian distribution, and the other is unique to each arm and is distributed according to an i.i.d.
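The two-layer hidden-state structure described above (a Markovian state common to all arms plus a per-arm i.i.d. state entering a linear reward) can be illustrated with the toy generator below; all distributions and coefficients are invented for illustration.

```python
# Toy generator for a two-layer hidden-state reward model: a global Markov
# state common to all arms plus a per-arm i.i.d. state, combined linearly.
# All distributions and coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])         # transition law of the common hidden state
n_arms, T = 3, 5
global_state = 0

for t in range(T):
    per_arm_state = rng.integers(0, 2, size=n_arms)      # i.i.d. per-arm hidden states
    rewards = 1.0 * global_state + 0.5 * per_arm_state + rng.normal(scale=0.1, size=n_arms)
    print(f"t={t} global={global_state} per-arm={per_arm_state} rewards={np.round(rewards, 2)}")
    global_state = rng.choice(2, p=P[global_state])       # common Markovian evolution
```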
