Search Results for author: Christina Fragouli

Found 9 papers, 0 papers with code

Differentially Private Stochastic Linear Bandits: (Almost) for Free

no code implementations · 7 Jul 2022 · Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi

In the shuffled model, we also achieve regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ for small $\epsilon$, as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.

Learning in Distributed Contextual Linear Bandits Without Sharing the Context

no code implementations · 8 Jun 2022 · Osama A. Hanna, Lin F. Yang, Christina Fragouli

Contextual linear bandits are a rich and theoretically important class of models with many practical applications.

Solving Multi-Arm Bandit Using a Few Bits of Communication

no code implementations · 11 Nov 2021 · Osama A. Hanna, Lin F. Yang, Christina Fragouli

Existing works usually fail to address this communication bottleneck and can become infeasible in certain applications.

Active Learning, Quantization
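To give a feel for the few-bits idea in the title above (a generic illustration, not the quantization scheme proposed in the paper), a reward in [0, 1] can be reported with a single stochastically rounded bit whose expectation equals the true reward, so averages computed from the received bits remain unbiased. The quantize_reward helper below is a hypothetical name.

import random

def quantize_reward(r):
    # Stochastic rounding: send one bit whose expectation equals r.
    return 1 if random.random() < r else 0

r = 0.37
bits = [quantize_reward(r) for _ in range(100000)]
print(sum(bits) / len(bits))  # close to 0.37: the 1-bit messages are unbiased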

A Reinforcement Learning Approach for Scheduling in mmWave Networks

no code implementations · 1 Aug 2021 · Mine Gokce Dogan, Yahya H. Ezzeldin, Christina Fragouli, Addison W. Bohannon

We consider a source that wishes to communicate with a destination at a desired rate, over a mmWave network where links are subject to blockage and nodes to failure (e.g., in a hostile military environment).

Reinforcement Learning

Quantizing data for distributed learning

no code implementations · 14 Dec 2020 · Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi

In this paper, we propose an alternate approach to learn from distributed data that quantizes data instead of gradients, and can support learning over applications where the size of gradient updates is prohibitive.

Quantization
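As a rough sketch of the approach summarized above, assuming a plain uniform quantizer and a toy least-squares task (neither is taken from the paper): each node quantizes its local samples once and sends them to the learner, which then trains centrally on the quantized data, so compression is paid on a one-time data transfer rather than on a gradient vector every iteration.

import numpy as np

def uniform_quantize(x, bits, lo, hi):
    # Map values in [lo, hi] onto 2**bits evenly spaced levels.
    levels = 2 ** bits - 1
    idx = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return lo + idx * (hi - lo) / levels  # dequantized values

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

# Each of 3 nodes holds its own samples and ships quantized copies once.
quantized_X, quantized_y = [], []
for _ in range(3):
    X = rng.uniform(-1, 1, size=(100, 5))
    y = X @ w_true + 0.01 * rng.normal(size=100)
    quantized_X.append(uniform_quantize(X, bits=4, lo=-1.0, hi=1.0))
    quantized_y.append(uniform_quantize(y, bits=8, lo=-5.0, hi=5.0))

Xq = np.vstack(quantized_X)
yq = np.concatenate(quantized_y)
w_hat = np.linalg.lstsq(Xq, yq, rcond=None)[0]  # central training on quantized data
print(np.linalg.norm(w_hat - w_true))           # small error despite the compression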

Successive Refinement of Privacy

no code implementations · 24 May 2020 · Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
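For context on what "randomness for LDP" means, the classic randomized-response mechanism below is a textbook example (not the construction analyzed in the paper): each user locally keeps their true bit with probability e^epsilon / (e^epsilon + 1) and flips it otherwise, which satisfies epsilon-LDP, and the aggregator can still debias the noisy reports.

import math
import random

def randomized_response(bit, epsilon):
    # Keep the true bit with probability e^eps / (e^eps + 1), else flip it.
    # This satisfies epsilon-local differential privacy for one binary value.
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debias(reports, epsilon):
    # Unbiased estimate of the fraction of ones from the noisy reports.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

true_bits = [1] * 300 + [0] * 700
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(debias(reports, epsilon=1.0))  # close to 0.3 on average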

Federated Recommendation System via Differential Privacy

no code implementations · 14 May 2020 · Tan Li, Linqi Song, Christina Fragouli

In this paper, we are interested in what we term the federated private bandits framework, which combines differential privacy with multi-agent bandit learning.

Federated Learning

On Distributed Quantization for Classification

no code implementations · 1 Nov 2019 · Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi

We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features that are gathered from distributed nodes through communication constrained channels.

Classification, General Classification, +1
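A minimal sketch of the setting described above, under assumed details that are not from the paper: two nodes each observe half of a feature vector, apply a simple scalar uniform quantizer, and the central node concatenates what it receives and applies a fixed linear classifier standing in for the pretrained model.

import numpy as np

def quantize(x, bits=3, lo=-3.0, hi=3.0):
    # Scalar uniform quantizer applied independently at each node.
    levels = 2 ** bits - 1
    idx = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return lo + idx * (hi - lo) / levels

rng = np.random.default_rng(1)
w = rng.normal(size=6)             # stand-in for pretrained linear classifier weights

x = rng.normal(size=6)             # a test feature vector
x_node1, x_node2 = x[:3], x[3:]    # each node observes half of the features

# Nodes quantize locally; only the quantized features cross the channel.
x_received = np.concatenate([quantize(x_node1), quantize(x_node2)])

label_full = int(w @ x > 0)          # decision on the original features
label_quant = int(w @ x_received > 0)  # decision on the quantized features
print(label_full, label_quant)       # typically agree unless x is near the boundary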

Regret vs. Bandwidth Trade-off for Recommendation Systems

no code implementations · 15 Oct 2018 · Linqi Song, Christina Fragouli, Devavrat Shah

We consider recommendation systems that need to operate under wireless bandwidth constraints, measured as the number of broadcast transmissions, and demonstrate a trade-off (tight for some instances) between regret and bandwidth in two scenarios: multi-armed bandits with context, and the case where there is a latent structure in the message space that can be exploited to shorten the learning phase.

Recommendation Systems
