Search Results for author: Osman Yağan

Found 7 papers, 2 papers with code

Best-Arm Identification in Correlated Multi-Armed Bandits

no code implementations • 10 Sep 2021 • Samarth Gupta, Gauri Joshi, Osman Yağan

In this paper we consider the problem of best-arm identification in multi-armed bandits in the fixed-confidence setting, where the goal is to identify, with probability $1-\delta$ for some $\delta>0$, the arm with the highest mean reward using the minimum possible number of samples from the set of arms $\mathcal{K}$.

Multi-Armed Bandits
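The fixed-confidence setting above can be illustrated with a standard successive-elimination baseline (a generic sketch, not the paper's correlated-bandit algorithm; the reward model in $[0,1]$ and the confidence radius below are illustrative assumptions):

```python
import math

def successive_elimination(arms, delta, horizon=100_000):
    """Fixed-confidence best-arm identification: repeatedly sample every
    surviving arm, then eliminate any arm whose upper confidence bound
    falls below the empirically best arm's lower confidence bound."""
    k = len(arms)
    counts = [0] * k
    means = [0.0] * k
    active = list(range(k))
    pulls = 0
    while len(active) > 1 and pulls < horizon:
        for i in active:
            r = arms[i]()                       # pull arm i, observe reward in [0, 1]
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]
            pulls += 1
        # anytime confidence radius; a union bound keeps failure prob <= delta
        def rad(i):
            return math.sqrt(math.log(4 * k * counts[i] ** 2 / delta)
                             / (2 * counts[i]))
        best = max(active, key=lambda i: means[i])
        active = [i for i in active
                  if means[i] + rad(i) >= means[best] - rad(best)]
    return max(active, key=lambda i: means[i])
```

With probability at least $1-\delta$ the returned index is the arm with the highest mean; the sample cost is driven by the gaps between arm means.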

Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning

no code implementations • 14 Dec 2020 • Yae Jee Cho, Samarth Gupta, Gauri Joshi, Osman Yağan

Due to communication constraints and intermittent client availability in federated learning, only a subset of clients can participate in each training round.

Fairness • Federated Learning

Multi-Armed Bandits with Correlated Arms

2 code implementations • 6 Nov 2019 • Samarth Gupta, Shreyas Chaudhari, Gauri Joshi, Osman Yağan

We consider a multi-armed bandit framework where the rewards obtained by pulling different arms are correlated.

Multi-Armed Bandits

A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting

no code implementations • 18 Oct 2018 • Samarth Gupta, Shreyas Chaudhari, Subhojyoti Mukherjee, Gauri Joshi, Osman Yağan

We consider a finite-armed structured bandit problem in which mean rewards of different arms are known functions of a common hidden parameter $\theta^*$.

Thompson Sampling
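The structured setting can be sketched as follows (a minimal illustration, not the paper's unified translation scheme; the finite grid over $\theta$ and the UCB-style confidence radius are assumptions): because each arm's mean is a known function $\mu_k(\theta)$ of the hidden parameter, samples from one arm shrink the set of plausible $\theta$, which in turn rules out arms that are optimal for no surviving parameter.

```python
import math

def structured_ucb(mu, thetas, pull, horizon):
    """mu[k] is the known mean-reward function of arm k; thetas is a
    finite grid of candidate hidden parameters.  Keep only parameters
    consistent with every arm's confidence interval, restrict play to
    arms optimal for some surviving parameter, and run UCB among them."""
    K = len(mu)
    counts = [0] * K
    means = [0.0] * K
    for t in range(1, horizon + 1):
        rad = [math.sqrt(2 * math.log(t) / counts[k]) if counts[k] else float("inf")
               for k in range(K)]
        alive = [th for th in thetas
                 if all(abs(mu[k](th) - means[k]) <= rad[k] for k in range(K))]
        if not alive:                     # numerical safety: never empty the set
            alive = list(thetas)
        competitive = {max(range(K), key=lambda k: mu[k](th)) for th in alive}
        arm = max(competitive, key=lambda k: means[k] + rad[k])
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return counts
```

The payoff of the structure is that non-competitive arms stop being pulled once the surviving $\theta$ set is small, instead of being explored at the usual logarithmic rate.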

On the Evolution of Spreading Processes in Complex Networks

no code implementations • 10 Oct 2018 • Rashad Eletreby, Yong Zhuang, Kathleen M. Carley, Osman Yağan

In this paper, we investigate the evolution of spreading processes on complex networks with the aim of i) revealing the role of evolution on the threshold, probability, and final size of epidemics; and ii) exploring the interplay between the structural properties of the network and the dynamics of evolution.

Physics and Society • Social and Information Networks

Correlated Multi-armed Bandits with a Latent Random Source

2 code implementations • 17 Aug 2018 • Samarth Gupta, Gauri Joshi, Osman Yağan

As a result, there are regimes where our algorithm achieves a $\mathcal{O}(1)$ regret as opposed to the typical logarithmic regret scaling of multi-armed bandit algorithms.

Multi-Armed Bandits

Active Distribution Learning from Indirect Samples

no code implementations • 16 Aug 2018 • Samarth Gupta, Gauri Joshi, Osman Yağan

At each time step, we choose one of $K$ possible functions $g_1, \ldots, g_K$ and observe the corresponding sample $g_i(X)$.

Privacy Preserving
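A hypothetical sketch of this indirect-sampling setting (the indicator functions, the variance-based allocation rule, and the discrete support are all illustrative assumptions, not the paper's algorithm): if $g_i(X) = \mathbf{1}\{X = i\}$, each query returns a Bernoulli sample of $P(X = i)$, and an active learner can spend more queries on the bins whose estimates are currently least certain.

```python
import random

def active_indirect_estimate(sample_x, K, budget, seed=0):
    """Estimate the distribution of X over {0, ..., K-1} using only the
    indirect indicator observations g_i(X) = 1{X == i}.  Each round,
    query the function whose Bernoulli-variance estimate per sample,
    p_i(1 - p_i) / n_i, is currently largest."""
    rng = random.Random(seed)
    counts = [0] * K
    ones = [0] * K
    for i in range(K):                       # one warm-up query per function
        ones[i] += int(sample_x(rng) == i)
        counts[i] += 1
    for _ in range(budget - K):
        def uncertainty(i):
            p = ones[i] / counts[i]
            return max(p * (1 - p), 1e-3) / counts[i]   # floor keeps every bin sampled
        i = max(range(K), key=uncertainty)
        ones[i] += int(sample_x(rng) == i)
        counts[i] += 1
    return [ones[i] / counts[i] for i in range(K)]
```

Greedy variance allocation roughly equalizes $p_i(1-p_i)/n_i$ across bins, so higher-variance bins receive proportionally more of the query budget than uniform sampling would give them.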
