Search Results for author: Nima Akbarzadeh

Found 5 papers, 0 papers with code

Approximate information state based convergence analysis of recurrent Q-learning

no code implementations • 9 Jun 2023 • Erfan Seyedsalehi, Nima Akbarzadeh, Amit Sinha, Aditya Mahajan

In spite of the large literature on reinforcement learning (RL) algorithms for partially observable Markov decision processes (POMDPs), a complete theoretical understanding is still lacking.

Q-Learning • Reinforcement Learning (RL)

On learning Whittle index policy for restless bandits with scalable regret

no code implementations • 7 Feb 2022 • Nima Akbarzadeh, Aditya Mahajan

In particular, we consider a restless bandit model and propose a Thompson-sampling based learning algorithm that is tuned to the underlying structure of the model.

Scheduling • Thompson Sampling
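As a rough illustration of the posterior-sampling principle behind this entry, here is a minimal Thompson sampling loop for a plain Bernoulli bandit. It is a sketch only: the paper's algorithm instead samples restless-bandit model parameters and exploits the model's structure, and every constant below is an illustrative assumption.

```python
import numpy as np

# Minimal Thompson sampling for a stationary Bernoulli bandit.
# This only illustrates the posterior-sampling principle; the paper's
# algorithm samples restless-bandit model parameters and acts through
# the model's structure. All constants here are illustrative.

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])   # hypothetical arm means
n_arms, horizon = len(true_means), 5000
alpha = np.ones(n_arms)                  # Beta posterior parameters
beta = np.ones(n_arms)

for t in range(horizon):
    theta = rng.beta(alpha, beta)        # sample one mean per arm
    arm = int(np.argmax(theta))          # act greedily on the sample
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward                 # conjugate posterior update
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
```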

Two families of indexable partially observable restless bandits and Whittle index computation

no code implementations • 12 Apr 2021 • Nima Akbarzadeh, Aditya Mahajan

We consider restless bandits with general state space under partial observability, with two observation models: in the first, the state of each bandit is not observable at all; in the second, the state of each bandit is observable only when that bandit is chosen.
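A minimal sketch of how a single arm's belief state could evolve under these two observation models, using a finite state space for concreteness. This is one reading of the setup rather than the paper's formulation, and the transition matrix and observed state are hypothetical.

```python
import numpy as np

# Belief updates for one arm under the two observation models above
# (an illustrative reading, not the paper's exact formulation).
# The transition matrix P is an assumed 2-state example.

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])               # hypothetical dynamics
belief = np.array([1.0, 0.0])            # start: known to be in state 0

# Model 1: the state is never observed, so the belief simply
# propagates through the dynamics at every step.
for _ in range(3):
    belief = belief @ P
print("unobserved-arm belief:", belief)

# Model 2: the state is observed only when the arm is chosen, so
# choosing the arm resets the belief to a point mass on what was seen.
observed_state = 1                       # suppose we chose the arm and saw state 1
belief = np.eye(2)[observed_state]
belief = belief @ P                      # then it propagates again while idle
print("post-observation belief:", belief)
```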

Conditions for indexability of restless bandits and an O(K^3) algorithm to compute Whittle index

no code implementations • 13 Aug 2020 • Nima Akbarzadeh, Aditya Mahajan

We then revisit a previously proposed algorithm, called the adaptive greedy algorithm, which is known to compute the Whittle index for a subclass of restless bandits.
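For context, a generic (and much slower) way to compute a Whittle index is bisection on the passive subsidy: solve the subsidized single-arm problem by value iteration and find the subsidy at which the two actions are indifferent at the target state. The sketch below does exactly that under a discounted criterion. It is not the paper's O(K^3) adaptive greedy algorithm, and the transition matrices, rewards, and discount factor are all assumed for illustration (indexability is taken for granted).

```python
import numpy as np

# Hedged sketch: Whittle index by bisection on the passive subsidy,
# solving the subsidized single-arm MDP by discounted value iteration.
# A generic baseline, NOT the paper's O(K^3) adaptive greedy algorithm;
# P0/P1/r0/r1 and gamma are illustrative assumptions.

P0 = np.array([[0.8, 0.2], [0.4, 0.6]])  # passive transitions (assumed)
P1 = np.array([[0.6, 0.4], [0.1, 0.9]])  # active transitions (assumed)
r0 = np.array([0.0, 0.0])                # passive rewards (assumed)
r1 = np.array([0.5, 1.0])                # active rewards (assumed)
gamma = 0.95

def action_gap(lam, s, iters=2000):
    """Q_active(s) - Q_passive(s) under passive subsidy lam."""
    V = np.zeros(len(r0))
    for _ in range(iters):
        Q0 = r0 + lam + gamma * P0 @ V   # passive action earns the subsidy
        Q1 = r1 + gamma * P1 @ V
        V = np.maximum(Q0, Q1)
    return (r1 + gamma * P1 @ V)[s] - (r0 + lam + gamma * P0 @ V)[s]

def whittle_index(s, lo=-10.0, hi=10.0, tol=1e-6):
    """Bisect for the subsidy that makes both actions optimal at s."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if action_gap(mid, s) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print([round(whittle_index(s), 3) for s in range(2)])
```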

Gambler's Ruin Bandit Problem

no code implementations • 21 May 2016 • Nima Akbarzadeh, Cem Tekin

In the GRBP, the learner proceeds in a sequence of rounds, where each round is a Markov decision process (MDP) with two actions (arms): a continuation action, which moves the learner randomly over the state space around the current state, and a terminal action, which moves the learner directly into one of the two terminal states (the goal state and the dead-end state).
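The round structure lends itself to a small simulation. The sketch below encodes one reading of a GRBP round: an interior state moves up or down under the continuation action and jumps to a terminal state under the terminal action. The state space size, step probabilities, jump success probability, and threshold policy are all illustrative assumptions.

```python
import numpy as np

# Toy simulation of one GRBP round as described above: states 0..N with
# 0 = dead-end and N = goal; continuation takes a random step around the
# current state, the terminal action jumps straight to goal or dead-end.
# All probabilities here are illustrative assumptions.

rng = np.random.default_rng(1)
N = 10                                   # goal state (assumed)
p_up = 0.55                              # continuation step-up prob (assumed)
p_goal_jump = 0.4                        # terminal-action success prob (assumed)

def play_round(policy, start=5):
    s = start
    while 0 < s < N:
        if policy(s) == "continue":      # random walk around current state
            s += 1 if rng.random() < p_up else -1
        else:                            # jump directly to a terminal state
            s = N if rng.random() < p_goal_jump else 0
    return s == N                        # True iff the goal was reached

# Example threshold policy (hypothetical): keep walking near the goal,
# gamble on the terminal action when far from it.
wins = sum(play_round(lambda s: "continue" if s >= 4 else "terminate")
           for _ in range(10_000))
print("goal-reaching frequency:", wins / 10_000)
```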
