Search Results for author: Arindam Khan

Found 6 papers, 0 papers with code

Mitigating Disparity while Maximizing Reward: Tight Anytime Guarantee for Improving Bandits

no code implementations • 19 Aug 2022 • Vishakha Patil, Vineet Nair, Ganesh Ghalme, Arindam Khan

We study the tension that arises between two seemingly conflicting objectives in the horizon-unaware setting: a) maximizing the cumulative reward at any time based on current rewards of the arms, and b) ensuring that arms with better long-term rewards get sufficient opportunities even if they initially have low rewards.
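A minimal, illustrative sketch of the tension described above in the improving-bandits setting: an arm whose reward grows with the number of times it is pulled can look poor now yet be better in the long run. The reward curves and the greedy rule below are hypothetical, not the paper's algorithm.

```python
# Hypothetical improving-bandits instance: arm A gives a flat moderate
# reward, arm B starts low but improves with every pull it receives.
def reward(arm, pulls):
    if arm == "A":
        return 0.6                        # flat: always a moderate reward
    return min(1.0, 0.1 + 0.05 * pulls)   # improving: poor at first, better later

# Objective (a) alone: greedily pull whichever arm currently looks best.
pulls = {"A": 0, "B": 0}
for t in range(30):
    arm = max(pulls, key=lambda a: reward(a, pulls[a]))
    pulls[arm] += 1

print(pulls, reward("B", 18))
# Greedy play never pulls B, so it never reaches B's higher long-term
# reward -- the disparity that objective (b) is meant to mitigate.
```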

Fairness and Welfare Quantification for Regret in Multi-Armed Bandits

no code implementations • 27 May 2022 • Siddharth Barman, Arindam Khan, Arnab Maiti, Ayush Sawarni

Since NSW is known to satisfy fairness axioms, our approach complements the utilitarian considerations of average (cumulative) regret, wherein the algorithm is evaluated via the arithmetic mean of its expected rewards.
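A minimal sketch contrasting the two welfare measures mentioned above: the utilitarian criterion evaluates the algorithm via the arithmetic mean of its expected rewards, while the Nash social welfare (NSW) takes their geometric mean. The toy per-round rewards are illustrative only.

```python
import math

expected_rewards = [0.9, 0.8, 0.1, 0.7]  # hypothetical per-round expected rewards

# Utilitarian welfare: arithmetic mean of the expected rewards.
utilitarian = sum(expected_rewards) / len(expected_rewards)

# Nash social welfare: geometric mean of the expected rewards.
nsw = math.prod(expected_rewards) ** (1.0 / len(expected_rewards))

print(f"arithmetic mean = {utilitarian:.3f}, NSW = {nsw:.3f}")
# The geometric mean is dragged down by the low-reward round (0.1),
# reflecting the fairness axioms that NSW satisfies.
```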

Fairness • Multi-Armed Bandits

Approximation Algorithms for ROUND-UFP and ROUND-SAP

no code implementations • 7 Feb 2022 • Debajyoti Kar, Arindam Khan, Andreas Wiese

In ROUND-UFP, the goal is to find a packing of all tasks into a minimum number of copies (rounds) of the given path such that for each copy, the total demand of tasks on any edge does not exceed the capacity of the respective edge.
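A minimal sketch of the per-round feasibility condition stated above: on a path with edge capacities, a set of tasks (each a subpath with a demand) fits into one round iff the total demand on every edge does not exceed that edge's capacity. The instance below is hypothetical.

```python
def round_is_feasible(capacities, tasks):
    """capacities[e]: capacity of edge e along the path (edges 0..m-1);
    each task is (start_edge, end_edge, demand) covering start..end inclusive."""
    load = [0] * len(capacities)
    for start, end, demand in tasks:
        for e in range(start, end + 1):
            load[e] += demand
    return all(load[e] <= capacities[e] for e in range(len(capacities)))

# Hypothetical path with 4 edges and three tasks.
capacities = [5, 5, 4, 4]
tasks = [(0, 2, 2), (1, 3, 1), (2, 2, 1)]
print(round_is_feasible(capacities, tasks))  # True: no edge exceeds its capacity
```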

Approximation Algorithms for Generalized Multidimensional Knapsack

no code implementations • 11 Feb 2021 • Arindam Khan, Eklavya Sharma, K. V. N. Sreenivas

The input is a set of rectangular items, each with an associated profit and $d$ nonnegative weights ($d$-dimensional vector), and a square knapsack.
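A minimal sketch of the non-geometric side of the constraint described above: each item carries a profit and a $d$-dimensional weight vector, and any selected subset must respect a capacity in every one of the $d$ dimensions. The geometric packing of the rectangles into the square knapsack is a separate constraint not modelled here; the capacities and items are hypothetical.

```python
def weight_feasible(items, capacities):
    """items: list of (profit, weight_vector); capacities: d-dimensional capacity vector."""
    d = len(capacities)
    totals = [0.0] * d
    for _, weights in items:
        for i in range(d):
            totals[i] += weights[i]
    return all(totals[i] <= capacities[i] for i in range(d))

# Hypothetical instance with d = 2 weight dimensions and unit capacities.
selected = [(10, (0.4, 0.2)), (7, (0.3, 0.5))]
print(weight_feasible(selected, capacities=(1.0, 1.0)))  # True
print(sum(profit for profit, _ in selected))             # total profit 17
```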

Data Structures and Algorithms Computational Geometry

Streaming Algorithms for Stochastic Multi-armed Bandits

no code implementations • 9 Dec 2020 • Arnab Maiti, Vishakha Patil, Arindam Khan

In this setting, the arms arrive in a stream, and the number of arms that can be stored in memory at any time is bounded.
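A minimal, illustrative sketch of the streaming constraint described above: arms arrive one by one, and the algorithm may keep at most `memory_size` arms in memory, discarding the rest. The replacement rule shown (retain the arms with the highest empirical means after a few sample pulls) is a simple placeholder, not the algorithm from the paper.

```python
import random

def stream_bandit(arm_means, memory_size=2, samples_per_arm=20, seed=0):
    rng = random.Random(seed)
    memory = []  # (empirical_mean, true_mean) for the arms currently stored
    for mu in arm_means:                   # arms arrive in a stream
        pulls = [1 if rng.random() < mu else 0 for _ in range(samples_per_arm)]
        est = sum(pulls) / samples_per_arm
        memory.append((est, mu))
        memory.sort(reverse=True)          # keep only the best-looking arms
        memory = memory[:memory_size]      # enforce the memory bound
    return memory

print(stream_bandit([0.3, 0.9, 0.5, 0.7]))  # at most 2 arms survive in memory
```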

Multi-Armed Bandits • Open-Ended Question Answering
