no code implementations • 27 Feb 2023 • Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanovic
In this paper, we study targeted poisoning attacks in a two-agent setting where an attacker implicitly poisons the effective environment of one of the agents by modifying the policy of its peer.
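A minimal sketch of the mechanism described in this entry, under assumed toy dynamics (not the paper's attack algorithm): in a two-agent MDP, marginalizing the joint transition kernel over the peer's policy yields the victim's effective environment, so an attacker that shifts the peer's policy within a budget implicitly shifts that environment. All sizes, the budget eps, and the mixing-style attack are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    S, A = 4, 2                                      # toy numbers of states and actions
    # Joint kernel P[s, a_victim, a_peer, s'] (each last-axis slice is a distribution).
    P = rng.dirichlet(np.ones(S), size=(S, A, A))

    def effective_transitions(P, peer_policy):
        """Victim's effective kernel P_eff[s, a_v, s'] under a fixed peer policy."""
        # peer_policy[s, a_p] = probability the peer plays a_p in state s
        return np.einsum('sabt,sb->sat', P, peer_policy)

    benign = np.full((S, A), 1.0 / A)                # uniform peer policy
    target = np.zeros((S, A))
    target[:, 0] = 1.0                               # attacker's preferred peer behavior
    eps = 0.3                                        # assumed modification budget
    poisoned = (1 - eps) * benign + eps * target

    P_benign = effective_transitions(P, benign)
    P_poisoned = effective_transitions(P, poisoned)
    print(np.abs(P_benign - P_poisoned).max())       # size of the induced environment shift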
no code implementations • 7 Feb 2023 • Debmalya Mandal, Goran Radanovic, Jiarui Gan, Adish Singla, Rupak Majumdar
We show that minimizing regret with this new general discounting is equivalent to minimizing regret with uncertain episode lengths.
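As a hedged illustration of this equivalence (a standard identity, not the paper's exact regret statement): if the episode length $H$ is random and independent of the reward sequence, with survival probabilities $d_t = \Pr[H \ge t]$, then for any policy

    \[
    \mathbb{E}\Big[\sum_{t=1}^{H} r_t\Big]
      \;=\; \mathbb{E}\Big[\sum_{t \ge 1} \mathbf{1}\{H \ge t\}\, r_t\Big]
      \;=\; \sum_{t \ge 1} \Pr[H \ge t]\,\mathbb{E}[r_t]
      \;=\; \sum_{t \ge 1} d_t\, \mathbb{E}[r_t],
    \]

so the return under an uncertain episode length is a generally discounted return with weights $d_t$; the geometric case $d_t = \gamma^{t-1}$ recovers standard $\gamma$-discounting.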
no code implementations • 26 Aug 2022 • Debmalya Mandal, Jiarui Gan
We consider the problem of minimizing regret with respect to the fair policies maximizing three different fair objectives -- minimum welfare, generalized Gini welfare, and Nash social welfare.
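For concreteness, a hedged sketch of the three fair objectives named above, evaluated on a hypothetical vector of per-agent values; the Gini weights are an illustrative choice, not the paper's.

    import numpy as np

    returns = np.array([0.9, 0.4, 0.7])          # one (made-up) value per agent

    # Minimum welfare: the value of the worst-off agent.
    min_welfare = returns.min()

    # Generalized Gini welfare: a weighted sum of the values sorted in increasing
    # order, with non-increasing weights so worse-off agents count more.
    gini_weights = np.array([0.5, 0.3, 0.2])
    generalized_gini = np.dot(gini_weights, np.sort(returns))

    # Nash social welfare: the product of the values (often compared via its
    # geometric mean or logarithm).
    nash_welfare = np.prod(returns)

    print(min_welfare, generalized_gini, nash_welfare)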
1 code implementation • 30 Jun 2022 • Debmalya Mandal, Stelios Triantafyllou, Goran Radanovic
We introduce the framework of performative reinforcement learning where the policy chosen by the learner affects the underlying reward and transition dynamics of the environment.
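A hedged sketch of the feedback loop this framework describes: because the environment's parameters depend on the deployed policy, the learner alternates between deploying a policy and re-solving the environment it induces, stopping at a performatively stable point. The specific reward dependence below is a toy assumption (and transitions are ignored), not the paper's model.

    import numpy as np

    rng = np.random.default_rng(1)
    S, A = 3, 2
    base_R = rng.random((S, A))

    def induced_reward(policy):
        # Toy performative effect: rewards are dampened where the deployed
        # policy concentrates its probability mass.
        return base_R * (1.0 - 0.5 * policy)

    policy = np.full((S, A), 1.0 / A)
    for _ in range(20):                        # repeated retraining
        R = induced_reward(policy)
        greedy = R.argmax(axis=1)              # myopic state-wise best response
        new_policy = np.eye(A)[greedy]
        if np.allclose(new_policy, policy):    # performatively stable point reached
            break
        policy = new_policy
    print(policy)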
no code implementations • 18 Jan 2022 • Samuel Deng, Yilin Guo, Daniel Hsu, Debmalya Mandal
Prior works on learning linear representations for meta-learning assume that there is a common shared representation across different tasks, and do not consider the additional task-specific observable side information.
1 code implementation • 9 Sep 2021 • Debajyoti Kar, Mert Kosan, Debmalya Mandal, Sourav Medya, Arlei Silva, Palash Dey, Swagato Sanyal
Ensuring fairness in machine learning algorithms is a challenging and essential task.
no code implementations • 19 May 2021 • Hadi Hosseini, Debmalya Mandal, Nisarg Shah, Kevin Shi
A clever recent approach, \emph{surprisingly popular voting}, elicits additional information from the individuals, namely their \emph{prediction} of other individuals' votes, and provably recovers the ground truth even when the experts are in the minority.
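A hedged sketch of the surprisingly popular rule on a binary question: each individual reports a vote and a prediction of the fraction of others voting "yes", and the answer whose actual frequency exceeds its average predicted frequency is selected. The data below are made up.

    import numpy as np

    votes = np.array([1, 0, 0, 1, 0, 0, 0])             # 1 = "yes", 0 = "no"
    predicted_yes = np.array([0.6, 0.7, 0.5, 0.8, 0.6, 0.7, 0.6])

    actual_yes = votes.mean()                            # observed frequency of "yes"
    expected_yes = predicted_yes.mean()                  # average predicted frequency

    # "yes" is surprisingly popular if it occurs more often than predicted.
    answer = "yes" if actual_yes > expected_yes else "no"
    print(answer)                                        # here "no": "yes" is less common than predicted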
no code implementations • 27 Feb 2021 • Debmalya Mandal, Sourav Medya, Brian Uzzi, Charu Aggarwal
Graph Neural Networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems.
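As a minimal sketch of what this generalization adds on top of standard deep networks, here is one message-passing layer: node features are averaged over graph neighbors before a learned linear map. The graph, sizes, and weights are arbitrary.

    import numpy as np

    A = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)       # adjacency of a toy 3-node graph
    A_hat = A + np.eye(3)                        # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))     # degree normalization
    X = np.random.default_rng(2).random((3, 4))  # input node features
    W = np.random.default_rng(3).random((4, 2))  # layer weights

    H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)   # aggregate, transform, ReLU
    print(H.shape)                               # (3, 2): new node embeddings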
no code implementations • NeurIPS 2020 • Nicholas Bishop, Hau Chan, Debmalya Mandal, Long Tran-Thanh
On the other hand, when B_T is not known, we show that the dynamic approximate regret of RGA-META is at most O((K+\tilde{D})^{1/4}\tilde{B}^{1/2}T^{3/4}), where \tilde{B} is the maximal path-variation budget within each batch of RGA-META (which is provably of order o(\sqrt{T})).
2 code implementations • NeurIPS 2020 • Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel Hsu
In this work, we develop classifiers that are fair not only with respect to the training distribution, but also for a class of distributions that are weighted perturbations of the training samples.
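A hedged sketch of the evaluation idea behind this entry: instead of checking a fairness metric only on the empirical (uniform-weight) training distribution, check it under weighted perturbations of the training samples. The demographic-parity metric and the particular weight perturbation below are illustrative choices, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200
    group = rng.integers(0, 2, size=n)            # protected attribute
    y_hat = rng.integers(0, 2, size=n)            # classifier predictions

    def dp_gap(weights):
        """Weighted demographic-parity gap |P_w(y_hat=1 | g=0) - P_w(y_hat=1 | g=1)|."""
        rates = [np.average(y_hat[group == g], weights=weights[group == g]) for g in (0, 1)]
        return abs(rates[0] - rates[1])

    uniform = np.ones(n)
    perturbed = (1.0 + 0.5 * rng.standard_normal(n)).clip(0.1)   # one reweighting
    print(dp_gap(uniform), dp_gap(perturbed))

A robust notion of fairness would control the gap uniformly over a whole class of such weightings rather than a single draw.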
no code implementations • NeurIPS 2019 • Debmalya Mandal, Ariel D. Procaccia, Nisarg Shah, David Woodruff
We take an unorthodox view of voting by expanding the design space to include both the elicitation rule, whereby voters map their (cardinal) preferences to votes, and the aggregation rule, which transforms the reported votes into collective decisions.
1 code implementation • 12 Feb 2019 • Debmalya Mandal, David Parkes
We model the potential outcomes as a three-dimensional tensor of low rank, where the three dimensions correspond to the agents, time periods and the set of possible histories.
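A hedged sketch of this modeling assumption: the potential outcomes form a 3-way tensor indexed by (agent, time period, history), assumed to have low CP rank, i.e. to be a sum of a few outer products. Sizes and rank below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    n_agents, n_periods, n_histories, rank = 10, 6, 4, 2

    U = rng.random((n_agents, rank))       # agent factors
    V = rng.random((n_periods, rank))      # time-period factors
    W = rng.random((n_histories, rank))    # history factors

    # Y[i, t, h] = sum_r U[i, r] * V[t, r] * W[h, r]
    Y = np.einsum('ir,tr,hr->ith', U, V, W)
    print(Y.shape)                         # (10, 6, 4)

Only one history is realized per (agent, period); the low-rank structure is what makes recovering the unobserved entries plausible.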
no code implementations • 6 Jul 2017 • Yang Liu, Goran Radanovic, Christos Dimitrakakis, Debmalya Mandal, David C. Parkes
In addition, we define the \emph{fairness regret}, which corresponds to the degree to which an algorithm is not calibrated: perfect calibration requires that the probability of selecting an arm equals the probability with which that arm has the best quality realization.
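A hedged sketch of this calibration notion for a single round: given a belief over each arm's quality, a calibrated policy selects arm i with probability equal to the probability that arm i has the highest realization. The Gaussian beliefs are illustrative, and the deviation computed below is one natural single-round measure of miscalibration, not necessarily the paper's exact (cumulative) definition.

    import numpy as np

    rng = np.random.default_rng(6)
    means = np.array([0.5, 0.6, 0.4])
    stds = np.array([0.2, 0.3, 0.1])

    # Monte Carlo estimate of P(arm i is best) under the belief.
    draws = rng.normal(means, stds, size=(100_000, 3))
    p_best = np.bincount(draws.argmax(axis=1), minlength=3) / len(draws)

    selection = np.array([0.5, 0.3, 0.2])                 # some policy's selection probabilities
    miscalibration = np.abs(selection - p_best).sum()     # deviation from calibrated play
    print(p_best, miscalibration)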