ICML 2020 • Prashanth L. A., Krishna Jagannathan, Ravi Kumar Kolla
We derive concentration bounds for CVaR estimates, considering separately the cases of light-tailed and heavy-tailed distributions.
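The abstract does not spell out the estimator whose concentration is analyzed; as a hedged illustration, the standard empirical CVaR estimator in the Rockafellar–Uryasev form (the function name `empirical_cvar` and the default level are illustrative, not taken from the paper) can be sketched as:

```python
import numpy as np

def empirical_cvar(samples, alpha=0.95):
    """Empirical CVaR of a loss sample at level alpha, i.e. the average
    of (roughly) the worst (1 - alpha) fraction of losses, computed via
    the Rockafellar-Uryasev representation CVaR = VaR + E[(X - VaR)^+]/(1 - alpha)."""
    x = np.asarray(samples, dtype=float)
    var = np.quantile(x, alpha)                 # empirical Value-at-Risk
    excess = np.maximum(x - var, 0.0)           # losses beyond VaR
    return var + excess.mean() / (1.0 - alpha)
```

For heavy-tailed samples this plug-in estimate concentrates more slowly than in the light-tailed case, which is the distinction the entry's bounds address.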
6 Aug 2018 • Ravi Kumar Kolla, Prashanth L. A., Sanjay P. Bhat, Krishna Jagannathan
In several real-world applications involving decision making under uncertainty, the traditional expected value objective may not be suitable, as it may be necessary to control losses in the case of a rare but extreme event.
30 Nov 2016 • Ravi Kumar Kolla, Prashanth L. A., Aditya Gopalan, Krishna Jagannathan, Michael Fu, Steve Marcus
For the $K$-armed bandit setting, we derive an upper bound on the expected regret of our proposed algorithm, and prove a matching lower bound that establishes its order-optimality.
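The proposed algorithm itself is not reproduced in this listing; purely as a generic illustration of the $K$-armed regret framework the entry works in, a minimal UCB1-style sketch (a standard textbook policy, not the authors' method; `pull` is a hypothetical reward oracle) looks like:

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Generic UCB1: play each arm once, then repeatedly pull the arm
    maximizing empirical mean + sqrt(2 log t / n_pulls)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for a in range(n_arms):                     # initial round-robin
        means[a] = pull(a)
        counts[a] = 1
    for t in range(n_arms, horizon):
        bonus = [math.sqrt(2.0 * math.log(t + 1) / counts[a]) for a in range(n_arms)]
        a = max(range(n_arms), key=lambda i: means[i] + bonus[i])
        r = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
    return counts
```

Regret upper and lower bounds of the kind the entry describes quantify how many pulls such a policy wastes on suboptimal arms over the horizon.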
29 Feb 2016 • Ravi Kumar Kolla, Krishna Jagannathan, Aditya Gopalan
A key finding of this paper is that natural extensions of widely-studied single agent learning policies to the network setting need not perform well in terms of regret.