no code implementations • 23 Aug 2016 • Sougata Chaudhuri, Ambuj Tewari
We consider two settings of online learning to rank where feedback is restricted to top ranked items.
no code implementations • NeurIPS 2016 • Sougata Chaudhuri, Ambuj Tewari
The implementation of their algorithm depends on two separate offline oracles, and the distribution-dependent regret additionally requires the existence of a unique optimal action for the learner.
no code implementations • 6 Mar 2016 • Ambuj Tewari, Sougata Chaudhuri
We consider the generalization ability of algorithms for learning to rank at a query level, a problem also called subset ranking.
no code implementations • 6 Mar 2016 • Sougata Chaudhuri, Ambuj Tewari
We consider an online learning to rank setting in which, at each round, an oblivious adversary generates a list of $m$ documents, pertaining to a query, and the learner produces scores to rank the documents.
no code implementations • 6 Mar 2016 • Sougata Chaudhuri, Georgios Theocharous, Mohammad Ghavamzadeh
We study the problem of personalized advertisement recommendation (PAR), which consists of a user visiting a system (website) and the system displaying one of $K$ ads to the user.
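The PAR setting above — pick one of $K$ ads per visit, observe only the outcome of the displayed ad — is a bandit problem. A minimal epsilon-greedy sketch of that interaction loop (a generic baseline for illustration, not the algorithm from the paper; class and method names are assumptions):

```python
import random

class EpsilonGreedyAds:
    """Display one of K ads per user visit, trading off exploration
    and exploitation over empirical click-through rates.
    Generic bandit baseline, not the paper's method."""

    def __init__(self, k, epsilon=0.1, seed=0):
        self.k = k
        self.epsilon = epsilon
        self.counts = [0] * k        # times each ad was shown
        self.values = [0.0] * k      # running mean click rate per ad
        self.rng = random.Random(seed)

    def select(self):
        # With probability epsilon, explore a random ad;
        # otherwise exploit the empirically best ad so far.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.k)
        return max(range(self.k), key=lambda a: self.values[a])

    def update(self, ad, clicked):
        # Incremental mean update for the displayed ad's click rate.
        self.counts[ad] += 1
        self.values[ad] += (clicked - self.values[ad]) / self.counts[ad]
```

The partial-feedback aspect is what makes PAR harder than supervised learning: `update` only ever touches the ad that was actually shown.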
no code implementations • 13 Nov 2015 • Bopeng Li, Sougata Chaudhuri, Ambuj Tewari
We consider the link prediction problem in a partially observed network, where the objective is to make predictions in the unobserved portion of the network.
no code implementations • 4 Aug 2015 • Sougata Chaudhuri, Ambuj Tewari
We show that if there exists a perfect oracle ranker that can correctly rank, with some margin, each instance in an online sequence of ranking data, then the cumulative loss of the perceptron algorithm on that sequence is bounded by a constant, irrespective of the length of the sequence.
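The constant-loss claim mirrors the classic perceptron margin bound from binary classification, where on a stream separable with margin $\gamma$ and data radius $R$ the number of mistakes is at most $(R/\gamma)^2$ regardless of stream length. A minimal sketch of that classification analogue (illustrative only; the paper's result is for ranking data):

```python
def perceptron_stream(stream, dim):
    """Run the online perceptron over (x, y) pairs with y in {-1, +1}
    and return the cumulative mistake count. If the stream is linearly
    separable with margin gamma and radius R, the classic bound gives
    mistakes <= (R / gamma)**2, independent of the stream's length."""
    w = [0.0] * dim
    mistakes = 0
    for x, y in stream:
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:  # mistake (or zero margin): additive update
            mistakes += 1
            w = [wi + y * xi for wi, xi in zip(w, x)]
    return mistakes
```

On a separable stream the mistake count plateaus: feeding the same margin-separated points for 1,000 rounds or 1,000,000 rounds yields the same constant total.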
no code implementations • 5 Oct 2014 • Sougata Chaudhuri, Ambuj Tewari
We consider a setting where a system learns to rank a fixed set of $m$ items.
no code implementations • 3 May 2014 • Ambuj Tewari, Sougata Chaudhuri
In binary classification and regression problems, it is well understood that Lipschitz continuity and smoothness of the loss function play key roles in governing generalization error bounds for empirical risk minimization algorithms.
no code implementations • 3 May 2014 • Sougata Chaudhuri, Ambuj Tewari
En route to developing the online algorithm and generalization bound, we propose a novel family of listwise large margin ranking surrogates.
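A generic structured-hinge template gives the flavor of a listwise large-margin ranking surrogate (an illustrative form with symbols assumed here, not the paper's exact family):

$$\phi(s, R) \;=\; \max_{\sigma \in S_m}\Big[\, \Delta(\sigma, \sigma_R) \;+\; \langle s, \chi(\sigma)\rangle \,\Big] \;-\; \langle s, \chi(\sigma_R)\rangle,$$

where $s$ is the learner's score vector over the $m$ documents, $R$ the relevance labels, $\sigma_R$ a relevance-sorted permutation, $\chi(\sigma)$ a position-weight encoding of a permutation, and $\Delta$ a target ranking loss. Taking the maximum over whole permutations in $S_m$, rather than over document pairs, is what makes such a surrogate listwise, and the construction upper-bounds the ranking loss of the score-induced permutation.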