no code implementations • 12 Sep 2024 • Gagan Aggarwal, Ashwinkumar Badanidiyuru, Paul Dütting, Federico Fusco
In the stochastic setting, we design an efficient learning algorithm achieving a regret bound of $O(T^{3/4})$.
no code implementations • 20 Jan 2024 • Adel Javanmard, Lin Chen, Vahab Mirrokni, Ashwinkumar Badanidiyuru, Gang Fu
In this paper, we study two natural loss functions for learning from aggregate responses: the bag-level loss and the instance-level loss.
no code implementations • NeurIPS 2023 • Ashwinkumar Badanidiyuru, Badih Ghazi, Pritish Kamath, Ravi Kumar, Ethan Leeman, Pasin Manurangsi, Avinash V Varadarajan, Chiyuan Zhang
We propose a new family of label randomizers for training regression models under the constraint of label differential privacy (DP).
no code implementations • 2 Jun 2022 • Ashwinkumar Badanidiyuru, Zhe Feng, Tianxi Li, Haifeng Xu
Incrementality, which is used to measure the causal effect of showing an ad to a potential customer (e.g., a user on an internet platform) versus not, is a central object for advertisers in online advertising platforms.
no code implementations • 7 Sep 2021 • Ashwinkumar Badanidiyuru, Zhe Feng, Guru Guruganesh
For binary feedback, when the noise distribution $\mathcal{F}$ is known, we propose a bidding algorithm based on maximum likelihood estimation (MLE) that achieves at most $\widetilde{O}(\sqrt{\log(d) T})$ regret.
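As a minimal illustration of estimating a parameter from binary win/loss feedback under a known noise distribution, here is a probit-style grid-search MLE sketch with standard Gaussian noise. This is not the paper's bidding algorithm; the function name, the noise model, and the grid search are assumptions made for the example.

```python
import math

def probit_mle(bids, outcomes, grid):
    """Illustrative grid-search MLE (not the paper's algorithm): with
    binary feedback y = 1 iff bid b exceeds an unknown value v plus
    known N(0,1) noise, P(win) = Phi(b - v). Pick the v on the grid
    that maximizes the log-likelihood of the observed outcomes."""
    def phi(x):
        # standard normal CDF via the error function
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    def loglik(v):
        ll = 0.0
        for b, y in zip(bids, outcomes):
            # clamp to avoid log(0) on extreme grid points
            p = min(max(phi(b - v), 1e-12), 1 - 1e-12)
            ll += math.log(p) if y else math.log(1 - p)
        return ll

    return max(grid, key=loglik)
```

With enough samples the grid maximizer concentrates near the true value, which is the basic mechanism a regret analysis of an MLE-based bidder builds on.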
no code implementations • 18 Feb 2021 • Rad Niazadeh, Negin Golrezaei, Joshua Wang, Fransisca Susan, Ashwinkumar Badanidiyuru
We leverage this notion to transform greedy robust offline algorithms into online algorithms with $O(T^{2/3})$ (approximate) regret in the bandit setting.
no code implementations • 6 Jan 2021 • Ashwinkumar Badanidiyuru, Andrew Evdokimov, Vinodh Krishnan, Pan Li, Wynn Vonnegut, Jayden Wang
Predicting the expected value or number of post-click conversions (purchases or other events) is a key task in performance-based digital advertising.
no code implementations • NeurIPS 2020 • Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak
In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.
no code implementations • NeurIPS 2015 • Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, Andreas Krause
In this paper, we formalize this challenge as a submodular cover problem.
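For context, the classic greedy algorithm for submodular cover (repeatedly add the element with the largest marginal gain until a target value is reached) can be sketched as follows; this is standard background, not the distributed algorithm the paper develops.

```python
def greedy_cover(f, ground_set, target):
    """Classic greedy for submodular cover: grow S by the element with
    the largest marginal gain until f(S) reaches the target. For
    integer-valued monotone submodular f this is a logarithmic-factor
    approximation to the minimum-size cover (Wolsey)."""
    S = set()
    remaining = set(ground_set)
    while f(S) < target and remaining:
        best = max(remaining, key=lambda e: f(S | {e}) - f(S))
        if f(S | {best}) == f(S):
            break  # no element adds value; the target is unreachable
        S.add(best)
        remaining.remove(best)
    return S
```

Each iteration makes O(n) value-oracle calls, which is exactly the cost that motivates faster and distributed variants.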
no code implementations • 28 Sep 2014 • Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause
Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?
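The lazy greedy (accelerated greedy) algorithm the question refers to exploits submodularity: marginal gains computed against an older solution upper-bound current gains, so stale values can be kept in a max-heap and re-evaluated only when they reach the top. A minimal sketch under a cardinality constraint (set-valued oracle interface is my own choice for the example):

```python
import heapq

def lazy_greedy(f, ground_set, k):
    """Lazy greedy for monotone submodular maximization under a
    cardinality constraint k. Keeps possibly stale marginal gains in a
    max-heap; by submodularity a stale gain only overestimates, so an
    element is accepted once its fresh gain still tops the heap."""
    S = set()
    base = f(S)
    # max-heap of (-gain, element); gains may be stale
    heap = [(-(f({e}) - base), e) for e in ground_set]
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, e = heapq.heappop(heap)
        # recompute this element's marginal gain w.r.t. the current S
        gain = f(S | {e}) - f(S)
        if not heap or gain >= -heap[0][0]:
            # fresh gain still beats every (over-)estimate: accept
            if gain > 0:
                S.add(e)
        else:
            # gain shrank; reinsert with the updated value
            heapq.heappush(heap, (-gain, e))
    return S
```

In practice most heap pops never trigger a re-evaluation, which is why lazy greedy is dramatically faster than naive greedy while returning the same solution.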
no code implementations • 27 Feb 2014 • Ashwinkumar Badanidiyuru, John Langford, Aleksandrs Slivkins
We study contextual bandits with ancillary constraints on resources, which are common in real-world applications such as choosing ads or dynamic pricing of items.
no code implementations • 11 May 2013 • Ashwinkumar Badanidiyuru, Robert Kleinberg, Aleksandrs Slivkins
As one example of a concrete application, we consider the problem of dynamic posted pricing with limited supply and obtain the first algorithm whose regret, with respect to the optimal dynamic policy, is sublinear in the supply.
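To make the setting concrete, here is a toy epsilon-greedy posted-price seller over a fixed price grid with limited supply. It is only an illustration of the problem shape (prices as arms, supply as a budget), not the paper's algorithm, and all names and parameters are assumptions for the example.

```python
import random

def posted_pricing(prices, supply, horizon, buy_prob, seed=0):
    """Toy posted-price seller with limited supply (illustrative only):
    track empirical revenue per price, post the best-looking price with
    probability 1 - eps, explore otherwise, and stop when the supply
    is exhausted."""
    rng = random.Random(seed)
    revenue = {p: 0.0 for p in prices}
    plays = {p: 0 for p in prices}
    total = 0.0
    for t in range(horizon):
        if supply == 0:
            break  # the supply constraint ends the game early
        eps = max(0.05, 1.0 / (t + 1) ** (1 / 3))
        if rng.random() < eps:
            p = rng.choice(prices)  # explore a random price
        else:
            # exploit: highest empirical revenue per posting
            p = max(prices, key=lambda q: revenue[q] / plays[q]
                    if plays[q] else float("inf"))
        sold = rng.random() < buy_prob(p)
        plays[p] += 1
        if sold:
            revenue[p] += p
            supply -= 1
            total += p
    return total
```

The key difficulty the paper addresses is invisible in this sketch: with limited supply, the benchmark is the optimal dynamic policy, and spending inventory during exploration carries a real opportunity cost.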