no code implementations • 10 Jan 2024 • Jessica Dai, Bailey Flanigan, Nika Haghtalab, Meena Jagadeesan, Chara Podimata
A common explanation for negative user impacts of content recommender systems is misalignment between the platform's objective and user welfare.
no code implementations • 21 Jul 2023 • Khashayar Khosravi, Renato Paes Leme, Chara Podimata, Apostolis Tsorvantzis
We present online learning algorithms for every possible value of the evolution rate $\lambda$, and we show that our results are robust to various model misspecifications.
no code implementations • 13 Feb 2023 • Andreas Haupt, Dylan Hadfield-Menell, Chara Podimata
We model this user behavior as a two-stage noisy signalling game between the recommendation system and its users: the recommendation system first commits to a recommendation policy and presents content during a cold-start phase, which users strategically choose to consume in order to influence the types of content they will be recommended in the subsequent recommendation phase.
no code implementations • 25 Nov 2022 • Keegan Harris, Anish Agarwal, Chara Podimata, Zhiwei Steven Wu
Unlike this classical setting, we permit the units generating the panel data to be strategic, i.e., units may modify their pre-intervention outcomes in order to receive a more desirable intervention.
no code implementations • 15 Jun 2022 • Renato Paes Leme, Chara Podimata, Jon Schneider
We study the problem of contextual search in the adversarial noise model.
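As a point of reference, the noiseless one-dimensional special case of contextual search reduces to plain binary search over the hidden value. The sketch below covers only that toy case (the `feedback` oracle and iteration count are illustrative assumptions) and does not handle the adversarial noise the paper addresses:

```python
def binary_search_threshold(feedback, lo=0.0, hi=1.0, iters=30):
    """Noiseless 1-D special case of contextual search: each query x
    reveals only whether the hidden value theta exceeds x."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feedback(mid):   # True means theta > mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta = 0.37  # hidden value, known only to the feedback oracle
estimate = binary_search_threshold(lambda x: theta > x)
```

After 30 halvings the interval has width $2^{-30}$, so the estimate is accurate to well under $10^{-6}$; adversarially corrupted feedback is precisely what breaks this naive halving and motivates the robust algorithms studied in the paper.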
no code implementations • 1 Mar 2021 • Yahav Bechavod, Chara Podimata, Zhiwei Steven Wu, Juba Ziani
We initiate the study of the effects of non-transparency in decision rules on individuals' ability to improve in strategic learning settings.
no code implementations • 22 Jun 2020 • Chara Podimata, Aleksandrs Slivkins
We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds.
no code implementations • 26 Feb 2020 • Akshay Krishnamurthy, Thodoris Lykouris, Chara Podimata, Robert Schapire
We initiate the study of contextual search when some of the agents can behave in ways inconsistent with the underlying response model.
1 code implementation • ICML 2020 • Rupert Freeman, David M. Pennock, Chara Podimata, Jennifer Wortman Vaughan
First, we want the learning algorithm to be no-regret with respect to the best fixed expert in hindsight.
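The no-regret benchmark against the best fixed expert in hindsight can be illustrated with the classical multiplicative-weights (Hedge) update. This is a generic textbook sketch, not the paper's algorithm; the function name and learning rate `eta` are illustrative:

```python
import math

def hedge(losses, eta=0.5):
    """Multiplicative weights (Hedge): the learner's cumulative expected
    loss tracks that of the best fixed expert in hindsight (no-regret).
    `losses` is a list of per-round loss vectors, one entry per expert."""
    n = len(losses[0])
    weights = [1.0] * n
    total = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]                  # play experts proportionally
        total += sum(p * l for p, l in zip(probs, round_losses))
        weights = [w * math.exp(-eta * l)                 # down-weight lossy experts
                   for w, l in zip(weights, round_losses)]
    best_fixed = min(sum(l[i] for l in losses) for i in range(n))
    return total, best_fixed
```

Running this on any loss sequence, the gap `total - best_fixed` grows only sublinearly in the number of rounds, which is exactly the no-regret property the first desideratum asks for.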
no code implementations • NeurIPS 2020 • Yiling Chen, Yang Liu, Chara Podimata
We address the question of repeatedly learning linear classifiers against agents who strategically try to game the deployed classifiers, and we use the Stackelberg regret to measure the performance of our algorithms.
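To make the gaming behavior concrete, here is a minimal sketch of an agent best-responding to a deployed linear classifier $\mathrm{sign}(w \cdot x + b)$: the agent moves the shortest distance across the decision boundary whenever the movement cost is below the benefit of a positive label. The linear cost model and parameter names are illustrative assumptions, not the paper's exact setup:

```python
import math

def best_response(x, w, b, cost_per_unit=1.0, benefit=1.0, eps=1e-9):
    """Agent gaming a linear classifier sign(w.x + b): move minimally to
    be classified positive, but only if the cost is worth the benefit."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    if score >= 0:
        return list(x)                      # already classified positive
    norm = math.sqrt(sum(wi * wi for wi in w))
    dist = -score / norm                    # distance to the boundary
    if dist * cost_per_unit > benefit:
        return list(x)                      # gaming is too expensive
    # move along the normal direction just past the boundary
    return [xi + (dist + eps) * wi / norm for xi, wi in zip(x, w)]
```

A learner who ignores this best response trains on manipulated features; Stackelberg regret charges the learner against classifiers evaluated on the agents' post-manipulation responses, which is why it is the natural benchmark here.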
no code implementations • 27 May 2018 • Yiling Chen, Chara Podimata, Ariel D. Procaccia, Nisarg Shah
This paper is part of an emerging line of work at the intersection of machine learning and mechanism design, which aims to avoid noise in training data by correctly aligning the incentives of data sources.
1 code implementation • 3 Nov 2017 • Zhe Feng, Chara Podimata, Vasilis Syrgkanis
We address online learning in complex auction settings, such as sponsored search auctions, where the bidder's value is unknown to her, evolves in an arbitrary manner, and is observed only if the bidder wins an allocation.
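The partial feedback here (the value is observed only upon winning) is the hallmark of bandit learning. The classical EXP3 algorithm, sketched below as a generic reference point rather than the paper's method, handles exactly this observe-only-what-you-play structure via importance-weighted reward estimates; the parameter names and reward encoding are illustrative:

```python
import math
import random

def exp3(reward_rounds, eta=0.1, seed=0):
    """EXP3: bandit analogue of multiplicative weights. Only the chosen
    arm's reward (in [0, 1]) is observed each round, mirroring auctions
    where a bidder sees her value only when she wins."""
    rng = random.Random(seed)
    n_arms = len(reward_rounds[0])
    weights = [1.0] * n_arms
    total = 0.0
    for rewards in reward_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        reward = rewards[arm]               # only this arm's reward is seen
        total += reward
        # importance-weighted estimate: divide by the play probability
        weights[arm] *= math.exp(eta * reward / probs[arm])
    return total
```

The division by `probs[arm]` makes the per-arm reward estimate unbiased despite the bandit feedback, which is what lets EXP3-style algorithms compete with the best fixed arm even when most rewards are never observed.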