no code implementations • 27 Jul 2023 • Rigel Galgana, Negin Golrezaei
Motivated by Carbon Emissions Trading Schemes, Treasury Auctions, Procurement Auctions, and Wholesale Electricity Markets, all of which auction multiple homogeneous units, we consider the problem of learning how to bid in repeated multi-unit pay-as-bid auctions.
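The pay-as-bid payment rule itself is standard: the highest bids win the available units, and each winning bid pays exactly its own amount (unlike a uniform-price auction, where all winners pay the same clearing price). A minimal sketch of that rule, with illustrative bidder names and no connection to the paper's learning algorithm:

```python
def pay_as_bid_outcome(bids, num_units):
    """Allocate `num_units` identical units to the highest per-unit bids;
    under the pay-as-bid rule, each winning bid pays exactly what it bid.

    bids: list of (bidder, per_unit_bid) pairs, one pair per unit demanded.
    Returns a dict mapping each winning bidder to its total payment.
    """
    # Winners are the top `num_units` bids across all bidders.
    winners = sorted(bids, key=lambda b: b[1], reverse=True)[:num_units]
    payments = {}
    for bidder, bid in winners:
        # Each unit won is paid for at the bidder's own stated price.
        payments[bidder] = payments.get(bidder, 0.0) + bid
    return payments


# Example: two units for sale; bidder "a" wins at 5 and "b" at 4,
# and each pays its own bid rather than a common clearing price.
print(pay_as_bid_outcome([("a", 5), ("a", 3), ("b", 4), ("c", 2)], 2))
```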
no code implementations • 21 Jun 2023 • Negin Golrezaei, Patrick Jaillet, Zijie Zhou
Specifically, in a C-Pareto optimal setting, we maximize the robust ratio while ensuring that the consistent ratio is at least C. Our proposed C-Pareto optimal algorithm is an adaptive protection level algorithm, which extends the classical fixed protection level algorithm introduced in Littlewood (2005) and Ball and Queyranne (2009).
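For context, the classical fixed protection level rule referenced above (Littlewood's rule) reserves capacity for the high-fare class up to the smallest level $y$ at which $P(D_{\text{high}} > y) \le p_{\text{low}} / p_{\text{high}}$. A minimal sketch of that fixed rule, assuming integer capacity and a user-supplied demand CDF (the adaptive C-Pareto algorithm in the paper generalizes this, and is not shown):

```python
def littlewood_protection_level(p_high, p_low, high_demand_cdf, capacity):
    """Classical fixed protection level (Littlewood's rule).

    Returns the smallest y in {0, ..., capacity} such that
    P(D_high > y) <= p_low / p_high; a low-fare request is accepted
    only while remaining capacity exceeds this protection level.
    """
    ratio = p_low / p_high
    for y in range(capacity + 1):
        # high_demand_cdf(y) = P(D_high <= y), so 1 - cdf is the tail.
        if 1.0 - high_demand_cdf(y) <= ratio:
            return y
    return capacity


# Example: high-fare demand uniform on {0, ..., 10}, fares 200 vs 100;
# the rule protects seats until the tail probability drops to 1/2.
uniform_cdf = lambda y: min((y + 1) / 11, 1.0)
print(littlewood_protection_level(200, 100, uniform_cdf, 10))
```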
no code implementations • 12 Jun 2023 • Qinyi Chen, Jason Cheuk Nam Liang, Negin Golrezaei, Djallel Bouneffouf
Today's online platforms lean heavily on algorithmic recommendations to bolster user engagement and drive revenue.
no code implementations • 12 Jun 2023 • Fransisca Susan, Negin Golrezaei, Okke Schrijvers
Our strategy maximizes the expected total utility across auctions while satisfying the advertiser's budget constraints in expectation.
no code implementations • NeurIPS 2023 • Simina Brânzei, Mahsa Derakhshan, Negin Golrezaei, Yanjun Han
We analyze the properties of this auction in both the offline and online settings.
no code implementations • 3 Feb 2023 • Yuan Deng, Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang, Vahab Mirrokni
In light of this finding, under a bandit feedback setting that mimics real-world scenarios, where advertisers have limited information on the ad auctions in each channel and on how channels procure ads, we present an efficient learning algorithm that produces per-channel budgets whose resulting conversion approximates that of the globally optimal problem.
no code implementations • NeurIPS 2023 • Qinyi Chen, Negin Golrezaei, Djallel Bouneffouf
Traditional multi-armed bandit (MAB) frameworks, predominantly examined under stochastic or adversarial settings, often overlook the temporal dynamics inherent in many real-world applications such as recommendation systems and online advertising.
no code implementations • 5 Aug 2022 • Fransisca Susan, Negin Golrezaei, Ehsan Emamjomeh-Zadeh, David Kempe
To overcome the identifiability problem, we introduce a directed acyclic graph (DAG) representation of the choice model.
no code implementations • 18 Feb 2021 • Rad Niazadeh, Negin Golrezaei, Joshua Wang, Fransisca Susan, Ashwinkumar Badanidiyuru
We leverage this notion to transform greedy robust offline algorithms into online algorithms with $O(T^{2/3})$ (approximate) regret in the bandit setting.
no code implementations • 10 Sep 2020 • Negin Golrezaei, Vahideh Manshadi, Jon Schneider, Shreyas Sekar
We first show that existing learning algorithms, which are optimal in the absence of fake users, may converge to highly sub-optimal rankings under manipulation by fake users.
1 code implementation • 14 Jul 2020 • Bart P. G. Van Parys, Negin Golrezaei
We propose a novel learning algorithm that we call "DUSA" whose regret matches the information-theoretic regret lower bound up to a constant factor and can handle a wide range of structural information.
no code implementations • NeurIPS 2019 • Negin Golrezaei, Adel Javanmard, Vahab Mirrokni
Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions.
no code implementations • 8 Nov 2019 • Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang
We show that this design allows the seller to control the number of periods in which buyers significantly corrupt their bids.
no code implementations • NeurIPS 2019 • Santiago Balseiro, Negin Golrezaei, Mohammad Mahdian, Vahab Mirrokni, Jon Schneider
We consider the variant of this problem where in addition to receiving the reward $r_{i, t}(c)$, the learner also learns the values of $r_{i, t}(c')$ for some other contexts $c'$ in a set $\mathcal{O}_i(c)$; i.e., the rewards that would have been achieved by performing that action under different contexts $c' \in \mathcal{O}_i(c)$.
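The cross-learning observation model described above can be sketched as a simple update step: after playing arm $i$ in context $c$, the learner updates an empirical mean estimate for every context whose reward was revealed, not just the played one. This incremental-mean helper is an illustrative assumption, not the paper's algorithm:

```python
def cross_learning_update(estimates, counts, arm, observed_rewards):
    """One cross-learning update after playing `arm`.

    observed_rewards maps each context c' in O_arm(c) (including the
    played context c itself) to the revealed reward r_{arm}(c').
    Updates the running empirical mean for every observed (arm, c') pair.
    """
    for ctx, r in observed_rewards.items():
        counts[(arm, ctx)] = counts.get((arm, ctx), 0) + 1
        n = counts[(arm, ctx)]
        old = estimates.get((arm, ctx), 0.0)
        # Incremental mean: new = old + (r - old) / n.
        estimates[(arm, ctx)] = old + (r - old) / n
    return estimates, counts


# Example: playing arm 0 in context "c1" also reveals its reward in "c2",
# so both (arm, context) estimates improve from a single pull.
est, cnt = cross_learning_update({}, {}, 0, {"c1": 1.0, "c2": 0.5})
print(est)
```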