no code implementations • ICML 2020 • Yuan Deng, Sébastien Lahaie, Vahab Mirrokni
Motivated by the repeated sale of online ads via auctions, optimal pricing in repeated auctions has attracted a large body of research.
no code implementations • 11 Apr 2022 • Vincent Cohen-Addad, Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean $k$-median and $k$-means problems.
no code implementations • 12 Feb 2022 • Santiago R. Balseiro, Haihao Lu, Vahab Mirrokni, Balasubramanian Sivan
We study a family of first-order methods with momentum based on mirror descent for online convex optimization, which we dub online mirror descent with momentum (OMDM).
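As a toy illustration of momentum added to mirror descent, the sketch below uses the Euclidean special case (mirror map $\|x\|^2/2$), where the update reduces to online gradient descent with heavy-ball momentum; the exact update and the coefficient `beta` are illustrative assumptions, not the paper's OMDM schedule.

```python
import numpy as np

def omd_momentum(grad_fn, x0, steps, eta=0.1, beta=0.9):
    """Online gradient descent with heavy-ball momentum: the Euclidean
    special case of mirror descent with momentum.  `grad_fn(x, t)` returns
    the gradient of the loss revealed at round t."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(steps):
        g = grad_fn(x, t)   # gradient of the round-t loss
        m = beta * m + g    # accumulate momentum
        x = x - eta * m     # Euclidean mirror-descent step
    return x

# Fixed quadratic loss f(x) = ||x - 1||^2 / 2 at every round.
x_final = omd_momentum(lambda x, t: x - 1.0, x0=[0.0, 0.0], steps=200)
```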
no code implementations • NeurIPS 2021 • Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sebastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, Guido Imbens
We investigate the optimal design of experimental studies that have pre-treatment outcome data available.
no code implementations • 22 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
In this work, we study high-dimensional mean estimation under user-level differential privacy, and attempt to design an $(\epsilon,\delta)$-differentially private mechanism using as few users as possible.
no code implementations • 5 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Umar Syed, Sergei Vassilvitskii
We present new mechanisms for \emph{label differential privacy}, a relaxation of differentially private machine learning that only protects the privacy of the labels in the training set.
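A classical baseline for label differential privacy, against which new mechanisms are typically compared, is randomized response applied to the label alone; the sketch below is that baseline, not the paper's mechanisms.

```python
import math, random

def label_rr(label, num_classes, epsilon, rng=random):
    """Randomized response on the label only: keep the true label with
    probability e^eps / (e^eps + k - 1), otherwise output one of the
    other k - 1 labels uniformly at random."""
    k = num_classes
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return label
    other = [c for c in range(k) if c != label]
    return rng.choice(other)

# The ratio of output probabilities between any two input labels is exactly
# e^eps, which is the epsilon-DP guarantee for the label.
k, eps = 10, 1.0
p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
p_other = (1 - p_keep) / (k - 1)
```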
1 code implementation • 27 Jul 2021 • Jessica Shi, Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni
Our empirical evaluation shows that this framework improves the state-of-the-art trade-offs between speed and quality of scalable community detection.
no code implementations • 1 Jul 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan
Next, we study the $k$-means problem in this context and provide an $O(k \log k)$-approximation algorithm for explainable $k$-means, improving over the $O(k^2)$ bound of Dasgupta et al. and the $O(d k \log k)$ bound of \cite{laber2021explainable}.
no code implementations • 10 Jun 2021 • Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni, Jessica Shi
For this variant, this is the first exact algorithm that runs in subquadratic time, as long as $m=n^{2-\epsilon}$ for some constant $\epsilon > 0$.
no code implementations • NeurIPS 2021 • Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan
How can we make use of information parallelism in online decision making problems while efficiently balancing the exploration-exploitation trade-off?
no code implementations • 25 Feb 2021 • Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab Mirrokni, Dongruo Zhou
In many sequential decision-making problems, the individuals are split into several batches and the decision-maker is only allowed to change her policy at the end of batches.
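The batched constraint described above can be illustrated with a minimal two-batch explore-then-commit policy (not the paper's algorithm): the policy is fixed within each batch and updated only at the batch boundary.

```python
import random

def batched_etc(arm_means, pulls_per_arm, horizon, rng):
    """Two-batch explore-then-commit for stochastic bandits with Bernoulli
    rewards: batch 1 pulls every arm equally, then the policy is changed
    once and batch 2 commits to the empirical best arm."""
    k = len(arm_means)
    # Batch 1: uniform exploration (policy fixed within the batch).
    rewards = [0.0] * k
    for a in range(k):
        for _ in range(pulls_per_arm):
            rewards[a] += rng.random() < arm_means[a]  # Bernoulli reward
    best = max(range(k), key=lambda a: rewards[a])
    # Batch 2: commit to the empirical best arm for the rest of the horizon.
    remaining = horizon - k * pulls_per_arm
    total = sum(rewards) + sum(rng.random() < arm_means[best]
                               for _ in range(remaining))
    return best, total

best_arm, _ = batched_etc([0.9, 0.1], pulls_per_arm=100, horizon=1000,
                          rng=random.Random(0))
```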
no code implementations • NeurIPS 2020 • Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren
At the same time, more noise might need to be added to keep the algorithm differentially private, which can hurt its performance.
no code implementations • NeurIPS 2020 • Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis
A soft-max function has two main efficiency measures: (1) approximation, which measures how well it approximates the maximum function, and (2) smoothness, which measures how sensitive it is to changes in its input.
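The trade-off between these two measures is visible already in the standard exponential soft-max (log-sum-exp), sketched below as an illustration; this is the classical function, not the paper's new constructions. Larger $\lambda$ improves approximation (the gap to the max is at most $\log(n)/\lambda$) at the cost of smoothness.

```python
import math

def lse_softmax(x, lam):
    """Exponential soft-max: smax_lam(x) = (1/lam) * log(sum_i exp(lam*x_i)).
    Satisfies max(x) <= smax_lam(x) <= max(x) + log(n)/lam."""
    m = max(x)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(lam * (xi - m)) for xi in x)) / lam

x, lam = [0.3, -1.2, 0.9, 0.1], 5.0
n = len(x)
approx_gap = lse_softmax(x, lam) - max(x)  # approximation error
```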
1 code implementation • NeurIPS 2020 • Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni
Moreover, we show that this MIP formulation is ideal (i.e., the strongest possible formulation) for the revenue function of a single impression.
no code implementations • 18 Nov 2020 • Santiago Balseiro, Haihao Lu, Vahab Mirrokni
In this paper, we consider a data-driven setting in which the reward and resource consumption of each request are generated using an input model that is unknown to the decision maker.
no code implementations • 20 Oct 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
Unlike nonconvex optimization, where gradient descent is guaranteed to converge to a local optimizer, algorithms for nonconvex-nonconcave minimax optimization can have topologically different solution paths: sometimes converging to a solution, sometimes never converging and instead following a limit cycle, and sometimes diverging.
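The non-converging solution paths mentioned above appear already in the simplest smooth game: simultaneous gradient descent-ascent on the bilinear objective $f(x, y) = xy$ spirals outward forever instead of reaching the equilibrium $(0, 0)$. A minimal illustration:

```python
def simultaneous_gda(x, y, eta, steps):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y
    (min over x, max over y).  The unique equilibrium is (0, 0), yet each
    step multiplies the distance to it by sqrt(1 + eta^2), so the iterates
    spiral outward: a classic non-converging solution path."""
    for _ in range(steps):
        gx, gy = y, x                         # grad_x f = y, grad_y f = x
        x, y = x - eta * gx, y + eta * gy     # simultaneous update
    return x, y

x0, y0 = 1.0, 1.0
xT, yT = simultaneous_gda(x0, y0, eta=0.1, steps=100)
```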
no code implementations • 1 Jul 2020 • Santiago Balseiro, Haihao Lu, Vahab Mirrokni
In this paper, we introduce the \emph{regularized online allocation problem}, a variant that includes a non-linear regularizer acting on the total resource consumption.
no code implementations • 15 Jun 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni
Critically, we show this envelope not only smooths the objective but can convexify and concavify it based on the level of interaction present between the minimizing and maximizing variables.
no code implementations • ICML 2020 • Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme
We study "adversarial scaling", a multi-armed bandit model where rewards have a stochastic and an adversarial component.
no code implementations • NeurIPS 2019 • Negin Golrezaei, Adel Javanmard, Vahab Mirrokni
Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions.
no code implementations • ICML 2020 • Haihao Lu, Santiago Balseiro, Vahab Mirrokni
The revenue function and resource consumption of each request are drawn independently and at random from a probability distribution that is unknown to the decision maker.
Optimization and Control
no code implementations • NeurIPS 2019 • Yuan Deng, Sébastien Lahaie, Vahab Mirrokni
Dynamic mechanisms offer powerful techniques to improve on both revenue and efficiency by linking sequential auctions using state information, but these techniques rely on exact distributional information of the buyers’ valuations (present and future), which limits their use in learning settings.
no code implementations • NeurIPS 2019 • Jean Pouget-Abadie, Kevin Aydin, Warren Schudy, Kay Brodersen, Vahab Mirrokni
This paper introduces a novel clustering objective and a corresponding algorithm that partitions a bipartite graph so as to maximize the statistical power of a bipartite experiment on that graph.
no code implementations • 9 Nov 2019 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni
We propose an efficient semi-adaptive policy that with $O(\log n \times \log k)$ adaptive rounds of observations can achieve an almost tight $1-1/e-\epsilon$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential manner.
no code implementations • 11 Oct 2019 • Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, Vahab Mirrokni
We present simple and efficient algorithms for the batched stochastic multi-armed bandit and batched stochastic linear bandit problems.
1 code implementation • 20 Mar 2019 • Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva, Vahab Mirrokni
This is the first GBM-type algorithm with a theoretically justified accelerated convergence rate.
1 code implementation • 4 Oct 2018 • Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu, Vahab Mirrokni
$\mathcal{C} \subset \mathbb{R}^{p}$ is a closed convex set.
no code implementations • NeurIPS 2019 • Santiago Balseiro, Negin Golrezaei, Mohammad Mahdian, Vahab Mirrokni, Jon Schneider
We consider the variant of this problem where in addition to receiving the reward $r_{i, t}(c)$, the learner also learns the values of $r_{i, t}(c')$ for some other contexts $c'$ in set $\mathcal{O}_i(c)$; i.e., the rewards that would have been achieved by performing that action under different contexts $c'\in \mathcal{O}_i(c)$.
no code implementations • ICML 2018 • Hossein Esfandiari, Silvio Lattanzi, Vahab Mirrokni
The $k$-core decomposition is a fundamental primitive in many machine learning and data mining applications.
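The standard way to compute this primitive is the peeling algorithm: repeatedly remove a vertex of minimum remaining degree, recording the largest minimum degree seen so far as each vertex's core number. A small sketch (a simple quadratic implementation; linear-time variants use bucketed degrees):

```python
from collections import defaultdict

def core_numbers(edges):
    """Core number of every vertex via iterative peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    core, current = {}, 0
    while deg:
        v = min(deg, key=deg.get)        # vertex of minimum remaining degree
        current = max(current, deg[v])   # core numbers grow along the peel order
        core[v] = current
        for u in adj[v]:
            if u in deg:
                deg[u] -= 1
        del deg[v]
    return core

# Triangle {a, b, c} (a 2-core) with a pendant vertex d attached to a.
cores = core_numbers([("a", "b"), ("b", "c"), ("a", "c"), ("a", "d")])
```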
2 code implementations • ICML 2018 • Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni
Consider the following class of learning schemes: $$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}),\qquad\qquad (1) $$ where $\boldsymbol{x}_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$ denote the $i^{\text{th}}$ feature and response variable respectively.
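One concrete member of this class is ridge regression: take $\ell(z; y) = (z - y)^2/2$ and $R(\boldsymbol{\beta}) = \|\boldsymbol{\beta}\|^2/2$ in (1), and the minimizer solves the normal equations $(X^\top X + \lambda I)\boldsymbol{\beta} = X^\top y$. A short sketch:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Minimizer of sum_j (x_j^T beta - y_j)^2 / 2 + lam * ||beta||^2 / 2,
    i.e., scheme (1) with squared loss and a ridge regularizer."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                      # noiseless responses for the demo
beta_hat = ridge_fit(X, y, lam=1e-6)   # tiny lam recovers beta_true
```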
no code implementations • ICML 2018 • Shipra Agrawal, Morteza Zadimoghaddam, Vahab Mirrokni
Inspired by many applications of bipartite matching in online advertising and machine learning, we study a simple and natural iterative proportional allocation algorithm: Maintain a priority score $p_a$ for each node $a \in A$ on one side of the bipartition, initialized as $p_a = 1$.
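One plausible reading of such an iterative proportional allocation scheme is sketched below; the specific rescaling rule (priorities multiplied by capacity over load) is an illustrative assumption, not necessarily the paper's update.

```python
def proportional_allocation(neighbors, capacity, rounds):
    """Each item b is split among its neighboring A-side nodes in proportion
    to their priorities; priorities are then rescaled by capacity / load so
    overloaded nodes shrink.  `neighbors[b]` lists the A-side candidates."""
    priority = {a: 1.0 for a in capacity}
    for _ in range(rounds):
        load = {a: 0.0 for a in capacity}
        for b, cands in neighbors.items():
            total = sum(priority[a] for a in cands)
            for a in cands:
                load[a] += priority[a] / total   # b's fractional share sent to a
        for a in capacity:
            if load[a] > 0:
                priority[a] *= capacity[a] / load[a]
    return priority

# Two items both adjacent to servers s1 and s2 with equal capacities:
# by symmetry the priorities stay balanced.
prio = proportional_allocation({"b1": ["s1", "s2"], "b2": ["s1", "s2"]},
                               {"s1": 1.0, "s2": 1.0}, rounds=5)
```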
no code implementations • ICML 2018 • Haihao Lu, Robert Freund, Vahab Mirrokni
On the empirical side, both AGCD and ASCD outperform Accelerated Randomized Coordinate Descent on most instances in our numerical experiments; notably, AGCD significantly outperforms the other two methods, in spite of a lack of theoretical guarantees for this method.
no code implementations • 25 Mar 2018 • Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme
We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam.
no code implementations • NeurIPS 2017 • Santiago Balseiro, Max Lin, Vahab Mirrokni, Renato Paes Leme, Song Zuo
In this paper, we characterize the optimal revenue sharing scheme that satisfies both constraints in expectation.
1 code implementation • NeurIPS 2017 • Mohammadhossein Bateni, Soheil Behnezhad, Mahsa Derakhshan, Mohammadtaghi Hajiaghayi, Raimondas Kiveris, Silvio Lattanzi, Vahab Mirrokni
In particular, we propose affinity, a novel hierarchical clustering based on Boruvka's MST algorithm.
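The building block of this style of hierarchical clustering is a Boruvka round: every node picks its minimum-weight incident edge, and the connected components of the picked edges become the clusters merged in that round. A single round, sketched:

```python
def boruvka_round(n, edges):
    """One round of Boruvka's MST algorithm over nodes 0..n-1.
    `edges` is a list of (u, v, weight).  Returns a component label
    per node after unioning each node's minimum-weight incident edge."""
    best = [None] * n
    for u, v, w in edges:
        for a, b in ((u, v), (v, u)):
            if best[a] is None or w < best[a][0]:
                best[a] = (w, a, b)
    # Union the picked edges with a small union-find to get the clusters.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for pick in best:
        if pick is not None:
            _, a, b = pick
            parent[find(a)] = find(b)
    return [find(v) for v in range(n)]

# Two tight pairs {0,1} and {2,3} joined by a heavy inter-group edge:
# one Boruvka round merges each pair but keeps the groups apart.
labels = boruvka_round(4, [(0, 1, 1.0), (2, 3, 1.0), (1, 2, 10.0)])
```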
no code implementations • 24 May 2017 • Nicholas Harvey, Vahab Mirrokni, David Karger, Virginia Savova, Leonid Peshkin
This paper formulates a novel problem on graphs: find the minimal subset of edges in a fully connected graph, such that the resulting graph contains all spanning trees for a set of specified sub-graphs.
no code implementations • NeurIPS 2016 • Hossein Esfandiari, Nitish Korula, Vahab Mirrokni
In particular, in online advertising it is fairly common to optimize multiple metrics, such as clicks, conversions, and impressions, as well as other metrics which may be largely uncorrelated such as ‘share of voice’, and ‘buyer surplus’.
no code implementations • NeurIPS 2016 • Aditya Bhaskara, Mehrdad Ghadiri, Vahab Mirrokni, Ola Svensson
We first study the approximation quality of the algorithm by comparing with the LP objective.
1 code implementation • 3 Aug 2016 • Vahab Mirrokni, Mikkel Thorup, Morteza Zadimoghaddam
Designing algorithms for balanced allocation of clients to servers in dynamic settings is a challenging problem for a variety of reasons.
Data Structures and Algorithms
no code implementations • ICML 2017 • Vahab Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong
We give a deterministic nearly-linear time algorithm for approximating any point inside a convex polytope with a sparse convex combination of the polytope's vertices.
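Sparse convex combinations of this kind (approximate Caratheodory) can be produced by Frank-Wolfe on the squared distance to the target point: each iteration adds at most one vertex, so after $t$ iterations the iterate uses at most $t+1$ vertices. The sketch below illustrates that idea on an explicit vertex list; it is not the paper's nearly-linear time algorithm.

```python
import numpy as np

def sparse_approximation(vertices, point, iters):
    """Frank-Wolfe on g(x) = ||x - point||^2 over the convex hull of
    `vertices`.  The linear minimization step picks a single vertex, so
    the iterate stays a sparse convex combination of vertices."""
    V = np.asarray(vertices, dtype=float)
    x = V[0].copy()                    # start at an arbitrary vertex
    for t in range(1, iters + 1):
        grad = 2.0 * (x - point)
        s = V[np.argmin(V @ grad)]     # vertex minimizing the linearization
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# Approximate the center of the unit square using its four corners.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
target = np.array([0.5, 0.5])
x_hat = sparse_approximation(square, target, iters=50)
```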
no code implementations • NeurIPS 2014 • Mohammadhossein Bateni, Aditya Bhaskara, Silvio Lattanzi, Vahab Mirrokni
Large-scale clustering of data points in metric spaces is an important problem in mining big data sets.
no code implementations • 30 Apr 2013 • Zeyuan Allen Zhu, Silvio Lattanzi, Vahab Mirrokni
We also prove that our analysis is tight, and perform empirical evaluation to support our theory on both synthetic and real data.