
no code implementations • ICML 2020 • Yuan Deng, Sébastien Lahaie, Vahab Mirrokni

Motivated by the repeated sale of online ads via auctions, optimal pricing in repeated auctions has attracted a large body of research.

no code implementations • 7 Aug 2023 • Laxman Dhulipala, Jason Lee, Jakub Łącki, Vahab Mirrokni

Our algorithm is based on a new approach to computing $(1+\epsilon)$-approximate HAC, which is a novel combination of the nearest-neighbor chain algorithm and the notion of $(1+\epsilon)$-approximate HAC.

no code implementations • 2 Aug 2023 • Adel Javanmard, Vahab Mirrokni, Jean Pouget-Abadie

Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their potentially sensitive responses.

no code implementations • 1 Jun 2023 • Evan Munro, David Jones, Jennifer Brennan, Roland Nelet, Vahab Mirrokni, Jean Pouget-Abadie

In online platforms, the impact of a treatment on an observed outcome may change over time as 1) users learn about the intervention, and 2) system personalization, such as individualized recommendations, changes over time.

no code implementations • 25 May 2023 • Yangsibo Huang, Haotian Jiang, Daogao Liu, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni

In this paper, we study the setting in which data owners train machine learning models collaboratively under a privacy notion called joint differential privacy [Kearns et al., 2018].

no code implementations • 16 May 2023 • Lin Chen, Gang Fu, Amin Karbasi, Vahab Mirrokni

Our method is based on the observation that the sum of the gradients of the loss function on individual data examples in a curated bag can be computed from the aggregate label without the need for individual labels.

no code implementations • 23 Apr 2023 • Vasileios Charisopoulos, Hossein Esfandiari, Vahab Mirrokni

In this paper, we study the stochastic linear bandit problem under the additional requirements of differential privacy, robustness and batched observations.

2 code implementations • 12 Apr 2023 • CJ Carey, Travis Dick, Alessandro Epasto, Adel Javanmard, Josh Karlin, Shankar Kumar, Andres Munoz Medina, Vahab Mirrokni, Gabriel Henrique Nunes, Sergei Vassilvitskii, Peilin Zhong

In this work, we present a new theoretical framework to measure re-identification risk in such user representations.

1 code implementation • 27 Mar 2023 • Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah

We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution.

no code implementations • 20 Feb 2023 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni, Grigoris Velegkas, Felix Zhou

We design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability from Impagliazzo et al. [2022].

no code implementations • 8 Feb 2023 • Mehrdad Ghadiri, Matthew Fahrbach, Gang Fu, Vahab Mirrokni

This work studies the combinatorial optimization problem of finding an optimal core tensor shape, also called multilinear rank, for a size-constrained Tucker decomposition.

no code implementations • 3 Feb 2023 • Santiago Balseiro, Rachitesh Kumar, Vahab Mirrokni, Balasubramanian Sivan, Di Wang

Given the inherent non-stationarity in an advertiser's value and also competing advertisers' values over time, a commonly used approach is to learn a target expenditure plan that specifies a target spend as a function of time, and then run a controller that tracks this plan.

no code implementations • 3 Feb 2023 • Yuan Deng, Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang, Vahab Mirrokni

In light of this finding, under a bandit feedback setting that mimics real-world scenarios where advertisers have limited information on ad auctions in each channel and on how channels procure ads, we present an efficient learning algorithm that produces per-channel budgets whose resulting conversion approximates that of the global optimum.

no code implementations • 31 Jan 2023 • Jacob Imola, Alessandro Epasto, Mohammad Mahdian, Vincent Cohen-Addad, Vahab Mirrokni

Then, we exhibit a polynomial-time approximation algorithm with $O(|V|^{2.5}/\epsilon)$-additive error, and an exponential-time algorithm that meets the lower bound.

no code implementations • 29 Dec 2022 • Jakub Łącki, Vahab Mirrokni, Christian Sohler

We study the problem of graph clustering under a broad class of objectives in which the quality of a cluster is defined based on the ratio between the number of edges in the cluster, and the total weight of vertices in the cluster.

no code implementations • 5 Dec 2022 • CJ Carey, Jonathan Halcrow, Rajesh Jayaram, Vahab Mirrokni, Warren Schudy, Peilin Zhong

We evaluate the performance of Stars for clustering and graph learning, and demonstrate 10- to 1000-fold improvements in pairwise similarity comparisons compared to different baselines, and a 2- to 10-fold improvement in running time without quality loss.

no code implementations • 21 Oct 2022 • Hossein Esfandiari, Vahab Mirrokni, Jon Schneider

In this work, we present and study a new framework for online learning in systems with multiple users that provide user anonymity.

no code implementations • 4 Oct 2022 • Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas

Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.

1 code implementation • 29 Sep 2022 • Taisuke Yasuda, Mohammadhossein Bateni, Lin Chen, Matthew Fahrbach, Gang Fu, Vahab Mirrokni

Feature selection is the problem of selecting a subset of features for a machine learning model that maximizes model quality subject to a budget constraint.

no code implementations • 14 Jul 2022 • Alessandro Epasto, Vahab Mirrokni, Bryan Perozzi, Anton Tsitsulin, Peilin Zhong

Personalized PageRank (PPR) is a fundamental tool in unsupervised learning of graph representations such as node ranking, labeling, and graph embedding.
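As background, PPR with restart probability $\alpha$ can be computed by power iteration. A dense-matrix sketch for small undirected graphs without isolated vertices (the parameters here are illustrative, not from the paper):

```python
import numpy as np

def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power iteration for Personalized PageRank.

    adj:   dense adjacency matrix (n x n numpy array), no isolated vertices
    seed:  index of the seed node defining the personalization vector
    alpha: restart (teleport) probability
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # Column-stochastic transition matrix: P[j, i] = prob of moving i -> j.
    P = (adj / deg[:, None]).T
    e = np.zeros(n)
    e[seed] = 1.0
    ppr = e.copy()
    for _ in range(iters):
        ppr = alpha * e + (1 - alpha) * P @ ppr
    return ppr
```

Since the transition matrix is column-stochastic, the iterate remains a probability distribution, with mass concentrated near the seed.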

no code implementations • 13 Jul 2022 • Hossein Esfandiari, Alessandro Epasto, Vahab Mirrokni, Andres Munoz Medina, Sergei Vassilvitskii

When working with user data, providing well-defined privacy guarantees is paramount.

1 code implementation • 7 Jul 2022 • Oleksandr Ferludin, Arno Eigenwillig, Martin Blais, Dustin Zelle, Jan Pfeifer, Alvaro Sanchez-Gonzalez, Wai Lok Sibon Li, Sami Abu-El-Haija, Peter Battaglia, Neslihan Bulut, Jonathan Halcrow, Filipe Miguel Gonçalves de Almeida, Pedro Gonnet, Liangze Jiang, Parth Kothari, Silvio Lattanzi, André Linhares, Brandon Mayer, Vahab Mirrokni, John Palowitch, Mihir Paradkar, Jennifer She, Anton Tsitsulin, Kevin Villela, Lisa Wang, David Wong, Bryan Perozzi

TensorFlow-GNN (TF-GNN) is a scalable library for Graph Neural Networks in TensorFlow.

1 code implementation • 17 Jun 2022 • Vincent Cohen-Addad, Alessandro Epasto, Silvio Lattanzi, Vahab Mirrokni, Andres Munoz, David Saulpic, Chris Schwiegelshohn, Sergei Vassilvitskii

We study the private $k$-median and $k$-means clustering problem in $d$ dimensional Euclidean space.

1 code implementation • 20 May 2022 • Mehran Kazemi, Anton Tsitsulin, Hossein Esfandiari, Mohammadhossein Bateni, Deepak Ramachandran, Bryan Perozzi, Vahab Mirrokni

Representative Selection (RS) is the problem of finding a small subset of exemplars from a dataset that is representative of the dataset.
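A common baseline for this kind of exemplar selection is greedy farthest-point (k-center) selection: repeatedly pick the point farthest from the current exemplar set. This is a sketch of a standard baseline, not the paper's method:

```python
import numpy as np

def greedy_k_center(X, k):
    """Greedy farthest-point selection of k exemplars from the rows of X."""
    selected = [0]  # start from an arbitrary point
    # dist[i] = distance from point i to its nearest selected exemplar
    dist = np.linalg.norm(X - X[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected
```

On well-separated clusters this picks one exemplar per cluster before refining within clusters.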

no code implementations • 11 Apr 2022 • Vincent Cohen-Addad, Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

Motivated by data analysis and machine learning applications, we consider the popular high-dimensional Euclidean $k$-median and $k$-means problems.

no code implementations • 12 Feb 2022 • Santiago R. Balseiro, Haihao Lu, Vahab Mirrokni, Balasubramanian Sivan

This paper provides the first regret bounds on the performance of dual-based PI controllers for online allocation problems.

no code implementations • NeurIPS 2021 • Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sebastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, Guido Imbens

We investigate the optimal design of experimental studies that have pre-treatment outcome data available.

no code implementations • 22 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

In particular, we provide a nearly optimal trade-off between the number of users and the number of samples per user required for private mean estimation, even when the number of users is as low as $O(\frac{1}{\varepsilon}\log\frac{1}{\delta})$.
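As a point of reference for the basic single-sample-per-user (item-level) case, the Laplace mechanism privately estimates a bounded mean. This is an illustrative sketch, not the user-level estimator studied in the paper; the fixed seed exists only to make the sketch reproducible:

```python
import numpy as np

def private_mean(samples, epsilon, lo=0.0, hi=1.0, seed=0):
    """epsilon-DP mean of values clipped to [lo, hi] via the Laplace mechanism.

    The sensitivity of the mean of n clipped values is (hi - lo) / n, so
    Laplace noise with scale sensitivity / epsilon yields epsilon-DP.
    """
    x = np.clip(np.asarray(samples, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)
    rng = np.random.default_rng(seed)
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)
```

With many samples the noise scale shrinks as 1/n, so the private estimate concentrates around the true mean.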

no code implementations • 5 Oct 2021 • Hossein Esfandiari, Vahab Mirrokni, Umar Syed, Sergei Vassilvitskii

We present new mechanisms for \emph{label differential privacy}, a relaxation of differentially private machine learning that only protects the privacy of the labels in the training set.
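A minimal mechanism satisfying label differential privacy is randomized response applied to the labels alone, leaving features untouched. This is a sketch for illustration, not one of the paper's mechanisms; the seed is only for reproducibility:

```python
import numpy as np

def randomized_response_labels(labels, epsilon, num_classes, seed=0):
    """epsilon-label-DP release of class labels via randomized response.

    Each label is kept with probability e^eps / (e^eps + K - 1) and is
    otherwise replaced by a uniform draw over the other K - 1 classes.
    """
    labels = np.asarray(labels)
    k = num_classes
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    rng = np.random.default_rng(seed)
    keep = rng.random(labels.shape) < p_keep
    # Uniform over the k - 1 classes other than the true label.
    offsets = rng.integers(1, k, size=labels.shape)
    return np.where(keep, labels, (labels + offsets) % k)
```

For large epsilon almost all labels survive unchanged; for small epsilon the released labels approach uniform noise.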

1 code implementation • 27 Jul 2021 • Jessica Shi, Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni

Our empirical evaluation shows that this framework improves the state-of-the-art trade-offs between speed and quality of scalable community detection.

no code implementations • 1 Jul 2021 • Hossein Esfandiari, Vahab Mirrokni, Shyam Narayanan

Next, we study the $k$-means problem in this context and provide an $O(k \log k)$-approximation algorithm for explainable $k$-means, improving over the $O(k^2)$ bound of Dasgupta et al. and the $O(d k \log k)$ bound of Laber and Murtinho [2021].

no code implementations • 10 Jun 2021 • Laxman Dhulipala, David Eisenstat, Jakub Łącki, Vahab Mirrokni, Jessica Shi

For this variant, this is the first exact algorithm that runs in subquadratic time, as long as $m=n^{2-\epsilon}$ for some constant $\epsilon > 0$.

no code implementations • NeurIPS 2021 • Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan

How can we make use of information parallelism in online decision making problems while efficiently balancing the exploration-exploitation trade-off?

no code implementations • 25 Feb 2021 • Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab Mirrokni, Dongruo Zhou

In many sequential decision-making problems, the individuals are split into several batches and the decision-maker is only allowed to change her policy at the end of batches.

1 code implementation • NeurIPS 2020 • Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni

Moreover, we show that this MIP formulation is ideal (i.e., the strongest possible formulation) for the revenue function of a single impression.

no code implementations • NeurIPS 2020 • Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren

But at the same time, more noise might need to be added to the algorithm in order to keep the algorithm differentially private and this might hurt the algorithm’s performance.

no code implementations • NeurIPS 2020 • Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis

A soft-max function has two main efficiency measures: (1) approximation, which corresponds to how well it approximates the maximum function, and (2) smoothness, which shows how sensitive it is to changes in its input.
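The canonical example of this trade-off is the log-sum-exp soft-max: it overestimates $\max_i x_i$ by at most $\log(n)/\beta$, while larger $\beta$ makes the function less smooth. The particular function and parameter below are illustrative; the paper studies the trade-off in general:

```python
import numpy as np

def soft_max(x, beta=10.0):
    """Log-sum-exp soft-max: (1/beta) * log(sum_i exp(beta * x_i)).

    Satisfies max(x) <= soft_max(x) <= max(x) + log(n) / beta.
    """
    x = np.asarray(x, dtype=float)
    m = x.max()  # subtract the max for numerical stability
    return m + np.log(np.exp(beta * (x - m)).sum()) / beta
```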

no code implementations • 18 Nov 2020 • Santiago Balseiro, Haihao Lu, Vahab Mirrokni

In this paper, we consider a data-driven setting in which the reward and resource consumption of each request are generated using an input model that is unknown to the decision maker.

no code implementations • 22 Oct 2020 • Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Manolis Zampetakis

A soft-max function has two main efficiency measures: (1) approximation, which corresponds to how well it approximates the maximum function, and (2) smoothness, which shows how sensitive it is to changes in its input.

no code implementations • 20 Oct 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni

Unlike nonconvex optimization, where gradient descent is guaranteed to converge to a local optimizer, algorithms for nonconvex-nonconcave minimax optimization can have topologically different solution paths: sometimes converging to a solution, sometimes never converging and instead following a limit cycle, and sometimes diverging.
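Even the bilinear objective $f(x, y) = xy$ exhibits the divergent behavior described here: simultaneous gradient descent-ascent spirals outward, with the iterate norm growing by a factor $\sqrt{1+\eta^2}$ per step. A minimal sketch (step size and iteration count are arbitrary choices for illustration):

```python
def simultaneous_gda(x, y, eta=0.1, steps=100):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.

    min over x, max over y:
        x <- x - eta * df/dx = x - eta * y
        y <- y + eta * df/dy = y + eta * x
    Each step multiplies the norm of (x, y) by sqrt(1 + eta^2) > 1,
    so the trajectory spirals away from the equilibrium at the origin.
    """
    traj = [(x, y)]
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x  # simultaneous update
        traj.append((x, y))
    return traj
```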

no code implementations • 1 Jul 2020 • Santiago Balseiro, Haihao Lu, Vahab Mirrokni

In this paper, we introduce the \emph{regularized online allocation problem}, a variant that includes a non-linear regularizer acting on the total resource consumption.

no code implementations • 15 Jun 2020 • Benjamin Grimmer, Haihao Lu, Pratik Worah, Vahab Mirrokni

Critically, we show this envelope not only smooths the objective but can convexify and concavify it based on the level of interaction present between the minimizing and maximizing variables.

no code implementations • ICML 2020 • Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme

We study "adversarial scaling", a multi-armed bandit model where rewards have a stochastic and an adversarial component.

no code implementations • NeurIPS 2019 • Negin Golrezaei, Adel Javanmard, Vahab Mirrokni

Motivated by pricing in ad exchange markets, we consider the problem of robust learning of reserve prices against strategic buyers in repeated contextual second-price auctions.

no code implementations • ICML 2020 • Haihao Lu, Santiago Balseiro, Vahab Mirrokni

The revenue function and resource consumption of each request are drawn independently and at random from a probability distribution that is unknown to the decision maker.

Optimization and Control

1 code implementation • 20 Feb 2020 • Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni

Moreover, we show that this MIP formulation is ideal (i. e. the strongest possible formulation) for the revenue function of a single impression.

no code implementations • NeurIPS 2019 • Yuan Deng, Sébastien Lahaie, Vahab Mirrokni

Dynamic mechanisms offer powerful techniques to improve on both revenue and efficiency by linking sequential auctions using state information, but these techniques rely on exact distributional information of the buyers’ valuations (present and future), which limits their use in learning settings.

no code implementations • NeurIPS 2019 • Jean Pouget-Abadie, Kevin Aydin, Warren Schudy, Kay Brodersen, Vahab Mirrokni

This paper introduces a novel clustering objective and a corresponding algorithm that partitions a bipartite graph so as to maximize the statistical power of a bipartite experiment on that graph.

no code implementations • 9 Nov 2019 • Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni

We propose an efficient semi-adaptive policy that, with $O(\log n \times \log k)$ adaptive rounds of observations, can achieve an almost tight $1-1/e-\epsilon$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential manner.

no code implementations • 11 Oct 2019 • Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, Vahab Mirrokni

We present simple and efficient algorithms for the batched stochastic multi-armed bandit and batched stochastic linear bandit problems.

1 code implementation • 20 Mar 2019 • Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva, Vahab Mirrokni

This is the first GBM-type algorithm with a theoretically justified accelerated convergence rate.

1 code implementation • 4 Oct 2018 • Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu, Vahab Mirrokni

$\mathcal{C} \subset \mathbb{R}^{p}$ is a closed convex set.

no code implementations • NeurIPS 2019 • Santiago Balseiro, Negin Golrezaei, Mohammad Mahdian, Vahab Mirrokni, Jon Schneider

We consider the variant of this problem where, in addition to receiving the reward $r_{i,t}(c)$, the learner also learns the values of $r_{i,t}(c')$ for some other contexts $c'$ in a set $\mathcal{O}_i(c)$; i.e., the rewards that would have been achieved by performing that action under different contexts $c' \in \mathcal{O}_i(c)$.

no code implementations • ICML 2018 • Hossein Esfandiari, Silvio Lattanzi, Vahab Mirrokni

The $k$-core decomposition is a fundamental primitive in many machine learning and data mining applications.
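The standard peeling algorithm for computing core numbers is a useful reference point here: repeatedly remove a minimum-degree vertex, recording its degree at removal time. This simple quadratic-time sketch is the textbook sequential method, not the paper's algorithm:

```python
from collections import defaultdict

def core_decomposition(edges):
    """Core number of every vertex of an undirected graph via peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in adj}
    core = {}
    k = 0
    remaining = set(adj)
    while remaining:
        v = min(remaining, key=lambda u: deg[u])  # peel min-degree vertex
        k = max(k, deg[v])  # core number is a running maximum of peel degrees
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1
    return core
```

With a bucket queue instead of the linear `min` scan, this runs in $O(m + n)$ time.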

2 code implementations • ICML 2018 • Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

Consider the following class of learning schemes: $$\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}}\;\sum_{j=1}^n \ell(\boldsymbol{x}_j^\top\boldsymbol{\beta}; y_j) + \lambda R(\boldsymbol{\beta}),\qquad\qquad (1) $$ where $\boldsymbol{x}_j \in \mathbb{R}^p$ and $y_j \in \mathbb{R}$ denote the $j^{\text{th}}$ feature and response variable respectively.
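For a concrete instance of scheme (1), take squared loss $\ell(z; y) = (z - y)^2$ and ridge penalty $R(\boldsymbol{\beta}) = \|\boldsymbol{\beta}\|_2^2$, which admits the closed-form minimizer $\hat{\boldsymbol{\beta}} = (X^\top X + \lambda I)^{-1} X^\top y$. A minimal sketch:

```python
import numpy as np

def ridge(X, y, lam):
    """Solve scheme (1) with squared loss and R(beta) = ||beta||_2^2.

    Setting the gradient to zero gives (X^T X + lam * I) beta = X^T y.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

As `lam` shrinks toward zero on a well-conditioned design, the solution approaches ordinary least squares.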

no code implementations • ICML 2018 • Shipra Agrawal, Morteza Zadimoghaddam, Vahab Mirrokni

Inspired by many applications of bipartite matching in online advertising and machine learning, we study a simple and natural iterative proportional allocation algorithm: maintain a priority score $p_a$ for each node $a \in A$ on one side of the bipartition, initialized as $p_a = 1$.
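The iteration described above can be sketched in a few lines. This is an illustrative Sinkhorn-style reading of proportional allocation (one unit of demand per node on the other side, priorities rescaled by capacity over incurred load), not necessarily the paper's exact update rule:

```python
def proportional_allocation(adj, capacity, rounds=50):
    """Iterative proportional allocation on a bipartite graph.

    adj[b] lists the neighbors (nodes a) of demand node b; each b has one
    unit of demand, split proportionally to its neighbors' priorities.
    After each round, every a rescales its priority by capacity / load.
    Returns the final priorities and the loads of the last round.
    """
    priority = {a: 1.0 for b in adj for a in adj[b]}
    load = {}
    for _ in range(rounds):
        load = {a: 0.0 for a in priority}
        for b, nbrs in adj.items():
            total = sum(priority[a] for a in nbrs)
            for a in nbrs:
                load[a] += priority[a] / total  # proportional split of 1 unit
        for a in priority:
            if load[a] > 0:
                priority[a] *= capacity[a] / load[a]
    return priority, load
```

On feasible instances the loads drift toward the capacities, as in Sinkhorn matrix scaling.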

no code implementations • ICML 2018 • Haihao Lu, Robert Freund, Vahab Mirrokni

On the empirical side, while both AGCD and ASCD outperform Accelerated Randomized Coordinate Descent on most instances in our numerical experiments, we note that AGCD significantly outperforms the other two methods in our experiments, in spite of a lack of theoretical guarantees for this method.

no code implementations • 25 Mar 2018 • Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme

We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam.

no code implementations • NeurIPS 2017 • Santiago Balseiro, Max Lin, Vahab Mirrokni, Renato Paes Leme, Song Zuo

In this paper, we characterize the optimal revenue sharing scheme that satisfies both constraints in expectation.

1 code implementation • NeurIPS 2017 • Mohammadhossein Bateni, Soheil Behnezhad, Mahsa Derakhshan, Mohammadtaghi Hajiaghayi, Raimondas Kiveris, Silvio Lattanzi, Vahab Mirrokni

In particular, we propose affinity, a novel hierarchical clustering based on Boruvka's MST algorithm.
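The core step of affinity clustering, in the spirit of Boruvka's MST algorithm, is to link every point to its nearest neighbor and contract the resulting connected components; repeating on the contracted clusters yields a hierarchy. A small in-memory sketch of one round (the paper's contribution is the scalable distributed implementation, which this does not capture):

```python
import numpy as np

def affinity_round(X):
    """One Boruvka-style round: connect each point to its nearest neighbor
    and return a component label for every point."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    nn = D.argmin(axis=1)  # nearest neighbor of each point
    # Union-find over the nearest-neighbor edges.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        ri, rj = find(i), find(int(nn[i]))
        if ri != rj:
            parent[ri] = rj
    return [find(i) for i in range(n)]
```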

no code implementations • 24 May 2017 • Nicholas Harvey, Vahab Mirrokni, David Karger, Virginia Savova, Leonid Peshkin

This paper formulates a novel problem on graphs: find the minimal subset of edges in a fully connected graph such that the resulting graph contains all spanning trees for a set of specified sub-graphs.

no code implementations • NeurIPS 2016 • Aditya Bhaskara, Mehrdad Ghadiri, Vahab Mirrokni, Ola Svensson

We first study the approximation quality of the algorithm by comparing with the LP objective.

no code implementations • NeurIPS 2016 • Hossein Esfandiari, Nitish Korula, Vahab Mirrokni

In particular, in online advertising it is fairly common to optimize multiple metrics, such as clicks, conversions, and impressions, as well as other metrics which may be largely uncorrelated such as ‘share of voice’, and ‘buyer surplus’.

1 code implementation • 3 Aug 2016 • Vahab Mirrokni, Mikkel Thorup, Morteza Zadimoghaddam

Designing algorithms for balanced allocation of clients to servers in dynamic settings is a challenging problem for a variety of reasons.

Data Structures and Algorithms

no code implementations • ICML 2017 • Vahab Mirrokni, Renato Paes Leme, Adrian Vladu, Sam Chiu-wai Wong

We give a deterministic nearly-linear time algorithm for approximating any point inside a convex polytope with a sparse convex combination of the polytope's vertices.
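A classic (if slower) way to obtain such a sparse convex combination is Frank-Wolfe over the simplex of vertex weights: each iteration adds at most one vertex to the support, so after $t$ iterations the combination is $t$-sparse. This sketch illustrates the sparsity phenomenon only; it is not the deterministic nearly-linear time algorithm of the paper:

```python
import numpy as np

def sparse_convex_approx(vertices, target, iters=200):
    """Approximate `target` (assumed to lie in conv(vertices)) by a sparse
    convex combination of the vertices, via Frank-Wolfe on
    f(w) = ||sum_i w_i v_i - target||^2 over the probability simplex."""
    V = np.asarray(vertices, dtype=float)   # shape (m, d)
    target = np.asarray(target, dtype=float)
    w = np.zeros(V.shape[0])
    w[0] = 1.0
    for t in range(1, iters + 1):
        x = w @ V
        grad = 2 * (x - target)
        # Linear minimization over the simplex: the single best vertex.
        i = int(np.argmin(V @ grad))
        gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        w = (1 - gamma) * w
        w[i] += gamma
    return w
```

The weights remain a probability distribution throughout, and the error decays at the usual $O(1/t)$ Frank-Wolfe rate.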

no code implementations • NeurIPS 2014 • Mohammadhossein Bateni, Aditya Bhaskara, Silvio Lattanzi, Vahab Mirrokni

Large-scale clustering of data points in metric spaces is an important problem in mining big data sets.

no code implementations • 30 Apr 2013 • Zeyuan Allen Zhu, Silvio Lattanzi, Vahab Mirrokni

We also prove that our analysis is tight, and perform empirical evaluation to support our theory on both synthetic and real data.
