no code implementations • 25 Nov 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids according to $\mathcal{A}$ then the optimal strategy for the seller is simply to post the Myerson reserve for $D$ every round.
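The "Myerson reserve" in the result above is the standard monopoly price: the posted price $r$ maximizing expected revenue $r \cdot (1 - F(r))$ under value distribution $D$ with CDF $F$. A minimal grid-search sketch (illustrative only, not the paper's learning algorithm $\mathcal{A}$):

```python
import numpy as np

def myerson_reserve(cdf, grid):
    """Monopoly (Myerson) reserve price: the posted price r that
    maximizes expected revenue r * (1 - F(r))."""
    revenue = grid * (1.0 - cdf(grid))
    return grid[np.argmax(revenue)]

# Example: values uniform on [0, 1], so F(x) = x; the reserve is 1/2.
grid = np.linspace(0.0, 1.0, 100001)
r = myerson_reserve(lambda x: x, grid)
```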
no code implementations • 27 Jun 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg
We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round.
no code implementations • 24 Jun 2015 • Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, David P. Woodruff
We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions.
no code implementations • 12 Jun 2019 • Mark Braverman, Jieming Mao, Yuval Peres
When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r=1$ or 2).
no code implementations • ICML 2020 • Mark Braverman, Xinyi Chen, Sham M. Kakade, Karthik Narasimhan, Cyril Zhang, Yi Zhang
Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing.
no code implementations • 8 Sep 2019 • Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena
For Convex Set Disjointness (and the equivalent task of distributed LP feasibility) we derive upper and lower bounds of $\tilde O(d^2\log n)$ and $\Omega(d\log n)$.
no code implementations • 6 Nov 2019 • Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth
We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle.
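In this oracle model, the algorithm may touch the matrix only through products $v \mapsto Av$. A generic sketch of largest-eigenvector computation via power iteration under that access restriction (a standard method, not necessarily the paper's algorithm):

```python
import numpy as np

def top_eigenvector(matvec, n, iters=500, seed=0):
    """Power iteration for a symmetric PSD matrix A, accessed only
    through the matrix-vector product oracle `matvec(v) = A @ v`."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    # Return the eigenvector estimate and its Rayleigh quotient.
    return v, v @ matvec(v)

A = np.diag([3.0, 1.0, 0.5])
vec, val = top_eigenvector(lambda v: A @ v, 3)
```

Each iteration costs exactly one oracle call, which is the resource the lower bounds in this model are stated against.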
no code implementations • 17 May 2020 • Mark Braverman, Sumegha Garg
We show that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost incurred by qualified players manipulating their features to gain a better outcome), then randomized classifiers are necessary, that is, classifiers in which the probability that a given feature vector is accepted is strictly between 0 and 1.
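For illustration, a randomized classifier in this sense maps each feature vector to an acceptance probability strictly inside $(0, 1)$. A minimal sketch using a logistic function of a scalar score (an assumed form, for illustration only, not the paper's construction):

```python
import math

def accept_prob(score):
    """Randomized classifier: maps a feature score to an acceptance
    probability strictly between 0 and 1 (logistic form, assumed)."""
    return 1.0 / (1.0 + math.exp(-score))

p = accept_prob(0.0)  # a score of 0 is accepted with probability 1/2
```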
no code implementations • 2 Mar 2021 • Mark Braverman, Jon Schneider, S. Matthew Weinberg
We show that under these constraints, the auctioneer can attain a constant fraction of the "sell the business" benchmark, but no more than $2/e$ of this benchmark.
Computer Science and Game Theory • Theoretical Economics
no code implementations • 14 Jun 2021 • Mark Braverman
The framework is sufficiently general to be combined with any optimization algorithm that is based on local search.
no code implementations • 17 Aug 2021 • Olivier Bousquet, Mark Braverman, Klim Efremenko, Gillat Kol, Shay Moran
We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q) \leq 2 \cdot \mathrm{opt} + \epsilon$, with a (nearly) optimal sample complexity of $\tilde O(\log n/\epsilon^2)$.
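The guarantee above is stated in total variation distance. A minimal sketch of that metric on a finite support (only the metric; the selection strategy itself is not shown here):

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between discrete distributions on the
    same finite support: TV(p, q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

d = tv_distance([0.5, 0.5], [0.9, 0.1])  # = 0.4
```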
no code implementations • 3 Oct 2022 • Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora
Influence functions estimate the effect of individual training points on a model's predictions on test data; they were adapted to deep learning by Koh and Liang [2017].
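The classical influence-function estimate of Koh and Liang [2017] scores a training point $z$ against a test point $z_{\mathrm{test}}$ as $-\nabla_\theta L(z_{\mathrm{test}})^\top H^{-1} \nabla_\theta L(z)$, where $H$ is the loss Hessian at the trained parameters. A minimal sketch assuming the gradients and Hessian have already been computed:

```python
import numpy as np

def influence(grad_test, hessian, grad_train):
    """Influence of up-weighting one training point on a test loss
    (Koh & Liang 2017): -g_test^T H^{-1} g_train."""
    return -grad_test @ np.linalg.solve(hessian, grad_train)

# Toy example with an identity Hessian: reduces to -g_test . g_train.
val = influence(np.array([1.0, 0.0]), np.eye(2), np.array([2.0, 3.0]))
```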
no code implementations • 16 Feb 2023 • Itai Ashlagi, Mark Braverman, Geng Zhao
In the model, each agent has a latent personal score for every agent on the other side of the market and her preferences follow a logit model based on these scores.
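A hedged sketch of how a logit model turns latent personal scores into choice probabilities over partners, via the standard softmax form (illustrative; the paper's exact parameterization may differ):

```python
import numpy as np

def logit_choice_probs(scores):
    """Multinomial logit: the probability of choosing each partner is
    proportional to exp(score). Shift by the max for stability."""
    z = np.exp(scores - np.max(scores))
    return z / z.sum()

# Higher latent scores yield proportionally higher choice probabilities.
probs = logit_choice_probs(np.array([2.0, 1.0, 0.0]))
```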