Search Results for author: Mark Braverman

Found 13 papers, 0 papers with code

Selling to a No-Regret Buyer

no code implementations • 25 Nov 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg

There exists a learning algorithm $\mathcal{A}$ such that, if the buyer bids according to $\mathcal{A}$, the optimal strategy for the seller is simply to post the Myerson reserve for the value distribution $D$ every round.
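
As a quick illustration of the benchmark in this result, the sketch below computes the Myerson reserve (the monopoly price maximizing $r \cdot (1 - F(r))$) for a given value distribution by grid search; it is a standard textbook computation, not code from the paper, and the uniform CDF is only an example.

```python
import numpy as np

def myerson_reserve(cdf, lo=0.0, hi=1.0, grid=10_000):
    """Grid-search the monopoly price r maximizing r * (1 - F(r))."""
    prices = np.linspace(lo, hi, grid)
    revenue = prices * (1.0 - cdf(prices))
    return prices[np.argmax(revenue)]

# Example: for values drawn uniformly from [0, 1], the reserve is 1/2.
uniform_cdf = lambda x: np.clip(x, 0.0, 1.0)
print(myerson_reserve(uniform_cdf))  # ~0.5
```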

Multi-armed Bandit Problems with Strategic Arms

no code implementations • 27 Jun 2017 • Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg

We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round.
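
The paper's model and mechanisms are more involved; the toy simulation below only conveys the flavor of the setting, under the assumption (mine, not the paper's) that each strategic arm passes on a fixed fraction of its reward while the principal runs epsilon-greedy on what it observes.

```python
import random

# Toy simulation (not the paper's mechanism): each strategic arm has a true
# mean reward and chooses what fraction of it to pass on to the principal,
# who runs epsilon-greedy on the rewards it actually observes.
random.seed(0)
true_means = [0.9, 0.6, 0.3]
pass_fraction = [0.5, 0.9, 1.0]   # assumed fixed strategies for illustration
counts = [0] * 3
estimates = [0.0] * 3

for t in range(10_000):
    if random.random() < 0.1:                        # explore
        arm = random.randrange(3)
    else:                                            # exploit
        arm = max(range(3), key=lambda i: estimates[i])
    reward = true_means[arm] * pass_fraction[arm] + random.gauss(0, 0.05)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# Strategic withholding can steer the principal away from the best true arm.
print(counts, [round(e, 3) for e in estimates])
```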

Communication Lower Bounds for Statistical Estimation Problems via a Distributed Data Processing Inequality

no code implementations • 24 Jun 2015 • Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, David P. Woodruff

We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions.
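
The protocol below is not from the paper; it is a minimal illustration of the error-versus-communication tradeoff in distributed estimation, assuming each machine sends only a b-bit uniformly quantized local mean of Gaussian data.

```python
import numpy as np

# Illustrative only (not the paper's protocol): m machines each hold local
# Gaussian samples and send a b-bit uniformly quantized local mean in [-1, 1];
# coarser quantization (less communication) inflates the estimation error.
rng = np.random.default_rng(0)

def distributed_mean(mu, m=50, n=100, bits=4):
    levels = 2 ** bits
    msgs = []
    for _ in range(m):
        local = rng.normal(mu, 1.0, size=n).mean()
        q = np.clip(np.round((local + 1) / 2 * (levels - 1)), 0, levels - 1)
        msgs.append(q / (levels - 1) * 2 - 1)        # dequantize on the server
    return np.mean(msgs)

for bits in (1, 2, 4, 8):
    est = distributed_mean(mu=0.3, bits=bits)
    print(bits, "bits/machine -> error", abs(est - 0.3))
```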

Sorted Top-k in Rounds

no code implementations • 12 Jun 2019 • Mark Braverman, Jieming Mao, Yuval Peres

When the comparisons are noiseless, we characterize how the optimal sample complexity depends on the number of rounds (up to a polylogarithmic factor for general $r$ and up to a constant factor for $r = 1$ or $2$).
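
For intuition about the round-limited model, here is a toy sketch of the noiseless single-round ($r = 1$) regime, where all pairwise comparisons are issued in one batch; it is not one of the paper's algorithms.

```python
from itertools import combinations

# Toy illustration of the noiseless, single-round (r = 1) regime: issue every
# pairwise comparison in one batch, then read off the sorted top-k from the
# number of wins. This uses ~n^2/2 comparisons; the paper characterizes how
# many are actually needed as a function of the number of rounds r.
def sorted_top_k_one_round(items, k):
    wins = {x: 0 for x in items}
    for a, b in combinations(items, 2):      # one parallel batch of comparisons
        if a > b:
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(items, key=lambda x: wins[x], reverse=True)[:k]

print(sorted_top_k_one_round([5, 1, 9, 3, 7, 2], k=3))  # [9, 7, 5]
```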

Calibration, Entropy Rates, and Memory in Language Models

no code implementations • ICML 2020 • Mark Braverman, Xinyi Chen, Sham M. Kakade, Karthik Narasimhan, Cyril Zhang, Yi Zhang

Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing.

Convex Set Disjointness, Distributed Learning of Halfspaces, and LP Feasibility

no code implementations • 8 Sep 2019 • Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena

For Convex Set Disjointness (and the equivalent task of distributed LP feasibility) we derive upper and lower bounds of $\tilde O(d^2\log n)$ and $\Omega(d\log n)$.

Distributed Optimization, LEMMA
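
The sketch below is a centralized (single-machine) illustration of the Convex Set Disjointness task: deciding whether the convex hulls of Alice's and Bob's point sets intersect by solving a feasibility LP with `scipy.optimize.linprog`. It says nothing about the communication bounds themselves.

```python
import numpy as np
from scipy.optimize import linprog

# Centralized illustration of Convex Set Disjointness (no communication model):
# conv(A) and conv(B) intersect iff the LP
#   sum_i l_i a_i = sum_j m_j b_j,  l, m >= 0,  sum l = sum m = 1
# is feasible, which we test with a zero objective.
def hulls_intersect(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    nA, nB, d = len(A), len(B), A.shape[1]
    # Variables: [l_1..l_nA, m_1..m_nB]
    A_eq = np.zeros((d + 2, nA + nB))
    A_eq[:d, :nA] = A.T
    A_eq[:d, nA:] = -B.T
    A_eq[d, :nA] = 1.0          # sum l = 1
    A_eq[d + 1, nA:] = 1.0      # sum m = 1
    b_eq = np.concatenate([np.zeros(d), [1.0, 1.0]])
    res = linprog(c=np.zeros(nA + nB), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (nA + nB))
    return res.success

print(hulls_intersect([[0, 0], [2, 0], [0, 2]], [[1, 1], [3, 3]]))   # True
print(hulls_intersect([[0, 0], [1, 0]], [[0, 2], [1, 2]]))           # False
```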

The gradient complexity of linear regression

no code implementations • 6 Nov 2019 • Mark Braverman, Elad Hazan, Max Simchowitz, Blake Woodworth

We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle.

regression
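
To make the access model concrete, the sketch below solves a least-squares problem while touching the data matrix only through matrix-vector product oracles (via conjugate gradient on the normal equations) and counts the oracle calls. It illustrates the oracle model only, not the paper's algorithms or lower-bound constructions.

```python
import numpy as np

# The data matrix A is accessed only through matrix-vector products; we count
# how many oracle calls (matvecs with A or A^T) a conjugate-gradient solve of
# the normal equations A^T A x = A^T b uses.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 20))
b = rng.normal(size=200)

calls = 0
def matvec(v):            # oracle: v -> A v
    global calls
    calls += 1
    return A @ v

def rmatvec(v):           # oracle: v -> A^T v
    global calls
    calls += 1
    return A.T @ v

def cg_least_squares(n, iters=50, tol=1e-10):
    x = np.zeros(n)
    r = rmatvec(b) - rmatvec(matvec(x))      # residual of the normal equations
    p, rs = r.copy(), r @ r
    for _ in range(iters):
        Ap = rmatvec(matvec(p))
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

x = cg_least_squares(A.shape[1])
print(calls, np.linalg.norm(A.T @ (A @ x - b)))   # oracle calls, residual norm
```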

The Role of Randomness and Noise in Strategic Classification

no code implementations • 17 May 2020 • Mark Braverman, Sumegha Garg

We show that if the objective is to maximize the efficiency of the classification process (defined as the accuracy of the outcome minus the sunk cost of the qualified players manipulating their features to gain a better outcome), then using randomized classifiers (that is, classifiers for which the probability that a given feature vector is accepted is strictly between 0 and 1) is necessary.

Classification, Fairness +1
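
A minimal sketch of the deterministic-versus-randomized distinction, with an assumed sigmoid acceptance rule; it is not the paper's construction.

```python
import numpy as np

# A deterministic threshold classifier accepts a feature vector with
# probability 0 or 1, whereas a randomized classifier maps each score to an
# acceptance probability strictly between 0 and 1, here via a sigmoid.
def deterministic_accept(score, threshold=0.0):
    return 1.0 if score >= threshold else 0.0

def randomized_accept_prob(score, temperature=1.0):
    return 1.0 / (1.0 + np.exp(-score / temperature))   # always in (0, 1)

for s in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(s, deterministic_accept(s), round(randomized_accept_prob(s), 3))
```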

Prior-free Dynamic Mechanism Design With Limited Liability

no code implementations • 2 Mar 2021 • Mark Braverman, Jon Schneider, S. Matthew Weinberg

We show that under these constraints, the auctioneer can attain a constant fraction of the "sell the business" benchmark, but no more than $2/e$ of this benchmark.

Computer Science and Game Theory, Theoretical Economics

Optimization-friendly generic mechanisms without money

no code implementations • 14 Jun 2021 • Mark Braverman

The framework is sufficiently general to be combined with any optimization algorithm that is based on local search.
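
For readers unfamiliar with the term, the snippet below is a generic hill-climbing local search of the kind such a framework could be combined with; `objective` and `neighbors` are hypothetical hooks, and nothing here is the paper's mechanism.

```python
import random

# Generic hill-climbing local search: repeatedly sample a neighbor and keep it
# only if it improves the objective. Placeholder optimizer for illustration.
def local_search(initial, neighbors, objective, steps=1000, seed=0):
    rng = random.Random(seed)
    current = initial
    for _ in range(steps):
        candidate = rng.choice(neighbors(current))
        if objective(candidate) > objective(current):   # accept improving moves
            current = candidate
    return current

# Example: maximize -(x - 3)^2 over the integers by +-1 moves.
best = local_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
print(best)  # 3
```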

Statistically Near-Optimal Hypothesis Selection

no code implementations • 17 Aug 2021 • Olivier Bousquet, Mark Braverman, Klim Efremenko, Gillat Kol, Shay Moran

We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q) \leq 2 \cdot \mathrm{opt} + \epsilon$, with a (nearly) optimal sample complexity of $\tilde O(\log n/\epsilon^2)$.

PAC learning
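
As a simple baseline for comparison (not the paper's $2$-approximation strategy), the sketch below performs minimum-distance hypothesis selection over a finite domain: pick the candidate closest in total variation to the empirical distribution of the samples.

```python
import numpy as np

# Baseline hypothesis selection over a finite domain: given candidate
# distributions and samples from the unknown p, return the candidate closest
# in total variation to the empirical distribution of the samples.
def tv(p, q):
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def select_hypothesis(candidates, samples, domain_size):
    empirical = np.bincount(samples, minlength=domain_size) / len(samples)
    return min(range(len(candidates)), key=lambda i: tv(candidates[i], empirical))

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])                       # unknown truth
candidates = [np.array([0.55, 0.25, 0.20]),         # close to p
              np.array([0.20, 0.40, 0.40])]         # far from p
samples = rng.choice(3, size=2000, p=p)
print(select_hypothesis(candidates, samples, domain_size=3))  # expect 0
```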

Understanding Influence Functions and Datamodels via Harmonic Analysis

no code implementations • 3 Oct 2022 • Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora

Influence functions estimate the effect of individual training points on the model's predictions on test data and were adapted to deep learning by Koh and Liang [2017].

Data Poisoning
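
The sketch below shows the classical influence-function approximation in the style of Koh and Liang [2017] for plain least squares, compared against exact leave-one-out refitting; the paper's harmonic-analysis tools for datamodels are not reproduced here.

```python
import numpy as np

# Classic influence-function estimate for least squares: approximate how
# removing one training point changes a test prediction, then compare against
# exact leave-one-out refitting.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=d)

w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
H = 2.0 * X.T @ X / n                         # Hessian of the empirical risk
H_inv = np.linalg.inv(H)

i = 7                                          # training point to remove
grad_i = 2.0 * (X[i] @ w_hat - y[i]) * X[i]    # gradient of its squared loss
delta_w = H_inv @ grad_i / n                   # first-order parameter change
influence_pred = x_test @ delta_w              # predicted change at x_test

w_loo = np.linalg.lstsq(np.delete(X, i, 0), np.delete(y, i), rcond=None)[0]
exact_pred = x_test @ (w_loo - w_hat)
print(influence_pred, exact_pred)              # should be close
```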

Welfare Distribution in Two-sided Random Matching Markets

no code implementations • 16 Feb 2023 • Itai Ashlagi, Mark Braverman, Geng Zhao

In the model, each agent has a latent personal score for every agent on the other side of the market, and her preferences follow a logit model based on these scores.

Vocal Bursts Valence Prediction
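
A minimal sketch of the stated preference model, with illustrative market sizes and score distribution assumed here: latent scores plus Gumbel noise yield a logit (Plackett-Luce) ranking.

```python
import numpy as np

# Each agent i has latent scores s_{ij} for the agents j on the other side;
# her preference list is a logit (Plackett-Luce) ranking, sampled with the
# standard Gumbel-noise trick.
rng = np.random.default_rng(0)
n_left, n_right = 4, 5
scores = rng.normal(size=(n_left, n_right))          # latent personal scores

def logit_preference_list(score_row, rng):
    noisy = score_row + rng.gumbel(size=score_row.shape)
    return list(np.argsort(-noisy))                   # most preferred first

preferences = [logit_preference_list(scores[i], rng) for i in range(n_left)]
for i, prefs in enumerate(preferences):
    print("left agent", i, "->", prefs)
```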
