Search Results for author: Tim Roughgarden

Found 16 papers, 1 paper with code

Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals

no code implementations · 4 Nov 2021 · Eric Neyman, Tim Roughgarden

We show that by averaging the experts' forecasts and then \emph{extremizing} the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
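A minimal sketch of the aggregation rule the abstract describes, averaging forecasts and then pushing the average away from the prior by a constant factor. The factor `d` below is a hypothetical illustration, not the constant derived in the paper:

```python
# Illustrative sketch (not the paper's exact algorithm): average the
# experts' probability forecasts, then "extremize" by moving the average
# away from the prior by a constant factor d > 1.

def extremized_average(forecasts, prior, d=1.5):
    """Average the forecasts, then move the average away from the prior."""
    avg = sum(forecasts) / len(forecasts)
    out = prior + d * (avg - prior)
    return min(1.0, max(0.0, out))  # clamp to a valid probability

# Three experts all lean above a prior of 0.5, so the extremized
# aggregate leans further in the same direction than the plain average.
print(extremized_average([0.7, 0.6, 0.8], prior=0.5))
```

The clamp step is a practical guard for the illustration; the theoretical guarantee in the paper concerns the choice of the extremization constant, which depends on the substitutability structure of the signals.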

Smoothed Analysis with Adaptive Adversaries

no code implementations · 16 Feb 2021 · Nika Haghtalab, Tim Roughgarden, Abhishek Shetty

Online discrepancy minimization: we consider the online Komlós problem, where the input is generated from an adaptive sequence of $\sigma$-smooth and isotropic distributions on the $\ell_2$ unit ball.

Byzantine Generals in the Permissionless Setting

no code implementations · 18 Jan 2021 · Andrew Lewis-Pye, Tim Roughgarden

What differentiates Bitcoin from these previously studied protocols is that it operates in a permissionless setting, i.e., it is a protocol for establishing consensus over an unknown network of participants that anybody can join, with as many identities as they like in any role.

Distributed Computing · Distributed, Parallel, and Cluster Computing

Beyond the Worst-Case Analysis of Algorithms (Introduction)

no code implementations · 26 Jul 2020 · Tim Roughgarden

One of the primary goals of the mathematical analysis of algorithms is to provide guidance about which algorithm is the "best" for solving a given computational problem.

Smoothed Analysis of Online and Differentially Private Learning

no code implementations · NeurIPS 2020 · Nika Haghtalab, Tim Roughgarden, Abhishek Shetty

Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms.

On the Computational Power of Online Gradient Descent

no code implementations · 3 Jul 2018 · Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang

We prove that the evolution of weight vectors in online gradient descent can encode arbitrary polynomial-space computations, even in very simple learning settings.

An Optimal Algorithm for Online Unconstrained Submodular Maximization

no code implementations · 8 Jun 2018 · Tim Roughgarden, Joshua R. Wang

The goal is to design a computationally efficient online algorithm, which chooses a subset of $[n]$ at each time step as a function only of the past, such that the accumulated value of the chosen subsets is as close as possible to the maximum total value of a fixed subset in hindsight.

Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization

no code implementations · NeurIPS 2018 · Rad Niazadeh, Tim Roughgarden, Joshua R. Wang

Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points.
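The paper's continuous algorithms build on the double-greedy paradigm. As a hedged illustration of that paradigm, here is a sketch of the classical deterministic double greedy of Buchbinder, Feldman, Naor, and Schwartz for the *discrete* unconstrained problem (the deterministic version achieves a 1/3-approximation; its randomized variant achieves the 1/2 factor discussed above):

```python
# Sketch of deterministic double greedy for discrete unconstrained
# submodular maximization. X grows from the empty set, Y shrinks from
# the ground set; each element is kept or discarded by comparing its
# marginal gain on each side.

def double_greedy(f, n):
    """f: set function on subsets of range(n); returns the chosen set."""
    X, Y = set(), set(range(n))
    for i in range(n):
        a = f(X | {i}) - f(X)        # marginal gain of adding i to X
        b = f(Y - {i}) - f(Y)        # marginal gain of removing i from Y
        if a >= b:
            X.add(i)
        else:
            Y.remove(i)
    return X  # X == Y when the loop ends

# Toy example: the cut function of the path 0-1-2, which is submodular.
edges = [(0, 1), (1, 2)]
def cut(S):
    return sum((u in S) != (v in S) for u, v in edges)

print(double_greedy(cut, 3))
```

On the toy path graph the algorithm recovers the maximum cut {0, 2}, which cuts both edges.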

The idemetric property: when most distances are (almost) the same

1 code implementation · 30 Apr 2018 · George Barmpalias, Neng Huang, Andrew Lewis-Pye, Angsheng Li, Xuechen Li, YiCheng Pan, Tim Roughgarden

We introduce the \emph{idemetric} property, which formalises the idea that most nodes in a graph have similar distances between them, and which turns out to be quite standard amongst small-world network models.
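An illustrative check (not from the paper, and not its formal definition) of the informal idea that pairwise distances concentrate: compare a plain ring lattice against a crude Watts-Strogatz-style graph with random shortcuts, and measure what fraction of pairwise distances land near the median distance:

```python
# Illustrative sketch: distances in a ring lattice with random shortcuts
# concentrate near the median far more than in the bare ring.
import random
from collections import deque

def bfs_distances(adj, s):
    """Breadth-first search distances from s in an adjacency dict."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ring_with_shortcuts(n, shortcuts, seed=0):
    """Cycle on n nodes plus `shortcuts` random chords."""
    rng = random.Random(seed)
    adj = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for _ in range(shortcuts):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def distance_concentration(adj, slack=1):
    """Fraction of ordered pairs whose distance is within `slack`
    of the median pairwise distance."""
    ds = sorted(d for s in adj for v, d in bfs_distances(adj, s).items() if v != s)
    median = ds[len(ds) // 2]
    return sum(abs(d - median) <= slack for d in ds) / len(ds)

print(distance_concentration(ring_with_shortcuts(200, shortcuts=0)))
print(distance_concentration(ring_with_shortcuts(200, shortcuts=100)))
```

The bare ring spreads its distances uniformly, so almost no pairs sit near the median; the shortcut graph behaves like a small-world model, where most pairs do.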

Social and Information Networks · Discrete Mathematics

The Price of Anarchy in Auctions

no code implementations · 26 Jul 2016 · Tim Roughgarden, Vasilis Syrgkanis, Eva Tardos

This survey outlines a general and modular theory for proving approximation guarantees for equilibria of auctions in complex settings.

Learning Simple Auctions

no code implementations · 11 Apr 2016 · Jamie Morgenstern, Tim Roughgarden

We present a general framework for proving polynomial sample complexity bounds for the problem of learning from samples the best auction in a class of "simple" auctions.
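A hedged sketch of the flavor of problem these sample-complexity bounds formalize (this is a textbook illustration, not the paper's framework or auction class): choose the empirically best reserve price for a single-bidder posted-price auction from a sample of values. The optimal empirical reserve is always one of the sampled values, so a finite search suffices:

```python
# Illustrative sketch: empirical revenue maximization over sampled values.
# Revenue of reserve r on a value v is r if v >= r, else 0.

def best_reserve(samples):
    """Return the reserve price maximizing average revenue on the sample."""
    best_r, best_rev = 0.0, 0.0
    for r in set(samples):  # optimal reserve is some sampled value
        rev = sum(r for v in samples if v >= r) / len(samples)
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r, best_rev

# With one large sampled value, the empirical optimum is to post it
# as the reserve, even though it excludes the other buyers.
print(best_reserve([1.0, 2.0, 3.0, 10.0]))  # → (10.0, 2.5)
```

The sample-complexity question the paper studies is how many such samples are needed before the empirically best auction in a class is near-optimal on the true value distribution.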

On the Pseudo-Dimension of Nearly Optimal Auctions

no code implementations · NeurIPS 2015 · Jamie H. Morgenstern, Tim Roughgarden

This paper develops a general approach, rooted in statistical learning theory, to learning an approximately revenue-maximizing auction from data.

Learning Theory

A PAC Approach to Application-Specific Algorithm Selection

no code implementations · 23 Nov 2015 · Rishi Gupta, Tim Roughgarden

While there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem.

Learning Theory

Tight Error Bounds for Structured Prediction

no code implementations · 19 Sep 2014 · Amir Globerson, Tim Roughgarden, David Sontag, Cafer Yildirim

We show that the prospects for achieving low expected Hamming error depend on the structure of the graph $G$ in interesting ways.

Structured Prediction

Privately Solving Linear Programs

no code implementations · 15 Feb 2014 · Justin Hsu, Aaron Roth, Tim Roughgarden, Jonathan Ullman

In this paper, we initiate the systematic study of solving linear programs under differential privacy.

Marginals-to-Models Reducibility

no code implementations · NeurIPS 2013 · Tim Roughgarden, Michael Kearns

We consider a number of classical and new computational problems regarding marginal distributions, and inference in models specifying a full joint distribution.
