Search Results for author: Joshua R. Wang

Found 5 papers, 0 papers with code

Contracting with a Learning Agent

no code implementations • 29 Jan 2024 • Guru Guruganesh, Yoav Kolumbus, Jon Schneider, Inbal Talgam-Cohen, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Joshua R. Wang, S. Matthew Weinberg

We initiate the study of repeated contracts with a learning agent, focusing on agents who achieve no-regret outcomes.
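The paper provides no code; for intuition, here is a minimal sketch of the kind of no-regret learner such an agent might run: multiplicative weights (Hedge) over a finite action set, with a hypothetical per-round payoff matrix standing in for the utilities induced by the principal's contracts.

```python
import numpy as np

def hedge_agent(payoffs, eta=0.1, rng=None):
    """Multiplicative-weights (Hedge) learner: a standard no-regret
    algorithm of the kind the paper's agents are assumed to satisfy.

    payoffs: (T, k) array; payoffs[t, a] is the agent's utility for
    action a in round t (a hypothetical stand-in for the utility the
    round-t contract induces). Returns the sequence of chosen actions.
    """
    rng = rng or np.random.default_rng(0)
    T, k = payoffs.shape
    weights = np.ones(k)
    actions = []
    for t in range(T):
        probs = weights / weights.sum()
        actions.append(rng.choice(k, p=probs))
        # Full-information update: reweight every action by its payoff.
        weights *= np.exp(eta * payoffs[t])
    return actions
```

With a suitable step size, Hedge guarantees regret $O(\sqrt{T \log k})$ against the best fixed action in hindsight, which is the no-regret property the paper assumes.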

Recursive Sketches for Modular Deep Learning

no code implementations • 29 May 2019 • Badih Ghazi, Rina Panigrahy, Joshua R. Wang

The sketch summarizes essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs.
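No implementation accompanies the paper; as a much-simplified illustration of the sketching idea (not the paper's recursive construction), a random linear projection compresses a module's input or output activation into a short summary that approximately preserves inner products:

```python
import numpy as np

def make_sketcher(in_dim, sketch_dim, seed=0):
    """Johnson-Lindenstrauss-style random linear sketch: a simplified
    stand-in for the paper's recursive sketches. Maps a module's
    activation vector to a short summary vector."""
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(sketch_dim), size=(sketch_dim, in_dim))
    return lambda x: R @ x

sketch = make_sketcher(in_dim=1024, sketch_dim=32)
x = np.random.default_rng(1).normal(size=1024)
s = sketch(x)  # 32-dim summary of the 1024-dim activation
```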

On the Computational Power of Online Gradient Descent

no code implementations • 3 Jul 2018 • Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang

We prove that the evolution of weight vectors in online gradient descent can encode arbitrary polynomial-space computations, even in very simple learning settings.
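To make the object of study concrete, here is the plain online gradient descent update whose weight-vector trajectory the paper analyzes (a generic sketch, not code from the paper):

```python
import numpy as np

def online_gradient_descent(grads, w0, eta=0.1):
    """Unprojected OGD over a stream of loss gradients.
    grads: iterable of callables g_t(w) returning the gradient of
    loss_t at w. The paper shows that the trajectory (w_0, w_1, ...)
    of exactly this update rule can itself encode computation."""
    w = np.array(w0, dtype=float)
    trajectory = [w.copy()]
    for g in grads:
        w -= eta * g(w)  # w_{t+1} = w_t - eta * grad loss_t(w_t)
        trajectory.append(w.copy())
    return trajectory
```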

An Optimal Algorithm for Online Unconstrained Submodular Maximization

no code implementations • 8 Jun 2018 • Tim Roughgarden, Joshua R. Wang

The goal is to design a computationally efficient online algorithm, which chooses a subset of $[n]$ at each time step as a function only of the past, such that the accumulated value of the chosen subsets is as close as possible to the maximum total value of a fixed subset in hindsight.
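The paper's optimal algorithm is not reproduced here; the following sketch only makes the benchmark concrete, comparing an exhaustive best-fixed-subset-in-hindsight computation (feasible only for tiny $n$) against the trivial online strategy of playing a uniformly random subset each round, which earns at least a $1/4$ fraction of any set's value in expectation for nonnegative submodular functions (Feige et al.), well short of the paper's guarantee:

```python
import itertools
import numpy as np

def hindsight_benchmark(f_seq, n):
    """Best fixed subset in hindsight for set functions f_t: 2^[n] -> R.
    Exhaustive search over all 2^n subsets; illustration only."""
    best = -np.inf
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            best = max(best, sum(f(frozenset(S)) for f in f_seq))
    return best

def random_subset_baseline(f_seq, n, rng=None):
    """Trivial online strategy: a uniformly random subset each round.
    Shown only as a weak baseline to contrast with the benchmark."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for f in f_seq:
        S = frozenset(i for i in range(n) if rng.random() < 0.5)
        total += f(S)
    return total
```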

Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization

no code implementations • NeurIPS 2018 • Rad Niazadeh, Tim Roughgarden, Joshua R. Wang

Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points.
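The paper's algorithm is a carefully randomized coordinate-wise "bi-greedy" procedure; the toy sketch below is a grid-discretized randomized double greedy in that spirit, using only value queries to the objective. It is not the paper's algorithm and is not claimed to achieve the tight $\frac{1}{2}$ factor:

```python
import numpy as np

def bi_greedy_sketch(f, n, grid=11, rng=None):
    """Grid-discretized randomized double greedy for a nonnegative
    continuous submodular f: [0,1]^n -> R_{>=0}. Maintains a lower
    solution x and an upper solution y and fixes one coordinate per
    step, randomizing between the two best coordinate moves in
    proportion to their gains."""
    rng = rng or np.random.default_rng(0)
    zs = np.linspace(0.0, 1.0, grid)
    x, y = np.zeros(n), np.ones(n)  # lower / upper solutions
    for i in range(n):
        def at(v, z):
            w = v.copy(); w[i] = z; return w
        gains_x = np.array([f(at(x, z)) for z in zs]) - f(x)
        gains_y = np.array([f(at(y, z)) for z in zs]) - f(y)
        za, da = zs[gains_x.argmax()], gains_x.max()  # >= 0: z = x_i is on the grid
        zb, db = zs[gains_y.argmax()], gains_y.max()  # >= 0: z = y_i is on the grid
        p = 1.0 if da + db == 0 else da / (da + db)
        x[i] = y[i] = za if rng.random() < p else zb
    return x  # x == y once every coordinate is fixed
```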
