no code implementations • 29 Jan 2024 • Guru Guruganesh, Yoav Kolumbus, Jon Schneider, Inbal Talgam-Cohen, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Joshua R. Wang, S. Matthew Weinberg
We initiate the study of repeated contracts with a learning agent, focusing on agents who achieve no-regret outcomes.
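As a toy illustration of the setting only (not the paper's model or construction): a principal posts a fixed linear contract and the agent picks actions via multiplicative weights, a standard no-regret rule. The action costs, outcome probabilities, and contract share below are made-up parameters.

```python
# Hypothetical repeated principal-agent loop with a multiplicative-weights agent.
# All numbers are illustrative assumptions, not from the paper.
import random

costs = [0.0, 0.2, 0.5]          # agent's cost per action
success_prob = [0.1, 0.5, 0.9]   # chance each action produces a success
alpha = 0.6                      # fixed linear contract: agent receives alpha per success
eta = 0.1                        # multiplicative-weights learning rate
T = 5000

weights = [1.0, 1.0, 1.0]
total_utility = 0.0
for t in range(T):
    total = sum(weights)
    probs = [w / total for w in weights]
    action = random.choices(range(3), weights=probs)[0]
    outcome = 1.0 if random.random() < success_prob[action] else 0.0
    total_utility += alpha * outcome - costs[action]
    # full-information update: each action's expected utility under this contract
    for a in range(3):
        u = alpha * success_prob[a] - costs[a]
        weights[a] *= (1.0 + eta * u)

best_fixed = max(alpha * success_prob[a] - costs[a] for a in range(3))
print(f"avg utility {total_utility / T:.3f} vs best fixed action {best_fixed:.3f}")
```

Over many rounds the agent's average utility approaches that of the best fixed action, which is what "no-regret" means in this context.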
no code implementations • 29 May 2019 • Badih Ghazi, Rina Panigrahy, Joshua R. Wang
The proposed sketch, a succinct summary of how a modular deep network processes its inputs, captures essential information about the inputs and outputs of the network and can be used to quickly identify key components and summary statistics of the inputs.
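A minimal, hypothetical illustration of the flavor of such a sketch (not the paper's construction): a random linear projection compresses a high-dimensional output vector while approximately preserving inner products, one kind of summary statistic.

```python
# Illustrative sketching via a Johnson-Lindenstrauss-style random projection.
# Dimensions and vectors are stand-ins, not the paper's mechanism.
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 64                                        # original dim, sketch dim
S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))     # random projection matrix

x = rng.normal(size=d)    # stand-in for one network output vector
y = rng.normal(size=d)    # stand-in for another

sx, sy = S @ x, S @ y     # the sketches: k numbers instead of d
print("true inner product    :", float(x @ y))
print("sketched inner product:", float(sx @ sy))   # close in expectation
```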
no code implementations • 3 Jul 2018 • Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang
We prove that the evolution of weight vectors in online gradient descent can encode arbitrary polynomial-space computations, even in very simple learning settings.
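For concreteness, the snippet below shows the update rule whose trajectories the result concerns, in its plainest form: online gradient descent on squared loss over a stream of examples. The paper's point is that an adversarially chosen stream can make these weight trajectories simulate arbitrary polynomial-space computations; the snippet only shows the dynamics themselves, on a benign stream.

```python
# Plain online gradient descent on squared loss; the example stream here is
# benign and illustrative, not the encoding construction from the paper.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(2)                   # the weight vector whose evolution is studied
eta = 0.1
w_true = np.array([1.0, -2.0])    # hypothetical target

for t in range(1000):
    x = rng.normal(size=2)        # the stream chooses the next example
    y = float(w_true @ x)
    grad = (w @ x - y) * x        # gradient of 0.5 * (w.x - y)^2
    w = w - eta * grad            # one step of the weight-vector evolution

print("learned weights:", w)
```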
no code implementations • 8 Jun 2018 • Tim Roughgarden, Joshua R. Wang
The goal is to design a computationally efficient online algorithm that chooses a subset of $[n]$ at each time step as a function only of the past, such that the accumulated value of the chosen subsets is as close as possible to the maximum total value of a fixed subset in hindsight.
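As a point of reference, the sketch below computes this regret-style benchmark in a deliberately easy special case that the paper does not assume: per-step values that are additive over the $n$ elements, so an independent Hedge-style choice per element suffices. The difficulty the paper addresses is that the values are submodular rather than additive, where this decomposition fails.

```python
# Per-element Hedge against the best fixed subset in hindsight, under the
# simplifying (non-paper) assumption that values are additive over elements.
import math, random

n, T, eta = 5, 2000, 0.2
random.seed(0)
# Hypothetical stream: element i contributes value values[t][i] in [-1, 1] at step t.
values = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(T)]

weights = [[1.0, 1.0] for _ in range(n)]   # per element: weight of "exclude" / "include"
online_value = 0.0
for t in range(T):
    chosen = [i for i in range(n)
              if random.random() < weights[i][1] / (weights[i][0] + weights[i][1])]
    online_value += sum(values[t][i] for i in chosen)
    for i in range(n):
        # including i earns values[t][i]; excluding it earns 0, so that weight stays put
        weights[i][1] *= math.exp(eta * values[t][i])

# best fixed subset in hindsight: keep exactly the elements with positive total value
best_fixed = sum(max(0.0, sum(values[t][i] for t in range(T))) for i in range(n))
print(f"online value {online_value:.1f} vs best fixed subset {best_fixed:.1f}")
```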
no code implementations • NeurIPS 2018 • Rad Niazadeh, Tim Roughgarden, Joshua R. Wang
Our main result is the first $\frac{1}{2}$-approximation algorithm for continuous submodular function maximization; this approximation factor of $\frac{1}{2}$ is the best possible for algorithms that only query the objective function at polynomially many points.
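An illustrative coordinate-wise "bi-greedy" sketch in the spirit of the discrete double greedy of Buchbinder et al.; it is not the paper's algorithm, and the naive grid search carries no approximation guarantee as written. It only shows the overall shape: sweep the coordinates, and fix each one between a running lower bound and upper bound using nothing but evaluations of the objective.

```python
# Coordinate-wise bi-greedy sketch using only function evaluations.
# The objective f and the grid-search rule are illustrative assumptions.
import numpy as np

def f(z):
    # toy non-monotone objective on [0, 1]^n, for illustration only
    return float(np.sum(z) - np.sum(z * z) + 0.5 * z[0] * (1.0 - z[1]))

n, grid = 4, 21
x = np.zeros(n)   # running lower bound
y = np.ones(n)    # running upper bound
for i in range(n):
    best_v, best_val = 0.0, -np.inf
    for v in np.linspace(0.0, 1.0, grid):
        xi, yi = x.copy(), y.copy()
        xi[i] = v
        yi[i] = v
        # score a candidate value by averaging "raise x to v" and "lower y to v"
        val = 0.5 * (f(xi) + f(yi))
        if val > best_val:
            best_v, best_val = v, val
    x[i] = y[i] = best_v   # both bounds now agree on coordinate i

print("point found:", x, "value:", f(x))
```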