no code implementations • 11 Jan 2024 • Jesse Geneson, Linus Tang
In particular, we sharpen an upper bound for delayed, ambiguous reinforcement learning by a factor of 2 and an upper bound for learning compositions of families of functions by a factor of 2.
no code implementations • 4 Jan 2023 • Jesse Geneson, Ethan Zhou
We also obtain sharp bounds on learning $\mathcal F_{\infty, d}$ for $p < d$ when the number of trials is bounded.
no code implementations • 3 Sep 2022 • Raymond Feng, Jesse Geneson, Andrew Lee, Espen Slettnes
The only difference between the two models is that in the delayed, ambiguous model, the learner must answer each input before receiving the next input of the round, while the learner receives all $r$ inputs at once in the weak reinforcement model.
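The distinction between the two feedback models can be sketched in code. This is a minimal illustrative sketch, assuming a simple `Learner` interface with `predict` and `predict_batch` methods; the interface and function names are hypothetical, not from the paper.

```python
def delayed_ambiguous_round(learner, inputs):
    """Delayed, ambiguous model: the learner must commit to an answer
    for each input before the next input of the round is revealed."""
    answers = []
    for x in inputs:
        answers.append(learner.predict(x))  # answer before seeing the next input
    return answers

def weak_reinforcement_round(learner, inputs):
    """Weak reinforcement model: the learner receives all r inputs of the
    round at once and may answer them jointly."""
    return learner.predict_batch(list(inputs))  # whole round visible together
```

The batch variant can exploit correlations among the round's inputs, which is exactly the extra power the weak reinforcement model grants over the delayed, ambiguous model.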
no code implementations • 30 May 2021 • Jesse Geneson
We investigate the generalization of the mistake-bound model to continuous real-valued single-variable functions.
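For context, the standard mistake-bound interaction can be sketched as follows, adapted to real-valued targets. The error tolerance `eps` and the trivial memorizing learner are illustrative assumptions, not the paper's construction.

```python
def run_mistake_bound(learner, f, xs, eps=1e-9):
    """Standard mistake-bound protocol: on each trial the learner predicts
    f(x), then receives the true value as feedback; for real-valued targets
    we charge a mistake when the prediction is off by more than eps
    (eps is an illustrative assumption)."""
    mistakes = 0
    for x in xs:
        guess = learner.predict(x)
        truth = f(x)
        if abs(guess - truth) > eps:  # prediction counted as a mistake
            mistakes += 1
        learner.update(x, truth)      # feedback arrives after every trial
    return mistakes

class MemorizeLearner:
    """Trivial baseline learner: always repeats the last value it saw
    (purely illustrative)."""
    def __init__(self):
        self.last = 0.0
    def predict(self, x):
        return self.last
    def update(self, x, y):
        self.last = y
```

On a constant target this baseline makes exactly one mistake (the first trial), which is the kind of worst-case mistake count the model bounds.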
no code implementations • 18 Jan 2021 • Jesse Geneson
The proof of this result depended on the following lemma, which is false, e.g., for all primes $p \ge 5$, $s = \mathbf{1}$ (the all-$1$s vector), $t = \mathbf{2}$ (the all-$2$s vector), and all $z$.
no code implementations • 27 Aug 2019 • Carina Curto, Jesse Geneson, Katherine Morrison
We also provide further evidence for the conjecture by showing that sparse graphs and graphs that are nearly cliques can never support stable fixed points.