1 code implementation • 25 May 2023 • Sungjin Im, Benjamin Moseley, Chenyang Xu, Ruilong Zhang
This elegant model captures the trade-off between the cost of sending acknowledgements and the waiting (latency) cost experienced by requests.
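For context, the classic deterministic baseline for this model is 2-competitive: batch pending requests and send an acknowledgement as soon as their accumulated waiting cost matches the acknowledgement cost. A minimal sketch of that rule follows (the function name, unit acknowledgement cost, and time discretization are illustrative assumptions; the paper's learning-augmented algorithm is more involved):

```python
def greedy_ack_times(arrivals, ack_cost=1.0, dt=0.01):
    """arrivals: sorted request arrival times; returns acknowledgement times."""
    horizon = (arrivals[-1] if arrivals else 0.0) + ack_cost
    acks, pending, waited = [], 0, 0.0
    i, t = 0, 0.0
    while t <= horizon:
        while i < len(arrivals) and arrivals[i] <= t:
            pending += 1
            i += 1
        waited += pending * dt              # each pending request waits dt longer
        if pending and waited >= ack_cost:  # waiting cost has matched ack cost
            acks.append(round(t, 2))
            pending, waited = 0, 0.0
        t += dt
    if pending:
        acks.append(round(horizon, 2))
    return acks

print(greedy_ack_times([0.0, 0.1, 0.2, 3.0]))
```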
no code implementations • 4 Nov 2022 • Aditya Bhaskara, Sreenivas Gollapudi, Sungjin Im, Kostas Kollias, Kamesh Munagala
For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case $k=3$ probes suffice to achieve parameter-independent constant regret of $O(n^2)$.
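A hedged sketch of how this stronger probe model can be simulated: each round, probe $k$ arms, observe their realized rewards, and pull the best one. The probing rule below (probe the $k$ empirically best arms) and the Gaussian rewards are simplifying assumptions for illustration, not the paper's algorithm:

```python
import random

def probe_then_pull(means, k=3, rounds=10_000, seed=0):
    rng = random.Random(seed)
    n = len(means)
    pulls, totals, reward = [0] * n, [0.0] * n, 0.0
    for _ in range(rounds):
        # probe the k empirically best arms (unexplored arms first)
        order = sorted(range(n),
                       key=lambda a: totals[a] / pulls[a] if pulls[a] else float("inf"),
                       reverse=True)
        # in this model, a probe reveals the probed arm's realized reward
        revealed = {a: rng.gauss(means[a], 1.0) for a in order[:k]}
        best = max(revealed, key=revealed.get)  # pull the best revealed arm
        pulls[best] += 1
        totals[best] += revealed[best]
        reward += revealed[best]
    return reward / rounds

print(probe_then_pull([0.1, 0.5, 0.9, 0.3], k=3))
```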
1 code implementation • 22 Oct 2022 • Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
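One generic way to exploit several predictors, included here only as a baseline sketch, is to treat them as experts and aggregate their advice with multiplicative weights; the paper's algorithms are problem-specific and come with stronger guarantees. All names below are illustrative:

```python
import math

def multiplicative_weights(loss_rounds, eta=0.1):
    """loss_rounds: iterable of per-round loss vectors, one entry per predictor."""
    weights, total = None, 0.0
    for losses in loss_rounds:
        if weights is None:
            weights = [1.0] * len(losses)
        z = sum(weights)
        probs = [w / z for w in weights]
        total += sum(p * l for p, l in zip(probs, losses))  # expected loss this round
        # downweight predictors in proportion to the loss they incurred
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total

print(multiplicative_weights([[0.2, 0.9], [0.1, 0.8], [0.3, 0.7]]))
```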
1 code implementation • 9 Feb 2022 • Sungjin Im, Ravi Kumar, Aditya Petety, Manish Purohit
Learning-augmented algorithms -- in which traditional algorithms are augmented with machine-learned predictions -- have emerged as a framework to go beyond worst-case analysis.
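The canonical toy example of this framework, from the broader literature rather than this paper's caching setting, is ski rental with a predicted number of ski days: a trust parameter interpolates between following the prediction (consistency) and the worst-case strategy (robustness). A minimal sketch, with hypothetical names:

```python
import math

def ski_rental_with_prediction(buy_cost, predicted_days, trust=0.5):
    """Day on which to buy skis (rent until then); trust in (0, 1],
    where smaller trust means more faith in the prediction."""
    if predicted_days >= buy_cost:                  # prediction: season is long
        return max(1, math.ceil(trust * buy_cost))  # buy early
    return max(1, math.ceil(buy_cost / trust))      # prediction: season is short

print(ski_rental_with_prediction(buy_cost=10, predicted_days=30))  # buys on day 5
```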
no code implementations • NeurIPS 2021 • Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, Manish Purohit
There has been recent interest in using machine-learned predictions to improve the worst-case guarantees of online algorithms.
no code implementations • NeurIPS 2021 • Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
Second, even once the duals are made feasible they may not be optimal, so we show that they can nonetheless be used to quickly find an optimal solution.
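A minimal sketch of one simple feasibility repair for predicted duals in min-cost bipartite matching, assuming the standard dual constraints $y_u + y_v \le c(u, v)$: fix the left duals and lower each right dual until every edge is satisfied, then warm-start a Hungarian-style solver from the repaired duals. This repair is illustrative; the paper's rounding procedure differs:

```python
def repair_duals(cost, y_left, y_right):
    """cost[u][v]: edge costs; returns duals satisfying y_u + y_v <= c(u, v)."""
    y_right_fixed = []
    for v in range(len(cost[0])):
        # largest value the right dual can take given the fixed left duals
        cap = min(cost[u][v] - y_left[u] for u in range(len(cost)))
        y_right_fixed.append(min(y_right[v], cap))  # only ever lower duals
    return y_left, y_right_fixed

cost = [[4, 2], [1, 3]]
print(repair_duals(cost, y_left=[1, 1], y_right=[3, 2]))  # -> ([1, 1], [0, 1])
```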
no code implementations • 11 May 2020 • Mahmoud Abo-Khamis, Sungjin Im, Benjamin Moseley, Kirk Pruhs, Alireza Samadian
We consider gradient-descent-like algorithms for Support Vector Machine (SVM) training when the data is in relational form.
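For context, a standard Pegasos-style subgradient step for SVM on materialized data is sketched below; the paper's contribution is to carry out such gradient-descent-like updates directly on the relational representation, without materializing the join:

```python
import random

def pegasos(X, y, lam=0.1, epochs=10, seed=0):
    """Subgradient descent on the regularized hinge loss (Pegasos)."""
    rng = random.Random(seed)
    w, t = [0.0] * len(X[0]), 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # random pass over the data
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # subgradient of (lam/2)*||w||^2 + hinge loss at example i
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]]
print(pegasos(X, y=[1, 1, -1, -1]))
```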
no code implementations • 24 Mar 2020 • Mahmoud Abo-Khamis, Sungjin Im, Benjamin Moseley, Kirk Pruhs, Alireza Samadian
In contrast, we show that the situation with two additive inequalities is quite different: it is NP-hard to evaluate simple aggregation queries with two additive inequalities to within any bounded relative error.
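To make the query class concrete, here is a hypothetical aggregation (a COUNT over a join) filtered by two additive inequalities; the schema, thresholds, and function name are invented for illustration:

```python
def count_two_inequalities(R, S, c1, c2):
    """R, S: lists of (join_key, value) pairs; count joined pairs
    passing both additive inequality predicates."""
    return sum(1
               for k1, a in R
               for k2, b in S
               if k1 == k2         # join condition
               and a + b <= c1     # first additive inequality
               and a - b <= c2)    # second additive inequality

R = [(1, 3.0), (2, 5.0)]
S = [(1, 1.0), (2, 4.0)]
print(count_two_inequalities(R, S, c1=7.0, c2=3.0))  # -> 1
```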
no code implementations • 26 May 2019 • Ryan R. Curtin, Sungjin Im, Ben Moseley, Kirk Pruhs, Alireza Samadian
Our main result is that if the regularizer's effect does not become negligible as the norm of the hypothesis and the size of the data grow, then a uniform sample of modest size is, with high probability, a coreset.
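A minimal sketch of the resulting recipe, under the paper's assumption on the regularizer: draw a uniform sample, reweight each sampled point by $n/m$, and minimize the regularized loss on the sample. The sample size, loss, and regularizer below are illustrative:

```python
import random

def uniform_coreset(data, m, seed=0):
    """Uniform sample of size m with the common weight n/m."""
    rng = random.Random(seed)
    return rng.sample(data, m), len(data) / m

def coreset_objective(w, sample, weight, lam, loss):
    # weighted empirical loss on the sample plus the regularizer;
    # this approximates the full-data regularized objective w.h.p.
    return weight * sum(loss(w, x) for x in sample) + lam * (w * w)

data = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
sample, weight = uniform_coreset(data, m=4)
print(coreset_objective(1.0, sample, weight, lam=0.1,
                        loss=lambda w, x: (w - x) ** 2))
```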