no code implementations • NeurIPS 2023 • Eric Balkanski, Noemie Perivier, Clifford Stein, Hao-Ting Wei
We show that, when the prediction error is small, this framework gives improved competitive ratios for many different energy-efficient scheduling problems, including energy minimization with deadlines, while also maintaining a bounded competitive ratio regardless of the prediction error.
no code implementations • 2 May 2022 • Eric Balkanski, Tingting Ou, Clifford Stein, Hao-Ting Wei
In the context of scheduling, very recent work has leveraged machine-learned predictions to design algorithms that achieve improved approximation ratios in settings where the processing times of the jobs are initially unknown.
no code implementations • 21 Feb 2022 • Eric Balkanski, Oussama Hanguir, Shatian Wang
To the best of our knowledge, these are the first algorithms with $\mathrm{poly}(n, m)$ query complexity for learning non-trivial families of hypergraphs that have a super-constant number of edges of super-constant size.
no code implementations • 23 Feb 2021 • Eric Balkanski, Sharon Qian, Yaron Singer
A major question is therefore how to measure the performance of an algorithm in comparison to an optimal solution on instances we encounter in practice.
no code implementations • NeurIPS 2020 • Ron Kupfer, Sharon Qian, Eric Balkanski, Yaron Singer
Both the upper and lower bounds are under the assumption that queries are only on feasible sets (i.e., of size at most $k$).
no code implementations • 22 Oct 2020 • Eric Balkanski, Harrison Chase, Kojin Oshiba, Alexander Rilee, Yaron Singer, Richard Wang
Nevertheless, we generalize SCAR to design attacks that fool state-of-the-art check processing systems using unnoticeable perturbations that lead to misclassification of deposit amounts.
2 code implementations • ICML 2020 • Adam Breuer, Eric Balkanski, Yaron Singer
Recent algorithms have comparable guarantees in terms of asymptotic worst-case analysis, but their actual number of rounds and query complexity depend on very large constants and on high-degree polynomials in the precision and confidence parameters, making them impractical for large data sets.
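For context, the sequential baseline that such low-adaptivity algorithms compete with is the classical greedy for monotone submodular maximization under a cardinality constraint, which uses $k$ fully sequential rounds. Below is a minimal generic sketch (not the paper's algorithm; the coverage function and element names are made-up examples):

```python
def greedy_max(f, ground, k):
    """Classical sequential greedy: k rounds, each adding the element with
    the largest marginal gain. Achieves a (1 - 1/e) guarantee for monotone
    submodular f, but its k rounds are what low-adaptivity methods reduce."""
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Example: a coverage function f(S) = size of the union of the chosen sets.
sets = {"x": {1, 2, 3}, "y": {3, 4}, "z": {5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
chosen = greedy_max(f, list(sets), 2)
```

Here greedy first picks "x" (marginal gain 3), then either remaining element (gain 1 each), covering 4 items in total.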
no code implementations • 12 Aug 2018 • Eric Balkanski, Yaron Singer
For the problem of minimizing a non-smooth convex function $f:[0, 1]^n\to \mathbb{R}$ over the unit Euclidean ball, we give a tight lower bound showing that even when $\texttt{poly}(n)$ queries can be executed in parallel, there is no randomized algorithm with $\tilde{o}(n^{1/3})$ rounds of adaptivity whose convergence rate is better than that achievable with a one-query-per-round algorithm.
no code implementations • ICML 2018 • Eric Balkanski, Yaron Singer
In particular, we show that under very mild curvature conditions on the function, adaptive sampling techniques achieve an approximation arbitrarily close to 1/2 while maintaining their low adaptivity.
no code implementations • ICML 2018 • Nir Rosenfeld, Eric Balkanski, Amir Globerson, Yaron Singer
Submodular functions have become a ubiquitous tool in machine learning.
no code implementations • NeurIPS 2017 • Eric Balkanski, Yaron Singer
In this paper we consider the problem of minimizing a submodular function from training data.
no code implementations • NeurIPS 2017 • Eric Balkanski, Umar Syed, Sergei Vassilvitskii
We first show that when cost functions come from the family of submodular functions with bounded curvature, $\kappa$, the Shapley value can be approximated from samples up to a $\sqrt{1 - \kappa}$ factor, and that the bound is tight.
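For illustration, here is a standard permutation-sampling Monte Carlo estimator for the Shapley value (a textbook sketch, not the paper's sample-based mechanism, which only observes the costs of randomly drawn coalitions; the additive cost function and player names below are made-up examples):

```python
import random

def shapley_estimate(cost, players, num_samples=2000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over random arrival orders of the players."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(num_samples):
        order = list(players)
        rng.shuffle(order)
        coalition, prev = set(), cost(set())
        for p in order:
            coalition.add(p)
            cur = cost(coalition)
            phi[p] += cur - prev  # marginal contribution of p in this order
            prev = cur
    return {p: v / num_samples for p, v in phi.items()}

# For an additive cost, each player's Shapley value is exactly its own weight.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
values = shapley_estimate(lambda S: sum(weights[p] for p in S), list(weights))
```

For additive (curvature $\kappa = 0$) functions the estimator is exact; the interesting regime in the abstract is $\kappa > 0$, where only a $\sqrt{1 - \kappa}$-factor approximation from samples is possible.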
no code implementations • NeurIPS 2016 • Eric Balkanski, Aviad Rubinstein, Yaron Singer
In this paper we show that for any monotone submodular function with curvature $c$ there is a $(1 - c)/(1 + c - c^2)$ approximation algorithm for maximization under cardinality constraints when polynomially-many samples are drawn from the uniform distribution over feasible sets.
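The guarantee degrades gracefully with the curvature; a quick numeric sketch of the bound from the abstract, assuming $c \in [0, 1]$:

```python
def approx_ratio(c):
    """Approximation guarantee (1 - c) / (1 + c - c^2) as a function of
    the curvature c, taken from the abstract above."""
    return (1 - c) / (1 + c - c**2)

# c = 0 (modular): ratio 1.  c = 1 (e.g. coverage functions): ratio 0.
for c in (0.0, 0.25, 0.5, 1.0):
    print(f"c = {c:.2f}: ratio = {approx_ratio(c):.3f}")
```

At $c = 0$ the ratio is $1$ and at $c = 1$ it vanishes, matching the intuition that fully curved functions (such as coverage functions) are the hard case.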
no code implementations • 19 Dec 2015 • Eric Balkanski, Aviad Rubinstein, Yaron Singer
In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution.