Search Results for author: Elena Grigorescu

Found 6 papers, 0 papers with code

Learning-Augmented Algorithms for Online Linear and Semidefinite Programming

no code implementations • 21 Sep 2022 • Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson Zhou

We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency.
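To make "consistency" concrete: a learning-augmented online algorithm is handed a predicted solution and should nearly match its cost whenever the prediction is good. Below is a minimal Python sketch of that idea for online fractional covering LPs (constraint rows arrive one at a time). The function `online_covering_with_prediction` and its greedy fallback are illustrative assumptions, not the paper's algorithm, and the sketch shows only the consistency side; the paper's algorithms additionally guarantee robustness against bad predictions, which this toy omits.

```python
import numpy as np

def online_covering_with_prediction(costs, rows, x_hat):
    """Toy online fractional covering LP: minimize costs @ x subject to
    a @ x >= 1 for each arriving row a, with x >= 0.

    `x_hat` is an external prediction of a good solution (an assumption
    for this sketch). Strategy: trust the prediction on the variables
    each arriving constraint touches; if the constraint is still
    violated, cheaply raise the most cost-effective variable.
    """
    x = np.zeros(len(costs))
    for a in rows:                  # constraints revealed online
        # Adopt the predicted mass on the variables this row touches.
        x = np.maximum(x, np.where(a > 0, x_hat, 0.0))
        if a @ x >= 1:
            continue                # prediction already covers this row
        # Fallback: greedily raise the variable with the best
        # coverage-per-cost ratio until the constraint holds.
        ratios = np.where(a > 0, costs / np.maximum(a, 1e-12), np.inf)
        j = int(np.argmin(ratios))
        x[j] += (1 - a @ x) / a[j]  # constraint now holds with equality
    return x
```

If the prediction is feasible, the fallback never fires and the online cost is at most the prediction's cost, which is the consistency property the abstract refers to.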

Hardness of Maximum Likelihood Learning of DPPs

no code implementations • 24 May 2022 • Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie

In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2011) that the problem of finding a maximum likelihood DPP model for a given data set is NP-complete.

Graph Construction • Point Processes
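The objective whose maximization is conjectured hard is simple to state: a DPP with PSD kernel $L$ over a ground set of size $N$ assigns a subset $S$ probability $\det(L_S)/\det(L+I)$, and maximum likelihood learning seeks the $L$ maximizing the product of these probabilities over the observed subsets. A minimal sketch of that log-likelihood (the function name and API are illustrative, not from the paper):

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under a DPP with PSD kernel L.

    P(S) = det(L_S) / det(L + I), where L_S is the principal submatrix
    of L indexed by S. Maximum-likelihood DPP learning asks for the L
    maximizing the sum of these log-probabilities over the data.
    """
    N = L.shape[0]
    _, log_norm = np.linalg.slogdet(L + np.eye(N))   # log det(L + I)
    ll = 0.0
    for S in subsets:                                # nonempty index sets
        idx = np.array(sorted(S))
        _, log_det_S = np.linalg.slogdet(L[np.ix_(idx, idx)])
        ll += log_det_S - log_norm
    return ll

# Tiny usage example (hypothetical data):
L = np.array([[1.0, 0.2],
              [0.2, 1.0]])                           # PSD kernel on 2 items
print(dpp_log_likelihood(L, [{0}, {0, 1}]))          # log P({0}) + log P({0,1})
```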

Communication-Efficient Distributed Learning of Discrete Distributions

no code implementations • NeurIPS 2017 • Ilias Diakonikolas, Elena Grigorescu, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt

For the case of structured distributions, such as k-histograms and monotone distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain tight upper and lower bounds in several regimes.

Density Estimation
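For context on the communication costs involved, the sketch below shows the naive baseline (each machine ships its full empirical histogram, costing one word per domain element) together with an illustrative dynamic program for fitting a $k$-histogram to the merged counts. Both functions are assumptions made for exposition, not the paper's protocols; the paper's point is that structured classes like $k$-histograms admit protocols far cheaper than this baseline.

```python
import numpy as np

def naive_protocol(samples_per_machine, domain_size):
    """Baseline the paper improves on: every machine sends its full
    empirical histogram to the coordinator, so communication per
    machine is O(domain_size) regardless of any structure."""
    merged = np.zeros(domain_size)
    for samples in samples_per_machine:   # samples: non-negative ints < domain_size
        merged += np.bincount(samples, minlength=domain_size)
    return merged / merged.sum()          # merged empirical distribution

def fit_k_histogram(p_hat, k):
    """Best k-piecewise-constant fit to p_hat in squared error, via an
    O(k * n^2) dynamic program (illustrative, not the paper's method)."""
    n = len(p_hat)
    prefix = np.concatenate([[0.0], np.cumsum(p_hat)])
    prefix2 = np.concatenate([[0.0], np.cumsum(np.asarray(p_hat) ** 2)])

    def seg_cost(i, j):                   # squared error of one flat piece on [i, j)
        s = prefix[j] - prefix[i]
        s2 = prefix2[j] - prefix2[i]
        return s2 - s * s / (j - i)

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0                        # zero pieces cover an empty prefix
    for pieces in range(1, k + 1):
        for j in range(1, n + 1):
            dp[pieces][j] = min(dp[pieces - 1][i] + seg_cost(i, j)
                                for i in range(j))
    return dp[k][n]                       # minimum squared error with k pieces
```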

Testing $k$-Monotonicity

no code implementations • 1 Sep 2016 • Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, Karl Wimmer

Our results include the following:

- We demonstrate a separation between testing $k$-monotonicity and testing monotonicity on the hypercube domain $\{0, 1\}^d$, for $k \geq 3$.
- We demonstrate a separation between testing and learning on $\{0, 1\}^d$, for $k = \omega(\log d)$: testing $k$-monotonicity can be performed with $2^{O(\sqrt{d} \cdot \log d \cdot \log(1/\varepsilon))}$ queries, while learning $k$-monotone functions requires $2^{\Omega(k \cdot \sqrt{d} \cdot 1/\varepsilon)}$ queries (Blais et al., RANDOM 2015).

Learning Theory
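For reference, $f: \{0, 1\}^d \to \{0, 1\}$ is $k$-monotone if it alternates between 0 and 1 at most $k$ times along every ascending chain of the hypercube (so $1$-monotone is the usual notion of monotone). The brute-force checker below is an illustrative reference implementation, exponential in $d$, unlike the testers in the paper; it computes the maximum number of alternations by dynamic programming over the hypercube's cover relation.

```python
from itertools import product

def is_k_monotone(f, d, k):
    """Check whether f: {0,1}^d -> {0,1} is k-monotone, i.e. alternates
    between 0 and 1 at most k times along every ascending chain.
    Brute force over all 2^d points (fine only for small d); this is a
    reference checker, not the paper's sublinear-query tester."""
    # alt[x] = max number of alternations over chains ending at x.
    alt = {}
    # Process points in order of increasing Hamming weight, so every
    # predecessor of x is handled before x.
    for x in sorted(product([0, 1], repeat=d), key=sum):
        best = 0
        for i in range(d):
            if x[i] == 1:
                y = x[:i] + (0,) + x[i + 1:]   # y is covered by x
                best = max(best, alt[y] + (f(x) != f(y)))
        alt[x] = best
    return max(alt.values()) <= k

# Example: parity flips on every edge of a maximal chain, so on 3 bits
# it is 3-monotone but not 2-monotone.
parity = lambda x: sum(x) % 2
assert is_k_monotone(parity, 3, 3) and not is_k_monotone(parity, 3, 2)
```

Restricting the maximum to chains of cover steps is enough, since refining a chain never decreases the number of alternations along it.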
