no code implementations • 21 Sep 2022 • Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson Zhou
We show that if the predictor is accurate, we can efficiently bypass these impossibility results and achieve a constant-factor approximation to the optimal solution, i.e., consistency.
no code implementations • 24 May 2022 • Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
In seminal work on DPPs in Machine Learning, Kulesza conjectured in his PhD Thesis (2011) that the problem of finding a maximum likelihood DPP model for a given data set is NP-complete.
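The objective behind Kulesza's conjecture is maximum-likelihood fitting of an L-ensemble DPP, where an observed subset $Y$ of a ground set has probability $P(Y) = \det(L_Y)/\det(L+I)$ for a PSD kernel $L$. The sketch below (pure Python, tiny made-up kernel; not the hard optimization itself, just the likelihood it optimizes) evaluates that probability and verifies it normalizes over all subsets:

```python
from itertools import combinations

def det(M):
    # Laplace expansion; fine for the tiny matrices in this illustration.
    n = len(M)
    if n == 0:
        return 1.0
    if n == 1:
        return M[0][0]
    return sum(((-1) ** j) * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def dpp_prob(L, Y):
    """P(Y) = det(L_Y) / det(L + I) for an L-ensemble DPP."""
    sub = [[L[i][j] for j in Y] for i in Y]
    n = len(L)
    L_plus_I = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
                for i in range(n)]
    return det(sub) / det(L_plus_I)

# A small symmetric, diagonally dominant (hence PSD) kernel on 3 items
# -- hypothetical example data, not from the paper.
L = [[2.0, 0.5, 0.0],
     [0.5, 1.0, 0.3],
     [0.0, 0.3, 1.5]]

# Sanity check: probabilities over all 2^3 subsets sum to 1,
# by the identity sum_Y det(L_Y) = det(L + I).
total = sum(dpp_prob(L, Y)
            for r in range(4)
            for Y in combinations(range(3), r))
```

Maximum-likelihood learning then asks for the $L$ maximizing the product of `dpp_prob(L, Y_i)` over observed subsets $Y_i$, which is the problem conjectured (and here shown) to be NP-complete.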
no code implementations • 27 Dec 2021 • Jeremiah Blocki, Elena Grigorescu, Tamalika Mukherjee
Clustering is an essential primitive in unsupervised machine learning.
no code implementations • 11 Jun 2020 • Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
We introduce and study the model of list learning with attribute noise.
no code implementations • NeurIPS 2017 • Ilias Diakonikolas, Elena Grigorescu, Jerry Li, Abhiram Natarajan, Krzysztof Onak, Ludwig Schmidt
For the case of structured distributions, such as k-histograms and monotone distributions, we design distributed learning algorithms that achieve significantly better communication guarantees than the naive ones, and obtain tight upper and lower bounds in several regimes.
no code implementations • 1 Sep 2016 • Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, Karl Wimmer
Our results include the following:
- We demonstrate a separation between testing $k$-monotonicity and testing monotonicity on the hypercube domain $\{0, 1\}^d$, for $k \geq 3$.
- We demonstrate a separation between testing and learning on $\{0, 1\}^d$, for $k = \omega(\log d)$: testing $k$-monotonicity can be performed with $2^{O(\sqrt{d} \cdot \log d \cdot \log(1/\varepsilon))}$ queries, while learning $k$-monotone functions requires $2^{\Omega(k \cdot \sqrt{d} \cdot 1/\varepsilon)}$ queries (Blais et al., RANDOM 2015).
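To make the property concrete: under the standard definition, $f\colon \{0,1\}^d \to \{0,1\}$ is $k$-monotone if there is no chain $x_1 \prec \dots \prec x_{k+1}$ on which $f$ alternates starting from $f(x_1) = 1$ (so $1$-monotone is ordinary monotonicity). A brute-force membership check (exponential in $d$, for illustration only; this is not the paper's sublinear tester) can be sketched as:

```python
from itertools import product

def is_k_monotone(f, d, k):
    """Brute-force check: f is k-monotone iff no ascending chain of k+1
    points alternates in value starting from f = 1. Exponential in d."""
    pts = sorted(product((0, 1), repeat=d), key=sum)  # by Hamming weight
    below = lambda y, x: y != x and all(a <= b for a, b in zip(y, x))
    # alt[x] = length of the longest alternating chain ending at x
    #          whose first point has f = 1 (0 if none exists).
    alt = {}
    for x in pts:
        best = 1 if f(x) == 1 else 0
        for y in pts:
            if below(y, x) and alt.get(y, 0) >= 1 and f(y) != f(x):
                best = max(best, alt[y] + 1)
        alt[x] = best
    return max(alt.values()) <= k

# Parity alternates along every maximal chain of {0,1}^3:
# it is 3-monotone but not 2-monotone.
parity = lambda x: sum(x) % 2
```

For example, `is_k_monotone(parity, 3, 2)` is `False` while `is_k_monotone(parity, 3, 3)` is `True`; the results above concern testing this property with far fewer queries than such exhaustive inspection.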