no code implementations • 24 May 2022 • Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
In seminal work on DPPs in machine learning, Kulesza conjectured in his 2011 PhD thesis that the problem of finding a maximum-likelihood DPP model for a given data set is NP-complete.
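For context, the likelihood objective in question can be sketched as follows. In the standard L-ensemble parameterization, a DPP with symmetric PSD kernel $L$ assigns a subset $Y$ probability $\det(L_Y)/\det(L+I)$, and maximum-likelihood estimation asks for the kernel maximizing the product of these probabilities over the observed subsets. The pure-Python sketch below (not taken from the paper; `dpp_log_likelihood` and the tiny `det` helper are illustrative names) evaluates that objective for a fixed kernel — the paper concerns the hardness of *optimizing* over kernels.

```python
# Sketch: log-likelihood of observed subsets under an L-ensemble DPP.
# P(Y) = det(L_Y) / det(L + I) for a symmetric PSD kernel L.
# Pure Python with Laplace-expansion determinants; fine for tiny matrices only.
import math

def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def dpp_log_likelihood(L, samples):
    """Sum of log P(Y) over observed nonempty subsets Y (tuples of indices)."""
    n = len(L)
    # Normalizer: det(L + I), shared by every subset probability.
    LI = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    log_Z = math.log(det(LI))
    ll = 0.0
    for Y in samples:  # assume each observed Y is nonempty
        LY = [[L[i][j] for j in Y] for i in Y]  # principal submatrix L_Y
        ll += math.log(det(LY)) - log_Z
    return ll

# Toy example: a diagonal kernel, under which items appear independently.
L = [[2.0, 0.0], [0.0, 1.0]]
print(round(dpp_log_likelihood(L, [(0,), (0, 1)]), 4))  # → -2.1972
```

With the diagonal kernel above, $\det(L+I) = 6$ and both observed subsets have probability $1/3$, so the log-likelihood is $-2\log 3$.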
no code implementations • 11 Jun 2020 • Mahdi Cheraghchi, Elena Grigorescu, Brendan Juba, Karl Wimmer, Ning Xie
We introduce and study the model of list learning with attribute noise.
no code implementations • 1 Sep 2016 • Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, Karl Wimmer
Our results include the following:
- We demonstrate a separation between testing $k$-monotonicity and testing monotonicity, on the hypercube domain $\{0, 1\}^d$, for $k \geq 3$.
- We demonstrate a separation between testing and learning on $\{0, 1\}^d$, for $k = \omega(\log d)$: testing $k$-monotonicity can be performed with $2^{O(\sqrt d \cdot \log d \cdot \log(1/\varepsilon))}$ queries, while learning $k$-monotone functions requires $2^{\Omega(k \cdot \sqrt d \cdot 1/\varepsilon)}$ queries (Blais et al., RANDOM 2015).
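To make the property concrete: under one standard definition, $f$ is $k$-monotone iff there is no chain $x_1 \prec x_2 \prec \cdots \prec x_{k+1}$ in $\{0,1\}^d$ with $f(x_1) = 1$ and alternating values, so $1$-monotone coincides with monotone. The brute-force checker below (an illustrative sketch, not the paper's tester; `is_k_monotone` is a hypothetical name) decides the property exactly with $2^d$ evaluations via dynamic programming over the hypercube — the point of the testing results above is that far fewer queries suffice to *test* it.

```python
# Sketch: exact k-monotonicity check by DP over {0,1}^d (exponential time).
# best[x] = max number of value flips along a chain ending at x whose
# alternation starts at value 1; f is k-monotone iff max(best) <= k - 1.

def is_k_monotone(f, d, k):
    """f maps an integer bitmask in [0, 2^d) to 0/1."""
    NEG = -1  # marks "no chain starting at value 1 ends here"
    best = [0 if f(x) else NEG for x in range(1 << d)]
    # Process points in order of Hamming weight, so predecessors come first.
    for x in sorted(range(1 << d), key=lambda v: bin(v).count("1")):
        for i in range(d):
            if x & (1 << i):
                y = x ^ (1 << i)  # y = x with bit i cleared, so y < x
                if best[y] != NEG:
                    cand = best[y] + (1 if f(y) != f(x) else 0)
                    best[x] = max(best[x], cand)
    return max(best) <= k - 1

# Parity alternates on every step of a maximal chain, so it is far from
# monotone but is 3-monotone on 3 bits.
parity = lambda x: bin(x).count("1") % 2
print(is_k_monotone(parity, 3, 1))  # → False
print(is_k_monotone(parity, 3, 3))  # → True
```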
no code implementations • 21 May 2014 • Dana Dachman-Soled, Vitaly Feldman, Li-Yang Tan, Andrew Wan, Karl Wimmer
We study the notion of $\mathit{approximate}$ $\mathit{resilience}$ of Boolean functions, where we say that $f$ is $\alpha$-approximately $d$-resilient if $f$ is $\alpha$-close to a $[-1, 1]$-valued $d$-resilient function in $\ell_1$ distance.
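For reference, exact $d$-resilience has a clean Fourier characterization: a $\{-1,+1\}$-valued $f$ is $d$-resilient iff $\hat{f}(S) = 0$ for every $|S| \leq d$, i.e., $f$ is balanced and correlation-immune of order $d$. The brute-force sketch below checks that exact notion (it is not the paper's method, and `is_d_resilient` is an illustrative name); the paper studies the relaxed notion of $\ell_1$-closeness to a $[-1,1]$-valued $d$-resilient function, which this check does not compute.

```python
# Sketch: exact d-resilience check via Fourier coefficients (brute force).
# For f: {0,1}^n -> {-1,+1}, f is d-resilient iff hat{f}(S) = 0 for |S| <= d.
from itertools import combinations

def fourier_coefficient(f, n, S):
    r"""\hat{f}(S) = E_x[f(x) * chi_S(x)], x uniform; S is a bitmask."""
    total = 0
    for x in range(1 << n):
        chi = (-1) ** bin(x & S).count("1")  # character chi_S(x)
        total += f(x) * chi
    return total / (1 << n)

def is_d_resilient(f, n, d):
    """Check that every Fourier coefficient of degree at most d vanishes."""
    for size in range(d + 1):
        for bits in combinations(range(n), size):
            S = sum(1 << i for i in bits)
            if abs(fourier_coefficient(f, n, S)) > 1e-12:
                return False
    return True

# Parity of n bits is (n-1)-resilient: its only nonzero Fourier coefficient
# sits on the full coordinate set, which has degree n.
parity = lambda x: -1 if bin(x).count("1") % 2 else 1
print(is_d_resilient(parity, 4, 3))  # → True
```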