1 code implementation • 31 May 2023 • Davin Choo, Themis Gouleakis, Arnab Bhattacharyya
When the advice is a DAG $G$, we design an adaptive search algorithm to recover $G^*$ whose intervention cost is at most $O(\max\{1, \log \psi\})$ times the cost for verifying $G^*$; here, $\psi$ is a distance measure between $G$ and $G^*$ that is upper bounded by the number of variables $n$, and is exactly 0 when $G=G^*$.
no code implementations • 1 Jun 2022 • Themis Gouleakis, Konstantinos Lakis, Golnoosh Shahkarami
Our algorithm for this enhanced setting obtains a 1.33 competitive ratio with perfect predictions while also being smooth and robust, beating the lower bound of 1.44 we show for our original prediction setting for the open variant.
1 code implementation • 17 Jul 2021 • Dimitris Fotakis, Evangelia Gergatsouli, Themis Gouleakis, Nikolas Patris
We prove that the competitive ratio decreases smoothly from sublogarithmic in the number of demands to constant, as the error, i.e., the total distance of the predicted locations to the optimal facility locations, decreases towards zero.
no code implementations • 22 Oct 2020 • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis
We provide a computationally and statistically efficient estimator for the classical problem of truncated linear regression, where the dependent variable $y = w^T x + \epsilon$ and its corresponding vector of covariates $x \in \mathbb{R}^k$ are only revealed if the dependent variable falls in some subset $S \subseteq \mathbb{R}$; otherwise the existence of the pair $(x, y)$ is hidden.
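The truncation model described above can be simulated directly. The sketch below (not the paper's estimator) generates truncated regression data under the assumed survival set $S = [0, \infty)$ and standard Gaussian noise, and fits a naive least-squares estimate, which is what truncation biases; all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5000
w_true = rng.normal(size=k)

# Hypothetical survival set S = [0, inf): the pair (x, y) is revealed
# only when the dependent variable y falls in S; otherwise it is hidden.
X, Y = [], []
while len(Y) < n:
    x = rng.normal(size=k)
    y = x @ w_true + rng.normal()  # y = w^T x + eps, eps ~ N(0, 1)
    if y >= 0:
        X.append(x)
        Y.append(y)
X, Y = np.array(X), np.array(Y)

# Naive OLS on the truncated sample ignores the selection effect and is
# therefore biased; correcting this bias is what an efficient truncated
# regression estimator must achieve.
w_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
```

Running an off-the-shelf regression on such data illustrates why a truncation-aware estimator is needed in the first place.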
no code implementations • 18 Oct 2020 • Ioannis Anagnostides, Themis Gouleakis, Ali Marashian
This work provides several new insights on the robustness of Kearns' statistical query framework against challenging label-noise models.
no code implementations • 14 Sep 2020 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, John Peebles, Eric Price
To illustrate the generality of our methods, we give optimal algorithms for testing collections of distributions and testing closeness with unequal sized samples.
no code implementations • 6 Jul 2019 • Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee
We then build on these lower bounds to give $\Omega(n/\log{n})$ lower bounds for testing monotonicity over a matching poset of size $n$ and significantly improved lower bounds over the hypercube poset.
no code implementations • NeurIPS 2019 • Ilias Diakonikolas, Themis Gouleakis, Christos Tzamos
The goal is to find a hypothesis $h$ that minimizes the misclassification error $\mathbf{Pr}_{(\mathbf{x}, y) \sim \mathcal{D}} \left[ h(\mathbf{x}) \neq y \right]$.
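The misclassification error above is simply the probability that the hypothesis disagrees with the label; on a finite sample it can be estimated empirically. A minimal sketch (names and the toy hypothesis are illustrative, not from the paper):

```python
import numpy as np

def misclassification_error(h, X, y):
    """Empirical estimate of Pr_{(x, y) ~ D}[h(x) != y] over a labeled sample."""
    return np.mean(h(X) != y)

# Toy check: a constant hypothesis on a perfectly balanced sample
# misclassifies exactly half the points.
X = np.arange(10).reshape(-1, 1)
y = np.array([1, -1] * 5)
h = lambda X: np.ones(len(X), dtype=int)
err = misclassification_error(h, X, y)  # 0.5
```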
no code implementations • 11 Jun 2019 • Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao
We study distribution testing with communication and memory constraints in the following computational models: (1) The {\em one-pass streaming model} where the goal is to minimize the sample complexity of the protocol subject to a memory constraint, and (2) A {\em distributed model} where the data samples reside at multiple machines and the goal is to minimize the communication cost of the protocol.
no code implementations • 11 Sep 2018 • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis
We provide an efficient algorithm for the classical problem, going back to Galton, Pearson, and Fisher, of estimating, with arbitrary accuracy, the parameters of a multivariate normal distribution from truncated samples.
no code implementations • 9 Aug 2017 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n$, $\epsilon$, and $\delta$.
no code implementations • 11 Nov 2016 • Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm.
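A standard building block for uniformity testing in the $\ell_2$ setting is the collision statistic: the fraction of sample pairs that coincide estimates $\|p\|_2^2$, which equals its minimum $1/n$ exactly when $p$ is uniform. The sketch below is a generic illustration of this idea, not the specific testers from the paper:

```python
import numpy as np
from collections import Counter

def collision_statistic(sample):
    """Fraction of ordered pairs of samples that collide.

    Its expectation is ||p||_2^2 for the underlying distribution p,
    which is minimized (= 1/n) precisely when p is uniform on n elements.
    """
    m = len(sample)
    counts = Counter(sample)
    collisions = sum(c * (c - 1) for c in counts.values())
    return collisions / (m * (m - 1))

rng = np.random.default_rng(0)
n = 100
uniform_sample = rng.integers(0, n, size=5000)
stat = collision_statistic(uniform_sample)  # close to 1/n = 0.01
```

A tester then accepts when the statistic is close to $1/n$ and rejects when it exceeds a threshold calibrated to the $\ell_2$ distance parameter.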
no code implementations • 24 Apr 2015 • Clément Canonne, Themis Gouleakis, Ronitt Rubinfeld
We then focus on the question of whether algorithms for sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions.