no code implementations • NeurIPS 2023 • Ilias Diakonikolas, Daniel Kane, Lisheng Ren, Yuxin Sun
In particular, we prove near-optimal SQ lower bounds for NGCA assuming only the moment-matching condition.
no code implementations • 25 Feb 2023 • Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz
The rewards in this game are chosen such that if the learner achieves large reward, then the learner's actions can be used to simulate solving a variant of 3-SAT, where (a) each variable shows up in a bounded number of clauses, and (b) if an instance has no solutions, then it also has no solutions that satisfy more than a $(1-\epsilon)$-fraction of clauses.
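For concreteness, the gap condition in (b) is easy to check programmatically. Below is a minimal sketch (the clause representation and names are illustrative, not from the paper) that computes the fraction of clauses an assignment satisfies:

```python
# Sketch: fraction of satisfied clauses in a 3-SAT instance.
# A clause is a tuple of signed literals: (1, -2, 3) encodes
# (x1 OR NOT x2 OR x3); `assignment` maps variable index -> bool.
# Representation is illustrative, not taken from the paper.

def satisfied_fraction(clauses, assignment):
    def literal_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    satisfied = sum(any(literal_true(l) for l in c) for c in clauses)
    return satisfied / len(clauses)

# In the hard instances, either some assignment satisfies every clause,
# or no assignment exceeds a (1 - eps) fraction.
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
print(satisfied_fraction(clauses, {1: True, 2: False, 3: True}))
```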
no code implementations • 6 Mar 2022 • Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman
The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single frequency $\sin$, e.g. $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2\left(\frac{2\pi}{N} p_i c\right)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
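The objective above can be evaluated directly and, for a finite feasible set $C$, minimized by brute force over all candidate frequencies. A minimal sketch of the objective itself (the concrete data and the trivial regularizer are illustrative placeholders):

```python
import numpy as np

# Sketch: evaluate the sine-fitting cost from the abstract and minimize
# it by brute force over a finite feasible set C. Data and regularizer
# below are illustrative placeholders.

def sine_cost(P, c, N):
    """cost(P, c) = sum_i sin^2(2*pi*p_i*c / N)."""
    P = np.asarray(P, dtype=float)
    return float(np.sum(np.sin(2 * np.pi * P * c / N) ** 2))

def best_frequency(P, C, N, lam=lambda c: 0.0):
    """argmin over c in C of cost(P, c) + lam(c), by exhaustive search."""
    return min(C, key=lambda c: sine_cost(P, c, N) + lam(c))

N = 1024
P = [0, 128, 256, 384]  # points on a coarse grid inside [N]
c = best_frequency(P, C=range(1, N), N=N)
print(c, sine_cost(P, c, N))  # a frequency whose sine vanishes at every p_i
```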
no code implementations • 11 Feb 2022 • Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan
In this work, we make progress on this open problem by presenting the first computational lower bound for RL with linear function approximation: unless NP=RP, no randomized polynomial time algorithm exists for deterministic transition MDPs with a constant number of actions and linear optimal value functions.
no code implementations • 14 Jun 2021 • Ilias Diakonikolas, Russell Impagliazzo, Daniel Kane, Rex Lei, Jessica Sorrell, Christos Tzamos
Our upper and lower bounds characterize the complexity of boosting in the distribution-independent PAC model with Massart noise.
no code implementations • 9 Feb 2021 • Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz
The explosive growth of easily-accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.
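As a concrete illustration of the paradigm (not the specific algorithms analyzed in the paper), uncertainty sampling is the classical selection rule: query the pool point the current model is least sure about. A minimal sketch, assuming scikit-learn and a synthetic labeling oracle:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: pool-based active learning with uncertainty sampling.
# Generic illustration of the paradigm only; `oracle` stands in for
# the expensive human labeler the abstract alludes to.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

def oracle(i):                                  # hypothetical labeler
    return int(X[i, 0] + X[i, 1] > 0)

labeled = list(rng.choice(len(X), size=10, replace=False))
y = {i: oracle(i) for i in labeled}

for _ in range(20):                             # small labeling budget
    clf = LogisticRegression().fit(X[labeled], [y[i] for i in labeled])
    margin = np.abs(clf.predict_proba(X)[:, 1] - 0.5)
    margin[labeled] = np.inf                    # never re-query a labeled point
    pick = int(np.argmin(margin))               # most uncertain pool point
    labeled.append(pick)
    y[pick] = oracle(pick)
```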
no code implementations • 6 Jan 2021 • Daniel Kane, Andreas Fackler, Adam Gągol, Damian Straszak, Vlad Zamfir
We propose Highway, a new consensus protocol that is safe and live in the classical partially synchronous BFT model, while at the same time offering practical improvements over existing solutions.
Distributed, Parallel, and Cluster Computing • Cryptography and Security
no code implementations • 13 May 2020 • Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar
The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.
no code implementations • 15 Jan 2020 • Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
With the explosion of massive, widely available unlabeled data in recent years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice.
no code implementations • NeurIPS 2019 • Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, Ronitt Rubinfeld
In this paper, we use the framework of property testing to design algorithms that test properties of the distribution from which the data is drawn, subject to differential privacy.
3 code implementations • NeurIPS 2019 • Ilias Diakonikolas, Sushrut Karmalkar, Daniel Kane, Eric Price, Alistair Stewart
Specifically, we focus on the fundamental problems of robust sparse mean estimation and robust sparse PCA.
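For contrast with the paper's algorithms, the naive baseline for robust sparse mean estimation is simple to state: take a coordinate-wise median and hard-threshold to the top $k$ coordinates. A minimal sketch of that baseline (it does not achieve the paper's guarantees):

```python
import numpy as np

# Sketch: a naive baseline for robust sparse mean estimation, for
# contrast only. The coordinate-wise median resists an eps-fraction of
# outliers per coordinate; thresholding keeps the k largest entries.

def naive_robust_sparse_mean(samples, k):
    med = np.median(samples, axis=0)        # robust per-coordinate estimate
    keep = np.argsort(np.abs(med))[-k:]     # indices of the k largest entries
    out = np.zeros_like(med)
    out[keep] = med[keep]
    return out

rng = np.random.default_rng(1)
d, n, k = 100, 1000, 5
mu = np.zeros(d); mu[:k] = 3.0              # k-sparse true mean
samples = rng.normal(mu, 1.0, size=(n, d))
samples[: n // 20] = 50.0                   # a 5% fraction is corrupted
print(naive_robust_sparse_mean(samples, k)[:8])
```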
no code implementations • 18 Nov 2019 • Venkata Gandikota, Daniel Kane, Raj Kumar Maity, Arya Mazumdar
In this work, we present a family of vector quantization schemes \emph{vqSGD} (Vector-Quantized Stochastic Gradient Descent) that provide an asymptotic reduction in the communication cost with convergence guarantees in first-order distributed optimization.
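One classical construction gives the flavor of such schemes: transmit a single signed coordinate index together with the scalar $\|v\|_1$, which yields an unbiased estimate of $v$ at a cost of $O(\log d)$ bits plus one float instead of $d$ floats. The sketch below is in the spirit of vqSGD's cross-polytope scheme, not the paper's exact construction:

```python
import numpy as np

# Sketch: an unbiased one-sample vector quantizer in the spirit of
# vqSGD's cross-polytope scheme (not the paper's exact construction).
# The message is one coordinate index, a sign, and the scalar ||v||_1.

def quantize(v, rng):
    l1 = np.abs(v).sum()
    i = rng.choice(len(v), p=np.abs(v) / l1)   # pick i w.p. |v_i| / ||v||_1
    return i, np.sign(v[i]), l1                # the entire message

def dequantize(i, sign, l1, d):
    g = np.zeros(d)
    g[i] = sign * l1                           # E[g] = v: the estimate is unbiased
    return g

rng = np.random.default_rng(0)
v = rng.normal(size=8)
est = np.mean([dequantize(*quantize(v, rng), len(v)) for _ in range(20000)], axis=0)
print(np.allclose(est, v, atol=0.2))           # averages back to v
```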
no code implementations • 10 Feb 2019 • Olivier Bousquet, Daniel Kane, Shay Moran
We complement and extend this result by showing that: (i) the factor 3 cannot be improved if one restricts the algorithm to output a density from $\mathcal{Q}$, and (ii) if one allows the algorithm to output arbitrary densities (e.g., a mixture of densities from $\mathcal{Q}$), then the approximation factor can be reduced to 2, which is optimal.
no code implementations • 10 Aug 2017 • Daniel Kane, Sushrut Karmalkar, Eric Price
We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.
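A standard baseline for this contamination model is to replace least squares with a least-absolute-deviations ($\ell_1$) fit, which tolerates a constant fraction of arbitrary outliers; the paper's algorithms achieve sharper guarantees. A minimal sketch, assuming numpy/scipy:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: least-absolute-deviations polynomial fit as a robust baseline
# for the model above (not the paper's algorithm). Each y_i is within
# sigma of p(x_i), except with probability rho it is an arbitrary outlier.

rng = np.random.default_rng(0)
n, degree, sigma, rho = 200, 3, 0.1, 0.2
x = rng.uniform(-1, 1, n)
p_true = np.array([1.0, -2.0, 0.0, 3.0])          # coefficients, low to high degree
y = np.polyval(p_true[::-1], x) + rng.uniform(-sigma, sigma, n)
outliers = rng.random(n) < rho
y[outliers] = rng.uniform(-100, 100, outliers.sum())  # adversarial corruption

V = np.vander(x, degree + 1, increasing=True)      # Vandermonde design matrix
l1_loss = lambda c: np.abs(V @ c - y).sum()
c_hat = minimize(l1_loss, x0=np.zeros(degree + 1), method="Nelder-Mead").x
print(np.round(c_hat, 2))                          # close to p_true despite outliers
```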
no code implementations • 9 Dec 2016 • Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
This work initiates a systematic investigation of testing high-dimensional structured distributions by focusing on testing Bayesian networks -- the prototypical family of directed graphical models.
1 code implementation • NeurIPS 2018 • Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.
2 code implementations • 21 Apr 2016 • Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart
We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.
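This is the setting of the now-standard filtering approach: an $\varepsilon$-fraction of corruptions can only shift the empirical mean far if it also inflates the empirical variance in some direction, so one repeatedly prunes points that are extreme along the top eigenvector of the covariance. A minimal sketch of that idea (the thresholds are illustrative, not the paper's carefully calibrated ones):

```python
import numpy as np

# Sketch of the filtering idea for robust mean estimation: corruptions
# that move the mean must create a direction of unusually large
# variance; repeatedly prune points that are extreme along it.
# Thresholds are illustrative, not the paper's calibrated ones.

def filter_mean(X, max_rounds=50, var_threshold=1.5):
    X = np.array(X, dtype=float)
    for _ in range(max_rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= var_threshold:    # covariance looks Gaussian: done
            break
        v = eigvecs[:, -1]                  # direction of largest variance
        scores = np.abs((X - mu) @ v)
        X = X[scores <= 2 * np.sqrt(eigvals[-1])]  # drop extreme points
    return X.mean(axis=0)

rng = np.random.default_rng(0)
d, n, eps = 20, 2000, 0.1
X = rng.normal(size=(n, d))                 # true mean is the origin
X[: int(eps * n)] += 5.0                    # shift an eps-fraction far away
print(np.linalg.norm(filter_mean(X)))       # far below the naive mean's error
```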