Search Results for author: Daniel Kane

Found 16 papers, 2 papers with code

SQ Lower Bounds for Non-Gaussian Component Analysis with Weaker Assumptions

no code implementations NeurIPS 2023 Ilias Diakonikolas, Daniel Kane, Lisheng Ren, Yuxin Sun

In particular, we prove near-optimal SQ lower bounds for NGCA assuming only the moment-matching condition.
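For readers unfamiliar with the setup, here is the standard statement of the NGCA hypothesis testing problem and the moment-matching condition, in our own notation (not quoted from the paper):

```latex
% Non-Gaussian Component Analysis (NGCA), hypothesis testing version.
% Given a known univariate distribution $A$, distinguish between
%   H_0: the samples are drawn from $N(0, I_n)$, and
%   H_1: the samples are drawn from $P_A^v$ for some hidden unit vector
%        $v$, where $P_A^v$ has marginal $A$ along $v$ and is standard
%        Gaussian on the orthogonal complement of $v$.
% The moment-matching condition requires only that the low-degree
% moments of $A$ agree with those of the standard Gaussian:
\[
  \operatorname{E}_{a \sim A}\!\left[a^{k}\right]
  = \operatorname{E}_{g \sim N(0,1)}\!\left[g^{k}\right],
  \qquad k = 1, \dots, m .
\]
```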

Exponential Hardness of Reinforcement Learning with Linear Function Approximation

no code implementations 25 Feb 2023 Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz

The rewards in this game are chosen such that if the learner achieves large reward, then the learner's actions can be used to simulate solving a variant of 3-SAT where (a) each variable shows up in a bounded number of clauses, and (b) if an instance has no solutions, then it also has no solutions that satisfy more than a $(1-\epsilon)$-fraction of the clauses.
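As a concrete illustration of the gap notion in (b), here is a small Python sketch (ours, not from the paper) that computes the fraction of clauses a given assignment satisfies:

```python
# Illustrative sketch (not from the paper): compute the fraction of
# 3-SAT clauses satisfied by an assignment. The hardness result uses
# instances whose best satisfying fraction is either 1 or below 1 - eps.

def satisfied_fraction(clauses, assignment):
    """clauses: list of 3-tuples of nonzero ints; literal k means
    variable |k| appears positively if k > 0, negated if k < 0.
    assignment: dict mapping variable index -> bool."""
    satisfied = sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )
    return satisfied / len(clauses)

# Example: (x1 or x2 or not x3) and (not x1 or x3 or x2)
clauses = [(1, 2, -3), (-1, 3, 2)]
print(satisfied_fraction(clauses, {1: True, 2: False, 3: True}))  # 1.0
```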

Learning Theory reinforcement-learning +1

Coresets for Data Discretization and Sine Wave Fitting

no code implementations 6 Mar 2022 Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman

The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single sine wave, e.g. $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_i c)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
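A minimal Python sketch of this objective (our own illustration: we take $\lambda \equiv 0$ and search $C = [N]$ by brute force; the paper's contribution is a coreset that avoids exactly this cost):

```python
import numpy as np

# Sketch of the sine-fitting objective from the abstract. Assumptions:
# lambda is identically zero and C = [N] is searched exhaustively.

def cost(P, c, N):
    """cost(P, c) = sum_i sin^2(2*pi/N * p_i * c)."""
    P = np.asarray(P, dtype=float)
    return np.sum(np.sin(2 * np.pi * P * c / N) ** 2)

def best_frequency(P, N):
    """Brute-force minimizer of cost(P, c) over c in {1, ..., N}."""
    candidates = np.arange(1, N + 1)
    costs = [cost(P, c, N) for c in candidates]
    return candidates[int(np.argmin(costs))]

P = [3, 6, 9, 12]          # toy stream of points in [N]
N = 12
c_star = best_frequency(P, N)
print(c_star, cost(P, c_star, N))   # c = 4 drives every term to zero
```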

Anomaly Detection

Computational-Statistical Gaps in Reinforcement Learning

no code implementations 11 Feb 2022 Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan

In this work, we make progress on this open problem by presenting the first computational lower bound for RL with linear function approximation: unless NP=RP, no randomized polynomial-time algorithm exists for deterministic-transition MDPs with a constant number of actions and linear optimal value functions.

reinforcement-learning Reinforcement Learning (RL)

Boosting in the Presence of Massart Noise

no code implementations 14 Jun 2021 Ilias Diakonikolas, Russell Impagliazzo, Daniel Kane, Rex Lei, Jessica Sorrell, Christos Tzamos

Our upper and lower bounds characterize the complexity of boosting in the distribution-independent PAC model with Massart noise.
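For reference, the Massart noise condition referenced here, in standard notation (ours, not quoted from the paper):

```latex
% Massart (bounded) noise: the label of each example $x$ is flipped
% independently with probability $\eta(x)$ that may depend on $x$ but
% is uniformly bounded away from $1/2$:
\[
  \Pr\left[\, y \neq f^{*}(x) \mid x \,\right]
  = \eta(x) \;\le\; \eta \;<\; \tfrac{1}{2},
\]
% where $f^{*}$ is the target concept and $\eta$ is the noise bound.
```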

Bounded Memory Active Learning through Enriched Queries

no code implementations 9 Feb 2021 Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz

The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.

Active Learning

Highway: Efficient Consensus with Flexible Finality

no code implementations 6 Jan 2021 Daniel Kane, Andreas Fackler, Adam Gągol, Damian Straszak, Vlad Zamfir

We propose Highway, a new consensus protocol that is safe and live in the classical partially synchronous BFT model, while at the same time offering practical improvements over existing solutions.

Distributed, Parallel, and Cluster Computing Cryptography and Security

Robustly Learning any Clusterable Mixture of Gaussians

no code implementations 13 May 2020 Ilias Diakonikolas, Samuel B. Hopkins, Daniel Kane, Sushrut Karmalkar

The key ingredients of this proof are a novel use of SoS-certifiable anti-concentration and a new characterization of pairs of Gaussians with small (dimension-independent) overlap in terms of their parameter distance.

Clustering

Noise-tolerant, Reliable Active Classification with Comparison Queries

no code implementations 15 Jan 2020 Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan

With the explosion of massive, widely available unlabeled data in recent years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice.

Active Learning Classification +1

Private Testing of Distributions via Sample Permutations

no code implementations NeurIPS 2019 Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, Ronitt Rubinfeld

In this paper, we use the framework of property testing to design differentially private algorithms that test properties of the distribution from which the data is drawn.

vqSGD: Vector Quantized Stochastic Gradient Descent

no code implementations 18 Nov 2019 Venkata Gandikota, Daniel Kane, Raj Kumar Maity, Arya Mazumdar

In this work, we present a family of vector quantization schemes \emph{vqSGD} (Vector-Quantized Stochastic Gradient Descent) that provide an asymptotic reduction in the communication cost with convergence guarantees in first-order distributed optimization.
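A hedged sketch of an unbiased vector quantizer in this spirit (our simplification for illustration, not necessarily the paper's exact point set): write the gradient as a convex combination of scaled cross-polytope vertices and transmit a single sampled vertex, so only an index, a sign, and one scalar are communicated.

```python
import numpy as np

# Illustrative unbiased quantizer (ours, not the paper's scheme): g is a
# convex combination of the vertices {± ||g||_1 * e_i}; sampling a vertex
# with probability |g_i| / ||g||_1 yields E[estimate] = g exactly.

def quantize(g, rng):
    """Return (index, sign, scale) encoding one sampled vertex."""
    l1 = np.abs(g).sum()
    probs = np.abs(g) / l1              # convex-combination weights
    i = rng.choice(len(g), p=probs)     # sample a coordinate
    return i, np.sign(g[i]), l1

def dequantize(i, sign, scale, dim):
    """Reconstruct the sampled vertex; unbiased for g by construction."""
    v = np.zeros(dim)
    v[i] = sign * scale
    return v

rng = np.random.default_rng(0)
g = np.array([0.5, -1.5, 1.0])
est = np.mean([dequantize(*quantize(g, rng), dim=3) for _ in range(20000)],
              axis=0)
print(est)  # close to g: averaging many quantized copies recovers g
```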

Distributed Optimization Quantization

The Optimal Approximation Factor in Density Estimation

no code implementations 10 Feb 2019 Olivier Bousquet, Daniel Kane, Shay Moran

We complement and extend this result by showing that: (i) the factor 3 cannot be improved if one restricts the algorithm to output a density from $\mathcal{Q}$, and (ii) if one allows the algorithm to output arbitrary densities (e.g.\ a mixture of densities from $\mathcal{Q}$), then the approximation factor can be reduced to 2, which is optimal.
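In standard notation (ours, not quoted from the paper), the approximation-factor guarantee at issue is:

```latex
% An algorithm achieves approximation factor $\alpha$ if, given samples
% from an unknown density $p$, it outputs $\hat{q}$ satisfying
\[
  \mathrm{TV}(p, \hat{q})
  \;\le\; \alpha \cdot \min_{q \in \mathcal{Q}} \mathrm{TV}(p, q)
  \;+\; \varepsilon .
\]
% The result: $\alpha = 3$ is optimal when $\hat{q}$ must lie in
% $\mathcal{Q}$, and $\alpha = 2$ is optimal when $\hat{q}$ may be an
% arbitrary density, e.g.\ a mixture of densities from $\mathcal{Q}$.
```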

Density Estimation

Robust polynomial regression up to the information theoretic limit

no code implementations 10 Aug 2017 Daniel Kane, Sushrut Karmalkar, Eric Price

We consider the problem of robust polynomial regression, where one receives samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y = p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.
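A naive baseline in this model (our sketch, not the paper's algorithm, which attains the information-theoretic limit) is to fit under an outlier-robust loss such as least absolute deviations:

```python
import numpy as np
from scipy.optimize import minimize

# Naive robust baseline (ours, not the paper's method): fit a degree-d
# polynomial by minimizing the L1 loss, which tolerates a small fraction
# rho of arbitrary outliers far better than least squares does.

def fit_poly_l1(x, y, degree):
    V = np.vander(x, degree + 1)                      # design matrix
    coeffs0 = np.linalg.lstsq(V, y, rcond=None)[0]    # L2 warm start
    loss = lambda c: np.abs(V @ c - y).sum()          # L1 objective
    return minimize(loss, coeffs0, method="Nelder-Mead").x

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 2 * x**2 - x + 0.1 * rng.standard_normal(60)      # sigma-level noise
y[::10] += 25.0                                       # rho-fraction outliers
print(fit_poly_l1(x, y, degree=2))                    # near [2, -1, 0]
```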

regression

Testing Bayesian Networks

no code implementations 9 Dec 2016 Clement Canonne, Ilias Diakonikolas, Daniel Kane, Alistair Stewart

This work initiates a systematic investigation of testing high-dimensional structured distributions by focusing on testing Bayesian networks -- the prototypical family of directed graphical models.

Robust Learning of Fixed-Structure Bayesian Networks

1 code implementation NeurIPS 2018 Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart

We investigate the problem of learning Bayesian networks in a robust model where an $\epsilon$-fraction of the samples are adversarially corrupted.

Robust Estimators in High Dimensions without the Computational Intractability

2 code implementations 21 Apr 2016 Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Ankur Moitra, Alistair Stewart

We study high-dimensional distribution learning in an agnostic setting where an adversary is allowed to arbitrarily corrupt an $\varepsilon$-fraction of the samples.
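One of the paper's key primitives is a spectral "filter" for robust mean estimation. The following is a heavily simplified sketch of that idea (our illustration, with ad hoc thresholds, not the paper's certified algorithm): while the empirical covariance has an abnormally large eigenvalue, remove the points with the largest projections onto the top direction.

```python
import numpy as np

# Simplified filter-style robust mean estimation (our sketch of the
# idea; the 1.5 eigenvalue threshold and quantile cut are ad hoc).

def filter_mean(X, eps, max_rounds=50):
    X = X.copy()
    for _ in range(max_rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] < 1.5:            # covariance looks Gaussian-like
            return mu
        v = eigvecs[:, -1]               # top variance direction
        scores = np.abs((X - mu) @ v)    # projection magnitudes
        keep = scores <= np.quantile(scores, 1 - eps)
        X = X[keep]                      # filter suspected outliers
    return X.mean(axis=0)

rng = np.random.default_rng(2)
inliers = rng.standard_normal((950, 5))           # N(0, I) samples
outliers = np.full((50, 5), 8.0)                  # eps-fraction corruption
X = np.vstack([inliers, outliers])
print(np.linalg.norm(filter_mean(X, eps=0.05)))   # small, unlike naive mean
```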

