no code implementations • 23 Jul 2024 • Aryeh Kontorovich
In the symmetric case (sensitivity = specificity), reasonably tight bounds on the optimal error probability are known.
no code implementations • 13 Feb 2024 • Aryeh Kontorovich, Amichai Painsky
A variety of techniques are utilized and innovated upon, including Chernoff-type inequalities and empirical Bernstein bounds.
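For context, here is a minimal sketch of one standard empirical Bernstein bound, the Maurer-Pontil form; the paper innovates on inequalities of this type, and this is not necessarily the exact variant it uses (the function name is ours):

```python
import numpy as np

def empirical_bernstein_radius(x, delta):
    """One-sided Maurer-Pontil (2009, Thm. 4) empirical Bernstein radius for
    the mean of i.i.d. samples in [0, 1]: with prob. >= 1 - delta the true
    mean exceeds the sample mean by at most this much (use delta/2 for a
    two-sided interval)."""
    n = len(x)
    log_term = np.log(2.0 / delta)
    var = np.var(x, ddof=1)              # unbiased sample variance
    return np.sqrt(2.0 * var * log_term / n) + 7.0 * log_term / (3.0 * (n - 1))

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=1000)        # synthetic [0, 1]-valued data
print(empirical_bernstein_radius(x, delta=0.05))
```

The variance-dependent first term is what makes bounds of this kind much tighter than Hoeffding-style bounds on low-variance data.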
1 code implementation • 3 Oct 2023 • Matan Levi, Aryeh Kontorovich
The ability to achieve such near-optimal natural accuracy, while maintaining a significant level of robustness, makes our method applicable to real-world applications where natural accuracy is at a premium.
no code implementations • 29 Sep 2023 • Steve Hanneke, Aryeh Kontorovich, Guy Kornowski
While the recent work of Hanneke et al. (2023) established tight uniform convergence bounds for average-smooth functions in the realizable case and provided a computationally efficient realizable learning algorithm, both of these results currently lack analogs in the general agnostic (i.e., noisy) case.
no code implementations • 8 Dec 2022 • Olivier Bousquet, Haim Kaplan, Aryeh Kontorovich, Yishay Mansour, Shay Moran, Menachem Sadigurschi, Uri Stemmer
We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP).
no code implementations • 7 Feb 2022 • Dan Tsir Cohen, Aryeh Kontorovich
We propose an efficient algorithm for learning mappings between two metric spaces, $\mathcal{X}$ and $\mathcal{Y}$.
no code implementations • 21 Jan 2022 • Aryeh Kontorovich, Menachem Sadigurschi, Uri Stemmer
The vast majority of the work on adaptive data analysis focuses on the case where the samples in the dataset are independent.
no code implementations • 23 Nov 2021 • László Györfi, Aryeh Kontorovich, Roi Weiss
Given i.i.d. data we identify an optimal tree $T^*$ and efficiently construct a tree density estimate $f_n$ such that, without any regularity conditions on the density $f$, one has $\lim_{n\to \infty} \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|d\boldsymbol x=0$ a.s. For Lipschitz $f$ with bounded support, $\mathbb E \left\{ \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|d\boldsymbol x\right\}=O\big(n^{-1/4}\big)$, a dimension-free rate.
no code implementations • 10 Oct 2021 • Idan Attias, Aryeh Kontorovich
We provide estimates on the fat-shattering dimension of aggregation rules of real-valued function classes.
1 code implementation • 1 Apr 2021 • Matan Levi, Idan Attias, Aryeh Kontorovich
We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation that is both robust and domain invariant.
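For orientation only, here is the standard projected-gradient (PGD) attack that adversarial training methods in this family use to generate training-time perturbations. This is not the DIAL objective itself (which adds a domain-discrimination term on top of adversarial training), and all names and hyperparameters are ours:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD (Madry et al.): ascend the loss in small
    signed-gradient steps, projecting back into the eps-ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```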
no code implementations • 9 Nov 2020 • Steve Hanneke, Aryeh Kontorovich
We analyze a family of supervised learning algorithms based on sample compression schemes that are stable, in the sense that removing points from the training set which were not selected for the compression set does not alter the resulting classifier.
no code implementations • 19 Oct 2020 • Ariel Avital, Klim Efremenko, Aryeh Kontorovich, David Toplin, Bo Waggoner
We propose a non-parametric variant of binary regression, where the hypothesis is regularized to be a Lipschitz function taking a metric space to $[0, 1]$ and the loss is logarithmic.
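To make the setting concrete, a minimal sketch (assuming a generic metric and a McShane-type extension for prediction; names are ours, and the paper's actual predictor may differ) of predicting with an $L$-Lipschitz hypothesis into $[0, 1]$ and scoring it with the logarithmic loss:

```python
import numpy as np

def lipschitz_predict(x_new, xs, vals, L, metric):
    """McShane-style L-Lipschitz extension of observed values, clipped to
    [0, 1]; agrees with the sample when the labels are themselves L-Lipschitz."""
    pred = min(v + L * metric(x_new, xi) for xi, v in zip(xs, vals))
    return float(np.clip(pred, 0.0, 1.0))

def log_loss(p, y):
    """Logarithmic loss of a [0, 1]-valued prediction p against y in {0, 1}."""
    p = np.clip(p, 1e-12, 1 - 1e-12)     # guard against log(0)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)))
```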
no code implementations • 13 Jul 2020 • Yair Ashlagi, Lee-Ad Gottlieb, Aryeh Kontorovich
Rather than using the Lipschitz constant as the regularizer, we define a local slope at each point and gauge the function complexity as the average of these values.
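One plausible formalization of this average-slope complexity, as a sketch only (the paper's precise definition may differ; names are ours):

```python
import numpy as np

def average_local_slope(X, y, metric):
    """Local slope at point i: the largest |y_i - y_j| / d(x_i, x_j) over
    j != i; the complexity proxy is the average of these values. Assumes
    all points are distinct, so the metric never vanishes."""
    n = len(y)
    slopes = [
        max(abs(y[i] - y[j]) / metric(X[i], X[j]) for j in range(n) if j != i)
        for i in range(n)
    ]
    return float(np.mean(slopes))
```

By contrast, the global Lipschitz constant is the maximum of these slopes, which a single steep pair of points can inflate.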
no code implementations • 30 Mar 2020 • Daniel Berend, Aryeh Kontorovich, Lev Reyzin, Thomas Robinson
We tackle some fundamental problems in probability theory on corrupted random processes on the integer line.
no code implementations • 5 Feb 2020 • Lee-Ad Gottlieb, Eran Kaufman, Aryeh Kontorovich, Gabriel Nivasch, Ofir Pele
We propose a new embedding method which is particularly well-suited for settings where the sample size greatly exceeds the ambient dimension.
no code implementations • 4 Feb 2020 • Lee-Ad Gottlieb, Eran Kaufman, Aryeh Kontorovich
We consider the problem of cost sensitive multiclass classification, where we would like to increase the sensitivity of an important class at the expense of a less important one.
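For intuition, the decision layer of any cost-sensitive classifier reduces to the standard expected-cost minimization below; this is a generic sketch of that decision rule, not the paper's margin-based algorithm:

```python
import numpy as np

def cost_sensitive_predict(prob, cost):
    """prob[i, k] = estimated P(y = k | x_i); cost[k, c] = cost of predicting
    c when the truth is k. Pick the prediction minimizing expected cost."""
    expected_cost = prob @ cost          # (n, classes): expected cost per choice
    return np.argmin(expected_cost, axis=1)
```

Raising the penalties in an important class's row of the cost matrix increases that class's sensitivity at the expense of the others.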
no code implementations • 7 Oct 2019 • Klim Efremenko, Aryeh Kontorovich, Moshe Noivirt
Research on nearest-neighbor methods tends to focus, somewhat dichotomously, on either the statistical or the computational aspects -- on, say, Bayes consistency and rates of convergence, or on techniques for speeding up the proximity search.
no code implementations • 24 Jun 2019 • Steve Hanneke, Aryeh Kontorovich, Sivan Sabato, Roi Weiss
This is the first learning algorithm known to enjoy this property; by comparison, the $k$-NN classifier and its variants are not generally universally Bayes-consistent, except under additional structural assumptions, such as an inner product, a norm, finite dimension, or a Besicovitch-type property.
no code implementations • 28 May 2019 • Hanan Zaichyk, Armin Biess, Aryeh Kontorovich, Yury Makarychev
We introduce a framework for performing regression between two Hilbert spaces.
no code implementations • 1 Feb 2019 • Geoffrey Wolfer, Aryeh Kontorovich
Furthermore, even if an eigenvalue perturbation analysis with better dependence on $d$ were available, in the non-reversible case the connection between the spectral gap and the mixing time is not nearly as straightforward as in the reversible case.
no code implementations • 31 Jan 2019 • Geoffrey Wolfer, Aryeh Kontorovich
We exhibit an efficient procedure for testing, based on a single long state sequence, whether an unknown Markov chain is identical to or $\varepsilon$-far from a given reference chain.
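A naive plug-in version of such a test, for intuition only (the paper's statistic and its sample-complexity guarantees are sharper; the threshold and names here are ours):

```python
import numpy as np

def empirical_kernel(seq, d):
    """Row-normalized transition counts from one state sequence over {0..d-1}."""
    counts = np.zeros((d, d))
    for s, t in zip(seq[:-1], seq[1:]):
        counts[s, t] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                # unvisited states: leave row at zero
    return counts / rows

def identity_test(seq, P_ref, eps):
    """Accept 'identical' iff every empirical row is within eps/2 of the
    reference row in total variation distance."""
    P_hat = empirical_kernel(seq, P_ref.shape[0])
    tv_rows = 0.5 * np.abs(P_hat - P_ref).sum(axis=1)
    return bool(tv_rows.max() <= eps / 2)
```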
no code implementations • 4 Oct 2018 • Idan Attias, Aryeh Kontorovich, Yishay Mansour
For binary classification, the algorithm of Feige et al. (2015) uses a regret minimization algorithm and an ERM oracle as a black box; we adapt it for the multiclass and regression settings.
no code implementations • 3 Oct 2018 • Idan Attias, Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
For the $\ell_2$ loss, does every function class admit an approximate compression scheme of polynomial size in the fat-shattering dimension?
no code implementations • 13 Sep 2018 • Geoffrey Wolfer, Aryeh Kontorovich
We investigate the statistical complexity of estimating the parameters of a discrete-state Markov chain kernel from a single long sequence of state observations.
no code implementations • NeurIPS 2018 • Lee-Ad Gottlieb, Eran Kaufman, Aryeh Kontorovich, Gabriel Nivasch
We present an improved algorithm for {\em quasi-properly} learning convex polyhedra in the realizable PAC setting from data with a margin.
no code implementations • 21 May 2018 • Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi
We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016).
no code implementations • 21 May 2018 • Steve Hanneke, Aryeh Kontorovich
We establish a tight characterization of the worst-case rates for the excess risk of agnostic learning with sample compression schemes and for uniform convergence for agnostic sample compression schemes.
no code implementations • NeurIPS 2015 • Daniel Hsu, Aryeh Kontorovich, David A. Levin, Yuval Peres, Csaba Szepesvári
The interval is constructed around the relaxation time $t_{\text{relax}} = 1/\gamma$, where $\gamma$ is the absolute spectral gap; $t_{\text{relax}}$ is strongly related to the mixing time, and the width of the interval converges to zero roughly at a $1/\sqrt{n}$ rate, where $n$ is the length of the sample path.
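A rough sketch of the plug-in point estimate such an interval is built around, assuming a reversible chain in which every state is visited (the paper's contribution is the fully empirical confidence interval, which this sketch omits; names are ours):

```python
import numpy as np

def relaxation_time_plugin(seq, d):
    """Estimate t_relax = 1/gamma from one sample path: build the empirical
    kernel, symmetrize it with the empirical visit frequencies, and read off
    the absolute spectral gap gamma."""
    counts = np.zeros((d, d))
    for s, t in zip(seq[:-1], seq[1:]):
        counts[s, t] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    pi = counts.sum(axis=1) / counts.sum()                # visit frequencies
    L = np.sqrt(pi)[:, None] * P / np.sqrt(pi)[None, :]   # symmetric if reversible
    eigs = np.sort(np.abs(np.linalg.eigvalsh((L + L.T) / 2)))
    gamma = 1.0 - eigs[-2]                                # absolute spectral gap
    return 1.0 / gamma
```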
1 code implementation • 29 May 2017 • Eyal Gutflaish, Aryeh Kontorovich, Sivan Sabato, Ofer Biller, Oded Sofer
We learn a low-rank stationary model from the training data, and then fit a regression model for predicting the expected likelihood score of normal access patterns in the future.
no code implementations • NeurIPS 2017 • Aryeh Kontorovich, Sivan Sabato, Roi Weiss
We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm.
no code implementations • 29 Jun 2016 • Aryeh Kontorovich, Iosif Pinelis
Based on this result, an exact asymptotic lower bound on the minimax EER is provided.
no code implementations • NeurIPS 2016 • Aryeh Kontorovich, Sivan Sabato, Ruth Urner
We propose a pool-based non-parametric active learning algorithm for general metric spaces, called MArgin Regularized Metric Active Nearest Neighbor (MARMANN), which outputs a nearest-neighbor classifier.
no code implementations • NeurIPS 2015 • Daniel Hsu, Aryeh Kontorovich, Csaba Szepesvári
The interval is constructed around the relaxation time $t_{\text{relax}}$, which is strongly related to the mixing time, and the width of the interval converges to zero roughly at a $1/\sqrt{n}$ rate, where $n$ is the length of the sample path.
no code implementations • 22 Feb 2015 • Lee-Ad Gottlieb, Aryeh Kontorovich
We initiate the rigorous study of classification in semimetric spaces, which are point sets with a distance function that is non-negative and symmetric, but need not satisfy the triangle inequality.
no code implementations • 1 Jul 2014 • Aryeh Kontorovich, Roi Weiss
We show that a simple modification of the 1-nearest neighbor classifier yields a strongly Bayes consistent learner.
no code implementations • NeurIPS 2014 • Lee-Ad Gottlieb, Aryeh Kontorovich, Pinhas Nisnevitch
We present the first sample compression algorithm for nearest neighbors with non-trivial performance guarantees.
no code implementations • 30 Jan 2014 • Aryeh Kontorovich, Roi Weiss
We prove generalization bounds that match the state of the art in sample size $n$ and significantly improve the dependence on the number of classes $k$.
no code implementations • NeurIPS 2014 • Daniel Berend, Aryeh Kontorovich
We revisit the classical decision-theoretic problem of weighted expert voting from a statistical learning perspective.
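The classical optimum in this problem is the Nitzan-Paroush rule: weight each expert by the log-odds of their competence. A minimal sketch, assuming the competences are known exactly (estimating them from data is where the statistical learning analysis enters):

```python
import numpy as np

def weighted_expert_vote(votes, p):
    """Nitzan-Paroush weighted majority: votes in {-1, +1}, with
    p[i] = P(expert i is correct); optimal when the p[i] are known."""
    w = np.log(p / (1.0 - p))            # log-odds weights
    return int(np.sign(w @ votes))
```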
no code implementations • NeurIPS 2013 • Cosma Rohilla Shalizi, Aryeh Kontorovich
We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example).
no code implementations • 4 Sep 2013 • Aryeh Kontorovich
We prove an extension of McDiarmid's inequality for metric spaces with unbounded diameter.
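For reference, the classical bounded-differences form being extended states that if changing the $i$-th argument of $f$ moves its value by at most $c_i$, then
$$\Pr\bigl(|f(X_1,\dots,X_n) - \mathbb{E}f(X_1,\dots,X_n)| \ge t\bigr) \le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\right);$$
the unbounded-diameter setting replaces the finite $c_i$ with a subgaussian-type control on the metric.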
no code implementations • 11 Jun 2013 • Lee-Ad Gottlieb, Aryeh Kontorovich, Robert Krauthgamer
We design a new algorithm for classification in general metric spaces, whose runtime and accuracy depend on the doubling dimension of the data points, and can thus achieve superior classification performance in many common scenarios.
no code implementations • 12 Feb 2013 • Lee-Ad Gottlieb, Aryeh Kontorovich, Robert Krauthgamer
We study adaptive data-dependent dimensionality reduction in the context of supervised learning in general metric spaces.
no code implementations • 18 Nov 2011 • Lee-Ad Gottlieb, Aryeh Kontorovich, Robert Krauthgamer
We present a framework for performing efficient regression in general metric spaces.