Search Results for author: Vasilis Kontonis

Found 18 papers, 0 papers with code

Super Non-singular Decompositions of Polynomials and their Application to Robustly Learning Low-degree PTFs

no code implementations31 Mar 2024 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Sihan Liu, Nikos Zarifis

We study the efficient learnability of low-degree polynomial threshold functions (PTFs) in the presence of a constant fraction of adversarial corruptions.

PAC learning

Agnostically Learning Multi-index Models with Queries

no code implementations27 Dec 2023 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

In contrast, algorithms that rely only on random examples inherently require $d^{\mathrm{poly}(1/\epsilon)}$ samples and runtime, even for the basic problem of agnostically learning a single ReLU or a halfspace.

Dimensionality Reduction

Self-Directed Linear Classification

no code implementations6 Aug 2023 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

In contrast, under a worst- or random-ordering, the number of mistakes must be at least $\Omega(d \log n)$, even when the points are drawn uniformly from the unit sphere and the learner only needs to predict the labels for $1\%$ of them.

Classification

SLaM: Student-Label Mixing for Distillation with Unlabeled Examples

no code implementations NeurIPS 2023 Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee

Knowledge distillation with unlabeled examples is a powerful training paradigm for generating compact and lightweight student models in applications where the amount of labeled data is limited but one has access to a large pool of unlabeled data.

Knowledge Distillation

Weighted Distillation with Unlabeled Examples

no code implementations13 Oct 2022 Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee

Our method is hyper-parameter free, data-agnostic, and simple to implement.

Learning a Single Neuron with Adversarial Label Noise via Gradient Descent

no code implementations17 Jun 2022 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

For the ReLU activation, we give an efficient algorithm with sample complexity $\tilde{O}(d\, \mathrm{polylog}(1/\epsilon))$.
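To make the gradient-descent approach concrete, here is a minimal one-dimensional sketch: plain gradient descent on the squared loss of a single ReLU neuron. This is an illustration only, not the paper's algorithm (which works in $d$ dimensions under adversarial label noise); the function names are invented.

```python
def relu(z):
    return max(z, 0.0)

def gd_step(w, data, lr):
    """One gradient-descent step on the empirical squared loss
    (1/n) * sum (relu(w * x) - y)^2 over scalar examples (x, y)."""
    n = len(data)
    grad = sum(2.0 * (relu(w * x) - y) * (x if w * x > 0 else 0.0)
               for x, y in data) / n
    return w - lr * grad
```

On clean data generated by a positive target weight, repeated steps contract toward that weight; the hard part addressed by the paper is making this robust to adversarially corrupted labels.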

Efficient Algorithms for Learning from Coarse Labels

no code implementations22 Aug 2021 Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos

Our main algorithmic result is that essentially any problem learnable from fine grained labels can also be learned efficiently when the coarse data are sufficiently informative.
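As a toy illustration of the setting, coarse labels can be thought of as cells of a partition over the fine-grained label set: each observation reveals only which cell the true label falls in. The partition and names below are invented for illustration, not taken from the paper.

```python
# Hypothetical partition: a coarse label reveals only the cell
# containing the underlying fine-grained label.
PARTITION = {"animal": {"cat", "dog"}, "vehicle": {"car", "bus"}}

def coarsen(fine_label):
    """Map a fine-grained label to the coarse cell that contains it."""
    return next(c for c, cell in PARTITION.items() if fine_label in cell)
```

Intuitively, the coarse data are "sufficiently informative" when the partition is fine enough to distinguish the labels that matter for the learning problem.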

Learning General Halfspaces with General Massart Noise under the Gaussian Distribution

no code implementations19 Aug 2021 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the general problem and establish the following: For $\eta <1/2$, we give a learning algorithm for general halfspaces with sample and computational complexity $d^{O_{\eta}(\log(1/\gamma))}\mathrm{poly}(1/\epsilon)$, where $\gamma =\max\{\epsilon, \min\{\mathbf{Pr}[f(\mathbf{x}) = 1], \mathbf{Pr}[f(\mathbf{x}) = -1]\} \}$ is the bias of the target halfspace $f$.

PAC learning

Convergence and Sample Complexity of SGD in GANs

no code implementations1 Dec 2020 Vasilis Kontonis, Sihan Liu, Christos Tzamos

Our main result is that training the Generator together with a Discriminator according to the Stochastic Gradient Descent-Ascent iteration proposed by Goodfellow et al. yields a Generator distribution that approaches the target distribution of $f_*$.

Bilevel Optimization
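A bare-bones scalar sketch of one simultaneous Gradient Descent-Ascent step on a min-max objective may help fix ideas; `gda_step` and the gradient callbacks are illustrative, not the paper's setup.

```python
def gda_step(g, d, grad_g, grad_d, lr=0.01):
    """One simultaneous GDA step: the generator parameter g descends
    on the objective while the discriminator parameter d ascends."""
    return g - lr * grad_g(g, d), d + lr * grad_d(g, d)
```

For example, on the bilinear objective L(g, d) = g * d, the two gradients are simply d and g, and a step moves g down against d and d up against g.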

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks

no code implementations22 Jun 2020 Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Nikos Zarifis

For the case of positive coefficients, we give the first polynomial-time algorithm for this learning problem for $k$ up to $\tilde{O}(\sqrt{\log d})$.

Open-Ended Question Answering, PAC learning

Learning Halfspaces with Tsybakov Noise

no code implementations11 Jun 2020 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

In the Tsybakov noise model, each label is independently flipped with some probability which is controlled by an adversary.

PAC learning
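A toy simulation of the flipping process described above: each label is flipped independently with its own probability. In the actual Tsybakov model these probabilities are chosen adversarially subject to a tail condition; here they are simply passed in, and the function name is illustrative.

```python
import random

def flip_labels(labels, etas):
    """Flip each +/-1 label independently, where etas[i] is the
    flip probability assigned to example i."""
    return [-y if random.random() < e else y for y, e in zip(labels, etas)]
```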

Non-Convex SGD Learns Halfspaces with Adversarial Label Noise

no code implementations NeurIPS 2020 Ilias Diakonikolas, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis

We study the problem of agnostically learning homogeneous halfspaces in the distribution-specific PAC model.

Efficient Truncated Statistics with Unknown Truncation

no code implementations2 Aug 2019 Vasilis Kontonis, Christos Tzamos, Manolis Zampetakis

Our main result is a computationally and sample efficient algorithm for estimating the parameters of the Gaussian under arbitrary unknown truncation sets whose performance decays with a natural measure of complexity of the set, namely its Gaussian surface area.

Learning Powers of Poisson Binomial Distributions

no code implementations18 Jul 2017 Dimitris Fotakis, Vasilis Kontonis, Piotr Krysta, Paul Spirakis

The $k$'th power of this distribution, for $k$ in a range $[m]$, is the distribution of $P_k = \sum_{i=1}^n X_i^{(k)}$, where each Bernoulli random variable $X_i^{(k)}$ has $\mathbb{E}[X_i^{(k)}] = (p_i)^k$.
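The $k$'th power distribution can be sampled directly from its definition, since each $X_i^{(k)}$ is a Bernoulli with mean $(p_i)^k$; a short sketch (`sample_pbd_power` is an illustrative name, not from the paper):

```python
import random

def sample_pbd_power(ps, k, trials=1):
    """Draw samples of P_k = sum_i X_i^(k), where X_i^(k) is
    Bernoulli with mean ps[i] ** k."""
    return [sum(1 for p in ps if random.random() < p ** k)
            for _ in range(trials)]
```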
