Search Results for author: Konstantinos E. Nikolakakis

Found 7 papers, 0 papers with code

Federated Learning Under Restricted User Availability

no code implementations • 25 Sep 2023 • Periklis Theodoropoulos, Konstantinos E. Nikolakakis, Dionysis Kalogerias

Federated Learning (FL) is a decentralized machine learning framework that enables collaborative model training while respecting data privacy.

Federated Learning
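As a rough sketch of the setting rather than the paper's method, the loop below runs FedAvg-style rounds in which only an availability-restricted subset of clients participates; the client objectives, availability probabilities, and all hyperparameters are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rounds, local_lr = 10, 5, 50, 0.1

# Hypothetical per-client data: each client holds a local least-squares problem.
A = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
b = [a @ rng.normal(size=dim) + 0.1 * rng.normal(size=20) for a in A]

# Restricted availability: a client shows up in a round only with some probability.
avail_probs = rng.uniform(0.1, 0.9, size=n_clients)

w = np.zeros(dim)  # global model
for _ in range(rounds):
    active = np.where(rng.random(n_clients) < avail_probs)[0]
    if active.size == 0:
        continue  # no participants this round
    updates = []
    for i in active:
        w_i = w.copy()
        for _ in range(5):  # a few local gradient steps on client i's data
            grad = A[i].T @ (A[i] @ w_i - b[i]) / len(b[i])
            w_i -= local_lr * grad
        updates.append(w_i)
    w = np.mean(updates, axis=0)  # FedAvg-style aggregation over available clients
```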

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning

no code implementations • 28 May 2023 • Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas

Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks.

Data Compression
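A minimal sketch of the repeated-random-sampling idea suggested by the title, contrasted with a fixed coreset, under the assumption that a fresh uniform subset is drawn every epoch; the sizes and the training routine below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, budget, epochs = 50_000, 5_000, 10  # hypothetical dataset size, per-epoch budget, epochs

def train_one_epoch(indices):
    """Placeholder for a real training loop over the selected examples."""
    pass

# Fixed coreset baseline: pick one subset once and reuse it every epoch.
fixed_subset = rng.choice(n_train, size=budget, replace=False)

# Repeated random sampling: draw a fresh uniform subset at every epoch,
# so the model eventually sees much more of the data at the same per-epoch cost.
for epoch in range(epochs):
    subset = rng.choice(n_train, size=budget, replace=False)
    train_one_epoch(subset)
```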

Select without Fear: Almost All Mini-Batch Schedules Generalize Optimally

no code implementations • 3 May 2023 • Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias

We establish matching upper and lower generalization error bounds for mini-batch Gradient Descent (GD) training with either deterministic or stochastic, data-independent, but otherwise arbitrary batch selection rules.
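A minimal sketch of this setting: the batch schedule is any sequence of index sets fixed independently of the data, and training simply follows it; the least-squares objective and all sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, T = 200, 10, 0.05, 100  # hypothetical sample count, dimension, step size, iterations

X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# A data-independent batch schedule: any sequence of index sets chosen before seeing
# the data. Here, a deterministic round-robin over contiguous blocks; shuffled or
# random-with-replacement schedules fit the same template.
batch_size = 20
schedule = [np.arange(t * batch_size, (t + 1) * batch_size) % n for t in range(T)]

w = np.zeros(d)
for batch in schedule:
    grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)  # least-squares gradient on the batch
    w -= lr * grad
```

Full-batch GD (as in the next entry) is the special case in which every set in the schedule is the entire index set.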

Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD

no code implementations • 26 Apr 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias

For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions.

Black-Box Generalization: Stability of Zeroth-Order Learning

no code implementations • 14 Feb 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi

These bounds coincide with those for SGD, and rather surprisingly are independent of $d$, $K$, and the batch size $m$, under an appropriate choice of a slightly decreased learning rate.

Generalization Bounds
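For intuition about what "zeroth-order" means here, the sketch below uses a standard two-point Gaussian-smoothing gradient estimate built only from loss evaluations; whether this matches the exact estimator analyzed in the paper is an assumption, and the objective is a toy placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr, mu, T = 10, 0.05, 1e-3, 200  # dimension, learning rate, smoothing radius, steps

def loss(w):
    """Hypothetical black-box objective: only function values are observable."""
    return 0.5 * np.sum((w - 1.0) ** 2)

w = np.zeros(d)
for _ in range(T):
    u = rng.normal(size=d)                                        # random Gaussian direction
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu) * u      # two-point gradient estimate
    w -= lr * g                                                   # descend using function values only
```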

Optimal Rates for Learning Hidden Tree Structures

no code implementations • 20 Sep 2019 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate

Specifically, we show that the finite sample complexity of the Chow-Liu algorithm for ensuring exact structure recovery from noisy data is inversely proportional to the information threshold squared (provided it is positive), and scales almost logarithmically with the number of nodes over a given probability of failure.
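A compact sketch of the Chow-Liu estimator the bound refers to, specialized (as is common for zero-field binary models) to a maximum-weight spanning tree over absolute empirical correlations; the input data here is a random placeholder rather than samples from an actual hidden tree model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy samples: rows are i.i.d. observations of p binary (+/-1) variables.
p, n = 8, 2_000
samples = rng.choice([-1, 1], size=(n, p))  # placeholder data

# For symmetric +/-1 variables (zero external field), mutual information grows with
# |correlation|, so a maximum-weight spanning tree on |corr| gives the Chow-Liu tree.
corr = np.abs(samples.T @ samples) / n

# Kruskal's maximum spanning tree with a tiny union-find.
parent = list(range(p))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

edges = sorted(((corr[i, j], i, j) for i in range(p) for j in range(i + 1, p)), reverse=True)
tree = []
for weight, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:            # adding this edge does not create a cycle
        parent[ri] = rj
        tree.append((i, j))

print(tree)  # estimated tree structure (p - 1 edges)
```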

Predictive Learning on Hidden Tree-Structured Ising Models

no code implementations • 11 Dec 2018 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate

In the absence of noise, predictive learning on Ising models was recently studied by Bresler and Karzand (2020); this paper quantifies how noise in the hidden model impacts the tasks of structure recovery and marginal distribution estimation by proving upper and lower bounds on the sample complexity.
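A small illustration of why hidden-model noise raises the sample complexity, assuming the standard model in which each observed spin is flipped independently with probability q: pairwise correlations then shrink by a factor of (1 - 2q)^2, which the simulation below checks empirically on a single edge with hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, rho = 100_000, 0.1, 0.8  # samples, flip probability, true edge correlation

# Simulate one edge (X1, X2) of a zero-field Ising tree with E[X1 * X2] = rho.
x1 = rng.choice([-1, 1], size=n)
x2 = np.where(rng.random(n) < (1 + rho) / 2, x1, -x1)   # P(X2 = X1) = (1 + rho) / 2

# Hidden-model noise: each observed spin is flipped independently with probability q.
flips = rng.random((n, 2)) < q
y1 = np.where(flips[:, 0], -x1, x1)
y2 = np.where(flips[:, 1], -x2, x2)

print(np.mean(x1 * x2))   # ~ rho
print(np.mean(y1 * y2))   # ~ (1 - 2*q)**2 * rho: attenuated correlations make the
                          # structure harder to recover from the same number of samples
```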
