no code implementations • 25 Sep 2023 • Periklis Theodoropoulos, Konstantinos E. Nikolakakis, Dionysis Kalogerias
Federated Learning (FL) is a decentralized machine learning framework that enables collaborative model training while respecting data privacy.
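As an illustrative sketch of the FL setting described in the abstract (not this paper's algorithm), below is a minimal FedAvg-style training round in NumPy: clients run a few gradient steps on their private data and a server averages the resulting weights. All names (`local_update`, `fedavg_round`, client counts, step sizes) are hypothetical.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """Run a few local gradient steps on a client's private data
    (least-squares loss, purely for illustration)."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One FedAvg round: each client trains locally on data that never
    leaves the client; the server averages the updated weights,
    weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
clients = []
for _ in range(4):  # four clients, each holding a private shard
    X = rng.normal(size=(50, 5))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(5)
for _ in range(20):
    w = fedavg_round(w, clients)
print("distance to w_true:", np.linalg.norm(w - w_true))
```

Only model weights cross the network in this sketch, which is the sense in which the abstract's "respecting data privacy" is usually operationalized.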
no code implementations • 28 May 2023 • Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas
Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks.
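As a hedged sketch of one common data-pruning heuristic in this family (not necessarily the method studied in the paper), the following scores examples by their loss under a cheaply trained proxy model and keeps the hardest fraction; `prune_by_proxy_loss` and its parameters are illustrative names.

```python
import numpy as np

def prune_by_proxy_loss(X, y, keep_frac=0.3, proxy_steps=10, lr=0.1):
    """Keep the `keep_frac` highest-loss examples under a cheaply
    trained proxy (logistic regression), one common pruning heuristic."""
    w = np.zeros(X.shape[1])
    for _ in range(proxy_steps):                 # lightly train the proxy
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    p = 1.0 / (1.0 + np.exp(-X @ w))
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    k = int(keep_frac * len(y))
    keep = np.argsort(losses)[-k:]               # the hardest examples
    return X[keep], y[keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(float)
X_small, y_small = prune_by_proxy_loss(X, y)
print(X_small.shape)   # (300, 10): a 30% subset to train on
```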
no code implementations • 3 May 2023 • Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias
We establish matching upper and lower generalization error bounds for mini-batch Gradient Descent (GD) training with either deterministic or stochastic, data-independent, but otherwise arbitrary batch selection rules.
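To make the setting concrete, here is a minimal sketch of mini-batch GD where the batch selection rule is data-independent, i.e., fixed before the data are seen; it contrasts a deterministic cyclic schedule with a stochastic rule. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def minibatch_gd(X, y, batches, lr=0.05, epochs=20):
    """Mini-batch GD where `batches` is a data-independent selection
    rule: a fixed list of index arrays decided before training."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for idx in batches:
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

rng = np.random.default_rng(2)
n, d, m = 120, 8, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.05 * rng.normal(size=n)

# Deterministic rule: fixed cyclic batches of size m.
cyclic = [np.arange(i, i + m) for i in range(0, n, m)]
# Stochastic but data-independent rule: batches drawn before seeing (X, y).
random_rule = [rng.choice(n, size=m, replace=False) for _ in range(n // m)]

for name, rule in [("cyclic", cyclic), ("random", random_rule)]:
    w = minibatch_gd(X, y, rule)
    print(name, "error:", np.linalg.norm(w - w_true))
```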
no code implementations • 26 Apr 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias
For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions.
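For intuition about the regime the bound covers, a minimal sketch of full-batch GD run until the gradient norm is small, i.e., until the iterate is close to a stationary point of a nonconvex smooth loss; the loss and stopping rule here are illustrative choices, not the paper's.

```python
import numpy as np

def full_batch_gd(grad_fn, w0, lr=0.1, tol=1e-5, max_iter=10_000):
    """Plain full-batch GD, terminated once the gradient is small,
    i.e., once we are near a stationary point."""
    w = w0
    for t in range(max_iter):
        g = grad_fn(w)
        if np.linalg.norm(g) < tol:
            break
        w = w - lr * g
    return w, t

# A simple nonconvex smooth loss: f(w) = sum_i w_i^2 / (1 + w_i^2).
grad = lambda w: 2 * w / (1 + w**2) ** 2
w_star, iters = full_batch_gd(grad, np.array([2.0, -1.5]))
print(iters, w_star)
```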
no code implementations • 14 Feb 2022 • Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi
These bounds coincide with those for SGD and, rather surprisingly, are independent of $d$, $K$, and the batch size $m$, under an appropriately chosen, slightly decreased learning rate.
no code implementations • 20 Sep 2019 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate
Specifically, we show that the finite sample complexity of the Chow-Liu algorithm for ensuring exact structure recovery from noisy data is inversely proportional to the square of the information threshold (provided it is positive), and scales nearly logarithmically with the number of nodes for a given probability of failure.
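For reference, a self-contained sketch of the classical Chow-Liu algorithm the abstract analyzes: compute empirical pairwise mutual information and take a maximum-weight spanning tree (Kruskal's algorithm over edges sorted by decreasing MI). Helper names and the toy data are illustrative.

```python
import numpy as np
from itertools import combinations

def empirical_mi(x, y):
    """Empirical mutual information between two binary sequences."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(samples):
    """Chow-Liu: maximum-weight spanning tree under pairwise empirical
    mutual information, via Kruskal with union-find."""
    _, p = samples.shape
    edges = sorted(
        ((empirical_mi(samples[:, i], samples[:, j]), i, j)
         for i, j in combinations(range(p), 2)),
        reverse=True)
    parent = list(range(p))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # adding (i, j) keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy Markov chain X0 - X1 - X2, each child a noisy copy of its parent.
rng = np.random.default_rng(3)
flip = lambda x: x ^ (rng.random(x.shape) < 0.1)
x0 = rng.integers(0, 2, size=5000)
x1, x2 = flip(x0), None
x2 = flip(x1)
print(chow_liu_tree(np.stack([x0, x1, x2], axis=1)))  # expect (0,1), (1,2)
```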
no code implementations • 11 Dec 2018 • Konstantinos E. Nikolakakis, Dionysios S. Kalogerias, Anand D. Sarwate
In the absence of noise, predictive learning on Ising models was recently studied by Bresler and Karzand (2020); this paper quantifies how noise in the hidden model impacts the tasks of structure recovery and marginal distribution estimation by proving upper and lower bounds on the sample complexity.
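To illustrate why noise makes these tasks harder, here is a small numerical sketch, assuming the noise takes the form of i.i.d. sign flips on the observed spins (a binary symmetric channel); the chain structure, correlation, and flip probability are illustrative. Each flip channel attenuates pairwise correlations by a factor $(1-2q)$ per endpoint, which shrinks the gap that structure recovery relies on.

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho, q = 20000, 0.8, 0.2   # samples, edge correlation, flip probability

# Hidden chain Ising model X0 - X1 - X2 via ancestral sampling:
# each spin copies its parent with probability (1 + rho) / 2.
x0 = rng.choice([-1, 1], size=n)
copy = lambda x: np.where(rng.random(n) < (1 + rho) / 2, x, -x)
x1 = copy(x0)
x2 = copy(x1)

# Observed spins pass through an i.i.d. sign-flip channel with prob q.
noisy = lambda x: np.where(rng.random(n) < q, -x, x)
y0, y1, y2 = noisy(x0), noisy(x1), noisy(x2)

print("hidden corr(X0,X1):", np.mean(x0 * x1))   # ~ rho
print("noisy  corr(Y0,Y1):", np.mean(y0 * y1))   # ~ (1-2q)^2 * rho
print("noisy  corr(Y0,Y2):", np.mean(y0 * y2))   # ~ (1-2q)^2 * rho^2
```

With $\rho = 0.8$ and $q = 0.2$, the observed edge correlation drops from about $0.8$ to about $0.36^2/0.8^{-1} \approx 0.29$, so distinguishing edges from non-edges requires correspondingly more samples, which is the effect the upper and lower bounds quantify.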